Interface DedicatedResourcesOrBuilder

  • All Superinterfaces:
    com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder
  • All Known Implementing Classes:
    DedicatedResources, DedicatedResources.Builder

    public interface DedicatedResourcesOrBuilder
    extends com.google.protobuf.MessageOrBuilder
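
    Both DedicatedResources and DedicatedResources.Builder implement this
    interface, so read-only code can accept either one. A minimal sketch,
    assuming the Vertex AI v1beta1 Java client is on the classpath (the
    machine type and replica counts are illustrative placeholders, not values
    taken from this page):

    import com.google.cloud.aiplatform.v1beta1.DedicatedResources;
    import com.google.cloud.aiplatform.v1beta1.DedicatedResourcesOrBuilder;
    import com.google.cloud.aiplatform.v1beta1.MachineSpec;

    public class DedicatedResourcesOrBuilderExample {

      // Accepts either the immutable message or its builder, since both
      // implement DedicatedResourcesOrBuilder.
      static String describe(DedicatedResourcesOrBuilder resources) {
        return resources.getMachineSpec().getMachineType()
            + " x " + resources.getMinReplicaCount()
            + "-" + resources.getMaxReplicaCount();
      }

      public static void main(String[] args) {
        DedicatedResources.Builder builder =
            DedicatedResources.newBuilder()
                .setMachineSpec(MachineSpec.newBuilder().setMachineType("n1-standard-4"))
                .setMinReplicaCount(1)
                .setMaxReplicaCount(2);
        System.out.println(describe(builder));          // reads the builder
        System.out.println(describe(builder.build()));  // reads the built message
      }
    }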
    • Method Detail

      • hasMachineSpec

        boolean hasMachineSpec()
         Required. Immutable. The specification of a single machine used by the
         prediction.
         
        .google.cloud.aiplatform.v1beta1.MachineSpec machine_spec = 1 [(.google.api.field_behavior) = REQUIRED, (.google.api.field_behavior) = IMMUTABLE];
        Returns:
        Whether the machineSpec field is set.
      • getMachineSpec

        MachineSpec getMachineSpec()
         Required. Immutable. The specification of a single machine used by the
         prediction.
         
        .google.cloud.aiplatform.v1beta1.MachineSpec machine_spec = 1 [(.google.api.field_behavior) = REQUIRED, (.google.api.field_behavior) = IMMUTABLE];
        Returns:
        The machineSpec.
      • getMachineSpecOrBuilder

        MachineSpecOrBuilder getMachineSpecOrBuilder()
         Required. Immutable. The specification of a single machine used by the
         prediction.
         
        .google.cloud.aiplatform.v1beta1.MachineSpec machine_spec = 1 [(.google.api.field_behavior) = REQUIRED, (.google.api.field_behavior) = IMMUTABLE];
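
          Because machine_spec is a message field, a reader can use
          hasMachineSpec() to distinguish "unset" from "set"; getMachineSpec()
          returns the default MachineSpec instance when the field is unset. A
          minimal sketch (the machine type string is an illustrative
          placeholder):

         import com.google.cloud.aiplatform.v1beta1.DedicatedResources;
         import com.google.cloud.aiplatform.v1beta1.DedicatedResourcesOrBuilder;
         import com.google.cloud.aiplatform.v1beta1.MachineSpec;

         public class MachineSpecAccessExample {
           public static void main(String[] args) {
             // hasMachineSpec() distinguishes "unset" from "set to defaults".
             DedicatedResourcesOrBuilder empty = DedicatedResources.getDefaultInstance();
             System.out.println(empty.hasMachineSpec());                  // false
             System.out.println(empty.getMachineSpec().getMachineType()); // "" (default instance)

             DedicatedResourcesOrBuilder populated =
                 DedicatedResources.newBuilder()
                     .setMachineSpec(MachineSpec.newBuilder().setMachineType("n1-standard-4"))
                     .build();
             System.out.println(populated.hasMachineSpec());                  // true
             System.out.println(populated.getMachineSpec().getMachineType()); // "n1-standard-4"
           }
         }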
      • getMinReplicaCount

        int getMinReplicaCount()
         Required. Immutable. The minimum number of machine replicas this
         DeployedModel will always be deployed on. This value must be greater than
         or equal to 1.
        
         If traffic against the DeployedModel increases, it may dynamically be
         deployed onto more replicas, and as traffic decreases, some of these extra
         replicas may be freed.
         
        int32 min_replica_count = 2 [(.google.api.field_behavior) = REQUIRED, (.google.api.field_behavior) = IMMUTABLE];
        Returns:
        The minReplicaCount.
      • getMaxReplicaCount

        int getMaxReplicaCount()
         Immutable. The maximum number of replicas this DeployedModel may be
         deployed on when the traffic against it increases. If the requested value
         is too large, the deployment will error, but if deployment succeeds then
         the ability to scale the model to that many replicas is guaranteed (barring
         service outages). If traffic against the DeployedModel increases beyond
         what its replicas at maximum may handle, a portion of the traffic will be
         dropped. If this value is not provided,
         [min_replica_count][google.cloud.aiplatform.v1beta1.DedicatedResources.min_replica_count]
         is used as the default value.
        
         The value of this field impacts the charge against Vertex CPU and GPU
         quotas. Specifically, you will be charged for (max_replica_count *
         number of cores in the selected machine type) and (max_replica_count *
         number of GPUs per replica in the selected machine type).
         
        int32 max_replica_count = 3 [(.google.api.field_behavior) = IMMUTABLE];
        Returns:
        The maxReplicaCount.
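
          As a worked illustration of the quota arithmetic above, the sketch
          below multiplies max_replica_count by a placeholder machine shape
          (8 vCPUs and 1 GPU per replica are assumptions for the example, not
          values taken from this page):

         import com.google.cloud.aiplatform.v1beta1.DedicatedResources;
         import com.google.cloud.aiplatform.v1beta1.MachineSpec;

         public class ReplicaQuotaExample {
           public static void main(String[] args) {
             DedicatedResources resources =
                 DedicatedResources.newBuilder()
                     .setMachineSpec(MachineSpec.newBuilder().setMachineType("n1-standard-8"))
                     .setMinReplicaCount(2)   // must be >= 1
                     .setMaxReplicaCount(5)   // defaults to min_replica_count when unset
                     .build();

             // Assumed machine shape for the example: 8 vCPUs and 1 GPU per replica.
             int coresPerReplica = 8;
             int gpusPerReplica = 1;

             // Quota is charged against the maximum, not the current, replica count.
             System.out.println("CPU quota: " + resources.getMaxReplicaCount() * coresPerReplica); // 40
             System.out.println("GPU quota: " + resources.getMaxReplicaCount() * gpusPerReplica);  // 5
           }
         }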
      • getAutoscalingMetricSpecsList

        List<AutoscalingMetricSpec> getAutoscalingMetricSpecsList()
          Immutable. The metric specifications that override a resource
          utilization metric's target value (CPU utilization, the accelerator's
          duty cycle, and so on; the target defaults to 60 if not set). At most
          one entry is allowed per metric.
         
          If
          [machine_spec.accelerator_count][google.cloud.aiplatform.v1beta1.MachineSpec.accelerator_count]
          is above 0, autoscaling is based on both the CPU utilization and the
          accelerator duty cycle metrics: it scales up when either metric
          exceeds its target value and scales down when both metrics are below
          their target values. The default target value is 60 for both metrics.
         
          If
          [machine_spec.accelerator_count][google.cloud.aiplatform.v1beta1.MachineSpec.accelerator_count]
          is 0, autoscaling is based on the CPU utilization metric only, with a
          default target value of 60 if not explicitly set.
        
         For example, in the case of Online Prediction, if you want to override
         target CPU utilization to 80, you should set
         [autoscaling_metric_specs.metric_name][google.cloud.aiplatform.v1beta1.AutoscalingMetricSpec.metric_name]
         to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and
         [autoscaling_metric_specs.target][google.cloud.aiplatform.v1beta1.AutoscalingMetricSpec.target]
         to `80`.
         
        repeated .google.cloud.aiplatform.v1beta1.AutoscalingMetricSpec autoscaling_metric_specs = 4 [(.google.api.field_behavior) = IMMUTABLE];
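
          The textual example above, expressed as a sketch (the metric name is
          the one quoted in this description; the machine type and replica
          counts are illustrative placeholders):

         import com.google.cloud.aiplatform.v1beta1.AutoscalingMetricSpec;
         import com.google.cloud.aiplatform.v1beta1.DedicatedResources;
         import com.google.cloud.aiplatform.v1beta1.MachineSpec;

         public class AutoscalingTargetExample {
           public static void main(String[] args) {
             // Override the CPU utilization target from the default 60 to 80.
             AutoscalingMetricSpec cpuTarget =
                 AutoscalingMetricSpec.newBuilder()
                     .setMetricName("aiplatform.googleapis.com/prediction/online/cpu/utilization")
                     .setTarget(80)
                     .build();

             DedicatedResources resources =
                 DedicatedResources.newBuilder()
                     .setMachineSpec(MachineSpec.newBuilder().setMachineType("n1-standard-4"))
                     .setMinReplicaCount(1)
                     .setMaxReplicaCount(3)
                     .addAutoscalingMetricSpecs(cpuTarget)
                     .build();

             // Repeated-field accessors declared by this interface:
             System.out.println(resources.getAutoscalingMetricSpecsCount());          // 1
             System.out.println(resources.getAutoscalingMetricSpecs(0).getTarget());  // 80
           }
         }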
      • getAutoscalingMetricSpecs

        AutoscalingMetricSpec getAutoscalingMetricSpecs​(int index)
          Immutable. The metric specifications that override a resource
          utilization metric's target value (CPU utilization, the accelerator's
          duty cycle, and so on; the target defaults to 60 if not set). At most
          one entry is allowed per metric.
         
          If
          [machine_spec.accelerator_count][google.cloud.aiplatform.v1beta1.MachineSpec.accelerator_count]
          is above 0, autoscaling is based on both the CPU utilization and the
          accelerator duty cycle metrics: it scales up when either metric
          exceeds its target value and scales down when both metrics are below
          their target values. The default target value is 60 for both metrics.
         
          If
          [machine_spec.accelerator_count][google.cloud.aiplatform.v1beta1.MachineSpec.accelerator_count]
          is 0, autoscaling is based on the CPU utilization metric only, with a
          default target value of 60 if not explicitly set.
        
         For example, in the case of Online Prediction, if you want to override
         target CPU utilization to 80, you should set
         [autoscaling_metric_specs.metric_name][google.cloud.aiplatform.v1beta1.AutoscalingMetricSpec.metric_name]
         to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and
         [autoscaling_metric_specs.target][google.cloud.aiplatform.v1beta1.AutoscalingMetricSpec.target]
         to `80`.
         
        repeated .google.cloud.aiplatform.v1beta1.AutoscalingMetricSpec autoscaling_metric_specs = 4 [(.google.api.field_behavior) = IMMUTABLE];
      • getAutoscalingMetricSpecsCount

        int getAutoscalingMetricSpecsCount()
          Immutable. The metric specifications that override a resource
          utilization metric's target value (CPU utilization, the accelerator's
          duty cycle, and so on; the target defaults to 60 if not set). At most
          one entry is allowed per metric.
         
          If
          [machine_spec.accelerator_count][google.cloud.aiplatform.v1beta1.MachineSpec.accelerator_count]
          is above 0, autoscaling is based on both the CPU utilization and the
          accelerator duty cycle metrics: it scales up when either metric
          exceeds its target value and scales down when both metrics are below
          their target values. The default target value is 60 for both metrics.
         
          If
          [machine_spec.accelerator_count][google.cloud.aiplatform.v1beta1.MachineSpec.accelerator_count]
          is 0, autoscaling is based on the CPU utilization metric only, with a
          default target value of 60 if not explicitly set.
        
         For example, in the case of Online Prediction, if you want to override
         target CPU utilization to 80, you should set
         [autoscaling_metric_specs.metric_name][google.cloud.aiplatform.v1beta1.AutoscalingMetricSpec.metric_name]
         to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and
         [autoscaling_metric_specs.target][google.cloud.aiplatform.v1beta1.AutoscalingMetricSpec.target]
         to `80`.
         
        repeated .google.cloud.aiplatform.v1beta1.AutoscalingMetricSpec autoscaling_metric_specs = 4 [(.google.api.field_behavior) = IMMUTABLE];
      • getAutoscalingMetricSpecsOrBuilderList

        List<? extends AutoscalingMetricSpecOrBuilder> getAutoscalingMetricSpecsOrBuilderList()
          Immutable. The metric specifications that override a resource
          utilization metric's target value (CPU utilization, the accelerator's
          duty cycle, and so on; the target defaults to 60 if not set). At most
          one entry is allowed per metric.
         
          If
          [machine_spec.accelerator_count][google.cloud.aiplatform.v1beta1.MachineSpec.accelerator_count]
          is above 0, autoscaling is based on both the CPU utilization and the
          accelerator duty cycle metrics: it scales up when either metric
          exceeds its target value and scales down when both metrics are below
          their target values. The default target value is 60 for both metrics.
         
          If
          [machine_spec.accelerator_count][google.cloud.aiplatform.v1beta1.MachineSpec.accelerator_count]
          is 0, autoscaling is based on the CPU utilization metric only, with a
          default target value of 60 if not explicitly set.
        
         For example, in the case of Online Prediction, if you want to override
         target CPU utilization to 80, you should set
         [autoscaling_metric_specs.metric_name][google.cloud.aiplatform.v1beta1.AutoscalingMetricSpec.metric_name]
         to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and
         [autoscaling_metric_specs.target][google.cloud.aiplatform.v1beta1.AutoscalingMetricSpec.target]
         to `80`.
         
        repeated .google.cloud.aiplatform.v1beta1.AutoscalingMetricSpec autoscaling_metric_specs = 4 [(.google.api.field_behavior) = IMMUTABLE];
      • getAutoscalingMetricSpecsOrBuilder

        AutoscalingMetricSpecOrBuilder getAutoscalingMetricSpecsOrBuilder​(int index)
          Immutable. The metric specifications that override a resource
          utilization metric's target value (CPU utilization, the accelerator's
          duty cycle, and so on; the target defaults to 60 if not set). At most
          one entry is allowed per metric.
         
          If
          [machine_spec.accelerator_count][google.cloud.aiplatform.v1beta1.MachineSpec.accelerator_count]
          is above 0, autoscaling is based on both the CPU utilization and the
          accelerator duty cycle metrics: it scales up when either metric
          exceeds its target value and scales down when both metrics are below
          their target values. The default target value is 60 for both metrics.
         
          If
          [machine_spec.accelerator_count][google.cloud.aiplatform.v1beta1.MachineSpec.accelerator_count]
          is 0, autoscaling is based on the CPU utilization metric only, with a
          default target value of 60 if not explicitly set.
        
         For example, in the case of Online Prediction, if you want to override
         target CPU utilization to 80, you should set
         [autoscaling_metric_specs.metric_name][google.cloud.aiplatform.v1beta1.AutoscalingMetricSpec.metric_name]
         to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and
         [autoscaling_metric_specs.target][google.cloud.aiplatform.v1beta1.AutoscalingMetricSpec.target]
         to `80`.
         
        repeated .google.cloud.aiplatform.v1beta1.AutoscalingMetricSpec autoscaling_metric_specs = 4 [(.google.api.field_behavior) = IMMUTABLE];
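
          For completeness, a sketch of reading the repeated field through the
          OrBuilder-view accessors, which also work while the containing
          DedicatedResources.Builder is still being populated (the metric name
          and target are the illustrative values used above):

         import com.google.cloud.aiplatform.v1beta1.AutoscalingMetricSpec;
         import com.google.cloud.aiplatform.v1beta1.AutoscalingMetricSpecOrBuilder;
         import com.google.cloud.aiplatform.v1beta1.DedicatedResources;
         import com.google.cloud.aiplatform.v1beta1.DedicatedResourcesOrBuilder;

         public class AutoscalingSpecsReadExample {
           public static void main(String[] args) {
             DedicatedResourcesOrBuilder resources =
                 DedicatedResources.newBuilder()
                     .addAutoscalingMetricSpecs(
                         AutoscalingMetricSpec.newBuilder()
                             .setMetricName(
                                 "aiplatform.googleapis.com/prediction/online/cpu/utilization")
                             .setTarget(80));

             // A read-only view of each element, whether it is currently stored
             // as a message or as a builder.
             for (AutoscalingMetricSpecOrBuilder spec :
                 resources.getAutoscalingMetricSpecsOrBuilderList()) {
               System.out.println(spec.getMetricName() + " -> " + spec.getTarget());
             }

             // Index-based access is equivalent:
             System.out.println(resources.getAutoscalingMetricSpecsOrBuilder(0).getTarget()); // 80
           }
         }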