Package com.google.cloud.aiplatform.v1
Interface ModelOrBuilder
-
- All Superinterfaces:
com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder
- All Known Implementing Classes:
Model, Model.Builder
public interface ModelOrBuilder extends com.google.protobuf.MessageOrBuilder
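Because both Model and Model.Builder implement this interface, read-only helpers can accept a ModelOrBuilder and work with either view. A minimal sketch using only accessors documented on this page (the Model built in main stands in for one normally returned by the Vertex AI client):

    import com.google.cloud.aiplatform.v1.Model;
    import com.google.cloud.aiplatform.v1.ModelOrBuilder;

    public class ModelSummaryExample {
      // Accepts either a Model or a Model.Builder, since both implement ModelOrBuilder.
      static void printSummary(ModelOrBuilder model) {
        System.out.println("name: " + model.getName());
        System.out.println("displayName: " + model.getDisplayName());
        System.out.println("versionId: " + model.getVersionId());
        // Singular message fields expose a has*() check before the getter.
        if (model.hasCreateTime()) {
          System.out.println("createTime: " + model.getCreateTime());
        }
      }

      public static void main(String[] args) {
        // A locally built Model; in practice the message usually comes from ModelServiceClient.
        Model model = Model.newBuilder().setDisplayName("my-model").build();
        printSummary(model);
        printSummary(model.toBuilder());  // the same helper works on the Builder view
      }
    }

The later sketches on this page assume these imports and a ModelOrBuilder (or Model) named `model` obtained the same way.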
-
-
Method Summary
boolean containsLabels(String key) - The labels with user-defined metadata to organize your Models.
String getArtifactUri() - Immutable.
com.google.protobuf.ByteString getArtifactUriBytes() - Immutable.
ModelContainerSpec getContainerSpec() - Input only.
ModelContainerSpecOrBuilder getContainerSpecOrBuilder() - Input only.
com.google.protobuf.Timestamp getCreateTime() - Output only.
com.google.protobuf.TimestampOrBuilder getCreateTimeOrBuilder() - Output only.
DeployedModelRef getDeployedModels(int index) - Output only.
int getDeployedModelsCount() - Output only.
List<DeployedModelRef> getDeployedModelsList() - Output only.
DeployedModelRefOrBuilder getDeployedModelsOrBuilder(int index) - Output only.
List<? extends DeployedModelRefOrBuilder> getDeployedModelsOrBuilderList() - Output only.
String getDescription() - The description of the Model.
com.google.protobuf.ByteString getDescriptionBytes() - The description of the Model.
String getDisplayName() - Required.
com.google.protobuf.ByteString getDisplayNameBytes() - Required.
EncryptionSpec getEncryptionSpec() - Customer-managed encryption key spec for a Model.
EncryptionSpecOrBuilder getEncryptionSpecOrBuilder() - Customer-managed encryption key spec for a Model.
String getEtag() - Used to perform consistent read-modify-write updates.
com.google.protobuf.ByteString getEtagBytes() - Used to perform consistent read-modify-write updates.
ExplanationSpec getExplanationSpec() - The default explanation specification for this Model.
ExplanationSpecOrBuilder getExplanationSpecOrBuilder() - The default explanation specification for this Model.
Map<String,String> getLabels() - Deprecated. Use getLabelsMap() instead.
int getLabelsCount() - The labels with user-defined metadata to organize your Models.
Map<String,String> getLabelsMap() - The labels with user-defined metadata to organize your Models.
String getLabelsOrDefault(String key, String defaultValue) - The labels with user-defined metadata to organize your Models.
String getLabelsOrThrow(String key) - The labels with user-defined metadata to organize your Models.
com.google.protobuf.Value getMetadata() - Immutable.
String getMetadataArtifact() - Output only.
com.google.protobuf.ByteString getMetadataArtifactBytes() - Output only.
com.google.protobuf.ValueOrBuilder getMetadataOrBuilder() - Immutable.
String getMetadataSchemaUri() - Immutable.
com.google.protobuf.ByteString getMetadataSchemaUriBytes() - Immutable.
ModelSourceInfo getModelSourceInfo() - Output only.
ModelSourceInfoOrBuilder getModelSourceInfoOrBuilder() - Output only.
String getName() - The resource name of the Model.
com.google.protobuf.ByteString getNameBytes() - The resource name of the Model.
Model.OriginalModelInfo getOriginalModelInfo() - Output only.
Model.OriginalModelInfoOrBuilder getOriginalModelInfoOrBuilder() - Output only.
String getPipelineJob() - Optional.
com.google.protobuf.ByteString getPipelineJobBytes() - Optional.
PredictSchemata getPredictSchemata() - The schemata that describe the formats of the Model's predictions and explanations.
PredictSchemataOrBuilder getPredictSchemataOrBuilder() - The schemata that describe the formats of the Model's predictions and explanations.
Model.DeploymentResourcesType getSupportedDeploymentResourcesTypes(int index) - Output only.
int getSupportedDeploymentResourcesTypesCount() - Output only.
List<Model.DeploymentResourcesType> getSupportedDeploymentResourcesTypesList() - Output only.
int getSupportedDeploymentResourcesTypesValue(int index) - Output only.
List<Integer> getSupportedDeploymentResourcesTypesValueList() - Output only.
Model.ExportFormat getSupportedExportFormats(int index) - Output only.
int getSupportedExportFormatsCount() - Output only.
List<Model.ExportFormat> getSupportedExportFormatsList() - Output only.
Model.ExportFormatOrBuilder getSupportedExportFormatsOrBuilder(int index) - Output only.
List<? extends Model.ExportFormatOrBuilder> getSupportedExportFormatsOrBuilderList() - Output only.
String getSupportedInputStorageFormats(int index) - Output only.
com.google.protobuf.ByteString getSupportedInputStorageFormatsBytes(int index) - Output only.
int getSupportedInputStorageFormatsCount() - Output only.
List<String> getSupportedInputStorageFormatsList() - Output only.
String getSupportedOutputStorageFormats(int index) - Output only.
com.google.protobuf.ByteString getSupportedOutputStorageFormatsBytes(int index) - Output only.
int getSupportedOutputStorageFormatsCount() - Output only.
List<String> getSupportedOutputStorageFormatsList() - Output only.
String getTrainingPipeline() - Output only.
com.google.protobuf.ByteString getTrainingPipelineBytes() - Output only.
com.google.protobuf.Timestamp getUpdateTime() - Output only.
com.google.protobuf.TimestampOrBuilder getUpdateTimeOrBuilder() - Output only.
String getVersionAliases(int index) - User provided version aliases so that a model version can be referenced via alias.
com.google.protobuf.ByteString getVersionAliasesBytes(int index) - User provided version aliases so that a model version can be referenced via alias.
int getVersionAliasesCount() - User provided version aliases so that a model version can be referenced via alias.
List<String> getVersionAliasesList() - User provided version aliases so that a model version can be referenced via alias.
com.google.protobuf.Timestamp getVersionCreateTime() - Output only.
com.google.protobuf.TimestampOrBuilder getVersionCreateTimeOrBuilder() - Output only.
String getVersionDescription() - The description of this version.
com.google.protobuf.ByteString getVersionDescriptionBytes() - The description of this version.
String getVersionId() - Output only.
com.google.protobuf.ByteString getVersionIdBytes() - Output only.
com.google.protobuf.Timestamp getVersionUpdateTime() - Output only.
com.google.protobuf.TimestampOrBuilder getVersionUpdateTimeOrBuilder() - Output only.
boolean hasContainerSpec() - Input only.
boolean hasCreateTime() - Output only.
boolean hasEncryptionSpec() - Customer-managed encryption key spec for a Model.
boolean hasExplanationSpec() - The default explanation specification for this Model.
boolean hasMetadata() - Immutable.
boolean hasModelSourceInfo() - Output only.
boolean hasOriginalModelInfo() - Output only.
boolean hasPredictSchemata() - The schemata that describe the formats of the Model's predictions and explanations.
boolean hasUpdateTime() - Output only.
boolean hasVersionCreateTime() - Output only.
boolean hasVersionUpdateTime() - Output only.
-
Methods inherited from interface com.google.protobuf.MessageOrBuilder
findInitializationErrors, getAllFields, getDefaultInstanceForType, getDescriptorForType, getField, getInitializationErrorString, getOneofFieldDescriptor, getRepeatedField, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof
-
Method Detail
-
getName
String getName()
The resource name of the Model.
string name = 1;
- Returns:
- The name.
-
getNameBytes
com.google.protobuf.ByteString getNameBytes()
The resource name of the Model.
string name = 1;
- Returns:
- The bytes for name.
-
getVersionId
String getVersionId()
Output only. Immutable. The version ID of the model. A new version is committed when a new model version is uploaded or trained under an existing model id. It is an auto-incrementing decimal number in string representation.
string version_id = 28 [(.google.api.field_behavior) = IMMUTABLE, (.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- The versionId.
-
getVersionIdBytes
com.google.protobuf.ByteString getVersionIdBytes()
Output only. Immutable. The version ID of the model. A new version is committed when a new model version is uploaded or trained under an existing model id. It is an auto-incrementing decimal number in string representation.
string version_id = 28 [(.google.api.field_behavior) = IMMUTABLE, (.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- The bytes for versionId.
-
getVersionAliasesList
List<String> getVersionAliasesList()
User provided version aliases so that a model version can be referenced via alias (i.e. `projects/{project}/locations/{location}/models/{model_id}@{version_alias}`) instead of the auto-generated version ID (i.e. `projects/{project}/locations/{location}/models/{model_id}@{version_id}`). The format is [a-z][a-zA-Z0-9-]{0,126}[a-z0-9] to distinguish from version_id. A default version alias will be created for the first version of the model, and there must be exactly one default version alias for a model.
repeated string version_aliases = 29;
- Returns:
- A list containing the versionAliases.
-
getVersionAliasesCount
int getVersionAliasesCount()
User provided version aliases so that a model version can be referenced via alias (i.e. `projects/{project}/locations/{location}/models/{model_id}@{version_alias}`) instead of the auto-generated version ID (i.e. `projects/{project}/locations/{location}/models/{model_id}@{version_id}`). The format is [a-z][a-zA-Z0-9-]{0,126}[a-z0-9] to distinguish from version_id. A default version alias will be created for the first version of the model, and there must be exactly one default version alias for a model.
repeated string version_aliases = 29;
- Returns:
- The count of versionAliases.
-
getVersionAliases
String getVersionAliases(int index)
User provided version aliases so that a model version can be referenced via alias (i.e. `projects/{project}/locations/{location}/models/{model_id}@{version_alias}`) instead of the auto-generated version ID (i.e. `projects/{project}/locations/{location}/models/{model_id}@{version_id}`). The format is [a-z][a-zA-Z0-9-]{0,126}[a-z0-9] to distinguish from version_id. A default version alias will be created for the first version of the model, and there must be exactly one default version alias for a model.
repeated string version_aliases = 29;
- Parameters:
index - The index of the element to return.
- Returns:
- The versionAliases at the given index.
-
getVersionAliasesBytes
com.google.protobuf.ByteString getVersionAliasesBytes(int index)
User provided version aliases so that a model version can be referenced via alias (i.e. `projects/{project}/locations/{location}/models/{model_id}@{version_alias}`) instead of the auto-generated version ID (i.e. `projects/{project}/locations/{location}/models/{model_id}@{version_id}`). The format is [a-z][a-zA-Z0-9-]{0,126}[a-z0-9] to distinguish from version_id. A default version alias will be created for the first version of the model, and there must be exactly one default version alias for a model.
repeated string version_aliases = 29;
- Parameters:
index - The index of the value to return.
- Returns:
- The bytes of the versionAliases at the given index.
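As a quick illustration of the repeated-field accessors above, the sketch below lists the aliases of an already-fetched Model. The `model` variable and the literal alias name "default" are assumptions; Vertex AI normally assigns that alias to the first version.

    static void printAliases(ModelOrBuilder model) {
      // getVersionAliasesList() and the indexed getVersionAliases(i) are equivalent views.
      for (String alias : model.getVersionAliasesList()) {
        System.out.println("alias: " + alias);
      }
      if (model.getVersionAliasesCount() > 0) {
        boolean hasDefault = model.getVersionAliasesList().contains("default");
        System.out.println("default alias present: " + hasDefault);
      }
    }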
-
hasVersionCreateTime
boolean hasVersionCreateTime()
Output only. Timestamp when this version was created.
.google.protobuf.Timestamp version_create_time = 31 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- Whether the versionCreateTime field is set.
-
getVersionCreateTime
com.google.protobuf.Timestamp getVersionCreateTime()
Output only. Timestamp when this version was created.
.google.protobuf.Timestamp version_create_time = 31 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- The versionCreateTime.
-
getVersionCreateTimeOrBuilder
com.google.protobuf.TimestampOrBuilder getVersionCreateTimeOrBuilder()
Output only. Timestamp when this version was created.
.google.protobuf.Timestamp version_create_time = 31 [(.google.api.field_behavior) = OUTPUT_ONLY];
-
hasVersionUpdateTime
boolean hasVersionUpdateTime()
Output only. Timestamp when this version was most recently updated.
.google.protobuf.Timestamp version_update_time = 32 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- Whether the versionUpdateTime field is set.
-
getVersionUpdateTime
com.google.protobuf.Timestamp getVersionUpdateTime()
Output only. Timestamp when this version was most recently updated.
.google.protobuf.Timestamp version_update_time = 32 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- The versionUpdateTime.
-
getVersionUpdateTimeOrBuilder
com.google.protobuf.TimestampOrBuilder getVersionUpdateTimeOrBuilder()
Output only. Timestamp when this version was most recently updated.
.google.protobuf.Timestamp version_update_time = 32 [(.google.api.field_behavior) = OUTPUT_ONLY];
-
getDisplayName
String getDisplayName()
Required. The display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters.
string display_name = 2 [(.google.api.field_behavior) = REQUIRED];
- Returns:
- The displayName.
-
getDisplayNameBytes
com.google.protobuf.ByteString getDisplayNameBytes()
Required. The display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters.
string display_name = 2 [(.google.api.field_behavior) = REQUIRED];
- Returns:
- The bytes for displayName.
-
getDescription
String getDescription()
The description of the Model.
string description = 3;
- Returns:
- The description.
-
getDescriptionBytes
com.google.protobuf.ByteString getDescriptionBytes()
The description of the Model.
string description = 3;
- Returns:
- The bytes for description.
-
getVersionDescription
String getVersionDescription()
The description of this version.
string version_description = 30;
- Returns:
- The versionDescription.
-
getVersionDescriptionBytes
com.google.protobuf.ByteString getVersionDescriptionBytes()
The description of this version.
string version_description = 30;
- Returns:
- The bytes for versionDescription.
-
hasPredictSchemata
boolean hasPredictSchemata()
The schemata that describe formats of the Model's predictions and explanations as given and returned via [PredictionService.Predict][google.cloud.aiplatform.v1.PredictionService.Predict] and [PredictionService.Explain][google.cloud.aiplatform.v1.PredictionService.Explain].
.google.cloud.aiplatform.v1.PredictSchemata predict_schemata = 4;
- Returns:
- Whether the predictSchemata field is set.
-
getPredictSchemata
PredictSchemata getPredictSchemata()
The schemata that describe formats of the Model's predictions and explanations as given and returned via [PredictionService.Predict][google.cloud.aiplatform.v1.PredictionService.Predict] and [PredictionService.Explain][google.cloud.aiplatform.v1.PredictionService.Explain].
.google.cloud.aiplatform.v1.PredictSchemata predict_schemata = 4;
- Returns:
- The predictSchemata.
-
getPredictSchemataOrBuilder
PredictSchemataOrBuilder getPredictSchemataOrBuilder()
The schemata that describe formats of the Model's predictions and explanations as given and returned via [PredictionService.Predict][google.cloud.aiplatform.v1.PredictionService.Predict] and [PredictionService.Explain][google.cloud.aiplatform.v1.PredictionService.Explain].
.google.cloud.aiplatform.v1.PredictSchemata predict_schemata = 4;
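A hedged sketch of the has/get pattern for this message field. The getters on PredictSchemata are assumed to follow its proto field names (instance_schema_uri, prediction_schema_uri); import PredictSchemata from com.google.cloud.aiplatform.v1.

    static void printSchemata(ModelOrBuilder model) {
      if (!model.hasPredictSchemata()) {
        return;  // no schemata published for this Model
      }
      PredictSchemata schemata = model.getPredictSchemata();
      // Getter names assumed from the proto fields instance_schema_uri / prediction_schema_uri.
      System.out.println("instance schema: " + schemata.getInstanceSchemaUri());
      System.out.println("prediction schema: " + schemata.getPredictionSchemaUri());
    }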
-
getMetadataSchemaUri
String getMetadataSchemaUri()
Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Model that is specific to it. Unset if the Model does not have any additional information. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). AutoML Models always have this field populated by Vertex AI; if no additional metadata is needed, this field is set to an empty string. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has read access.
string metadata_schema_uri = 5 [(.google.api.field_behavior) = IMMUTABLE];
- Returns:
- The metadataSchemaUri.
-
getMetadataSchemaUriBytes
com.google.protobuf.ByteString getMetadataSchemaUriBytes()
Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Model that is specific to it. Unset if the Model does not have any additional information. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). AutoML Models always have this field populated by Vertex AI; if no additional metadata is needed, this field is set to an empty string. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has read access.
string metadata_schema_uri = 5 [(.google.api.field_behavior) = IMMUTABLE];
- Returns:
- The bytes for metadataSchemaUri.
-
hasMetadata
boolean hasMetadata()
Immutable. Additional information about the Model; the schema of the metadata can be found in [metadata_schema][google.cloud.aiplatform.v1.Model.metadata_schema_uri]. Unset if the Model does not have any additional information.
.google.protobuf.Value metadata = 6 [(.google.api.field_behavior) = IMMUTABLE];
- Returns:
- Whether the metadata field is set.
-
getMetadata
com.google.protobuf.Value getMetadata()
Immutable. Additional information about the Model; the schema of the metadata can be found in [metadata_schema][google.cloud.aiplatform.v1.Model.metadata_schema_uri]. Unset if the Model does not have any additional information.
.google.protobuf.Value metadata = 6 [(.google.api.field_behavior) = IMMUTABLE];
- Returns:
- The metadata.
-
getMetadataOrBuilder
com.google.protobuf.ValueOrBuilder getMetadataOrBuilder()
Immutable. Additional information about the Model; the schema of the metadata can be found in [metadata_schema][google.cloud.aiplatform.v1.Model.metadata_schema_uri]. Unset if the Model does not have any additional information.
.google.protobuf.Value metadata = 6 [(.google.api.field_behavior) = IMMUTABLE];
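Since metadata is a generic google.protobuf.Value, callers usually branch on its kind. A minimal sketch using the standard protobuf Value accessors; the `model` variable is assumed as in the earlier examples.

    static void printMetadata(ModelOrBuilder model) {
      if (!model.hasMetadata()) {
        return;
      }
      com.google.protobuf.Value metadata = model.getMetadata();
      switch (metadata.getKindCase()) {
        case STRUCT_VALUE:
          // Structured metadata: print each top-level key/value pair.
          metadata.getStructValue().getFieldsMap()
              .forEach((key, value) -> System.out.println(key + " = " + value));
          break;
        case STRING_VALUE:
          System.out.println(metadata.getStringValue());
          break;
        default:
          System.out.println("metadata kind: " + metadata.getKindCase());
      }
    }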
-
getSupportedExportFormatsList
List<Model.ExportFormat> getSupportedExportFormatsList()
Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.
repeated .google.cloud.aiplatform.v1.Model.ExportFormat supported_export_formats = 20 [(.google.api.field_behavior) = OUTPUT_ONLY];
-
getSupportedExportFormats
Model.ExportFormat getSupportedExportFormats(int index)
Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.
repeated .google.cloud.aiplatform.v1.Model.ExportFormat supported_export_formats = 20 [(.google.api.field_behavior) = OUTPUT_ONLY];
-
getSupportedExportFormatsCount
int getSupportedExportFormatsCount()
Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.
repeated .google.cloud.aiplatform.v1.Model.ExportFormat supported_export_formats = 20 [(.google.api.field_behavior) = OUTPUT_ONLY];
-
getSupportedExportFormatsOrBuilderList
List<? extends Model.ExportFormatOrBuilder> getSupportedExportFormatsOrBuilderList()
Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.
repeated .google.cloud.aiplatform.v1.Model.ExportFormat supported_export_formats = 20 [(.google.api.field_behavior) = OUTPUT_ONLY];
-
getSupportedExportFormatsOrBuilder
Model.ExportFormatOrBuilder getSupportedExportFormatsOrBuilder(int index)
Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.
repeated .google.cloud.aiplatform.v1.Model.ExportFormat supported_export_formats = 20 [(.google.api.field_behavior) = OUTPUT_ONLY];
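A short sketch over the repeated message accessors above. The getId() and getExportableContentsList() getters are assumptions based on the fields of the Model.ExportFormat message.

    static void printExportFormats(ModelOrBuilder model) {
      // An empty list means the Model is not available for export (see the description above).
      if (model.getSupportedExportFormatsCount() == 0) {
        System.out.println("model cannot be exported");
        return;
      }
      for (Model.ExportFormat format : model.getSupportedExportFormatsList()) {
        // getId() / getExportableContentsList() assumed from the ExportFormat proto fields.
        System.out.println(format.getId() + " supports " + format.getExportableContentsList());
      }
    }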
-
getTrainingPipeline
String getTrainingPipeline()
Output only. The resource name of the TrainingPipeline that uploaded this Model, if any.
string training_pipeline = 7 [(.google.api.field_behavior) = OUTPUT_ONLY, (.google.api.resource_reference) = { ... }];
- Returns:
- The trainingPipeline.
-
getTrainingPipelineBytes
com.google.protobuf.ByteString getTrainingPipelineBytes()
Output only. The resource name of the TrainingPipeline that uploaded this Model, if any.
string training_pipeline = 7 [(.google.api.field_behavior) = OUTPUT_ONLY, (.google.api.resource_reference) = { ... }];
- Returns:
- The bytes for trainingPipeline.
-
getPipelineJob
String getPipelineJob()
Optional. This field is populated if the model is produced by a pipeline job.
string pipeline_job = 47 [(.google.api.field_behavior) = OPTIONAL, (.google.api.resource_reference) = { ... }];
- Returns:
- The pipelineJob.
-
getPipelineJobBytes
com.google.protobuf.ByteString getPipelineJobBytes()
Optional. This field is populated if the model is produced by a pipeline job.
string pipeline_job = 47 [(.google.api.field_behavior) = OPTIONAL, (.google.api.resource_reference) = { ... }];
- Returns:
- The bytes for pipelineJob.
-
hasContainerSpec
boolean hasContainerSpec()
Input only. The specification of the container that is to be used when deploying this Model. The specification is ingested upon [ModelService.UploadModel][google.cloud.aiplatform.v1.ModelService.UploadModel], and all binaries it contains are copied and stored internally by Vertex AI. Not present for AutoML Models or Large Models.
.google.cloud.aiplatform.v1.ModelContainerSpec container_spec = 9 [(.google.api.field_behavior) = INPUT_ONLY];
- Returns:
- Whether the containerSpec field is set.
-
getContainerSpec
ModelContainerSpec getContainerSpec()
Input only. The specification of the container that is to be used when deploying this Model. The specification is ingested upon [ModelService.UploadModel][google.cloud.aiplatform.v1.ModelService.UploadModel], and all binaries it contains are copied and stored internally by Vertex AI. Not present for AutoML Models or Large Models.
.google.cloud.aiplatform.v1.ModelContainerSpec container_spec = 9 [(.google.api.field_behavior) = INPUT_ONLY];
- Returns:
- The containerSpec.
-
getContainerSpecOrBuilder
ModelContainerSpecOrBuilder getContainerSpecOrBuilder()
Input only. The specification of the container that is to be used when deploying this Model. The specification is ingested upon [ModelService.UploadModel][google.cloud.aiplatform.v1.ModelService.UploadModel], and all binaries it contains are copied and stored internally by Vertex AI. Not present for AutoML Models or Large Models.
.google.cloud.aiplatform.v1.ModelContainerSpec container_spec = 9 [(.google.api.field_behavior) = INPUT_ONLY];
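A small sketch of the has/get pair for this input-only field; getImageUri() is an assumption based on the image_uri field of ModelContainerSpec.

    static void printContainer(ModelOrBuilder model) {
      // container_spec is input only and is absent for AutoML Models and Large Models.
      if (!model.hasContainerSpec()) {
        System.out.println("no custom serving container");
        return;
      }
      // getImageUri() is assumed from the image_uri field of ModelContainerSpec.
      System.out.println("serving image: " + model.getContainerSpec().getImageUri());
    }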
-
getArtifactUri
String getArtifactUri()
Immutable. The path to the directory containing the Model artifact and any of its supporting files. Not present for AutoML Models or Large Models.
string artifact_uri = 26 [(.google.api.field_behavior) = IMMUTABLE];
- Returns:
- The artifactUri.
-
getArtifactUriBytes
com.google.protobuf.ByteString getArtifactUriBytes()
Immutable. The path to the directory containing the Model artifact and any of its supporting files. Not present for AutoML Models or Large Models.
string artifact_uri = 26 [(.google.api.field_behavior) = IMMUTABLE];
- Returns:
- The bytes for artifactUri.
-
getSupportedDeploymentResourcesTypesList
List<Model.DeploymentResourcesType> getSupportedDeploymentResourcesTypesList()
Output only. When this Model is deployed, its prediction resources are described by the `prediction_resources` field of the [Endpoint.deployed_models][google.cloud.aiplatform.v1.Endpoint.deployed_models] object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an [Endpoint][google.cloud.aiplatform.v1.Endpoint] and does not support online predictions ([PredictionService.Predict][google.cloud.aiplatform.v1.PredictionService.Predict] or [PredictionService.Explain][google.cloud.aiplatform.v1.PredictionService.Explain]). Such a Model can serve predictions by using a [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob], if it has at least one entry each in [supported_input_storage_formats][google.cloud.aiplatform.v1.Model.supported_input_storage_formats] and [supported_output_storage_formats][google.cloud.aiplatform.v1.Model.supported_output_storage_formats].
repeated .google.cloud.aiplatform.v1.Model.DeploymentResourcesType supported_deployment_resources_types = 10 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- A list containing the supportedDeploymentResourcesTypes.
-
getSupportedDeploymentResourcesTypesCount
int getSupportedDeploymentResourcesTypesCount()
Output only. When this Model is deployed, its prediction resources are described by the `prediction_resources` field of the [Endpoint.deployed_models][google.cloud.aiplatform.v1.Endpoint.deployed_models] object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an [Endpoint][google.cloud.aiplatform.v1.Endpoint] and does not support online predictions ([PredictionService.Predict][google.cloud.aiplatform.v1.PredictionService.Predict] or [PredictionService.Explain][google.cloud.aiplatform.v1.PredictionService.Explain]). Such a Model can serve predictions by using a [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob], if it has at least one entry each in [supported_input_storage_formats][google.cloud.aiplatform.v1.Model.supported_input_storage_formats] and [supported_output_storage_formats][google.cloud.aiplatform.v1.Model.supported_output_storage_formats].
repeated .google.cloud.aiplatform.v1.Model.DeploymentResourcesType supported_deployment_resources_types = 10 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- The count of supportedDeploymentResourcesTypes.
-
getSupportedDeploymentResourcesTypes
Model.DeploymentResourcesType getSupportedDeploymentResourcesTypes(int index)
Output only. When this Model is deployed, its prediction resources are described by the `prediction_resources` field of the [Endpoint.deployed_models][google.cloud.aiplatform.v1.Endpoint.deployed_models] object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an [Endpoint][google.cloud.aiplatform.v1.Endpoint] and does not support online predictions ([PredictionService.Predict][google.cloud.aiplatform.v1.PredictionService.Predict] or [PredictionService.Explain][google.cloud.aiplatform.v1.PredictionService.Explain]). Such a Model can serve predictions by using a [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob], if it has at least one entry each in [supported_input_storage_formats][google.cloud.aiplatform.v1.Model.supported_input_storage_formats] and [supported_output_storage_formats][google.cloud.aiplatform.v1.Model.supported_output_storage_formats].
repeated .google.cloud.aiplatform.v1.Model.DeploymentResourcesType supported_deployment_resources_types = 10 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Parameters:
index - The index of the element to return.
- Returns:
- The supportedDeploymentResourcesTypes at the given index.
-
getSupportedDeploymentResourcesTypesValueList
List<Integer> getSupportedDeploymentResourcesTypesValueList()
Output only. When this Model is deployed, its prediction resources are described by the `prediction_resources` field of the [Endpoint.deployed_models][google.cloud.aiplatform.v1.Endpoint.deployed_models] object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an [Endpoint][google.cloud.aiplatform.v1.Endpoint] and does not support online predictions ([PredictionService.Predict][google.cloud.aiplatform.v1.PredictionService.Predict] or [PredictionService.Explain][google.cloud.aiplatform.v1.PredictionService.Explain]). Such a Model can serve predictions by using a [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob], if it has at least one entry each in [supported_input_storage_formats][google.cloud.aiplatform.v1.Model.supported_input_storage_formats] and [supported_output_storage_formats][google.cloud.aiplatform.v1.Model.supported_output_storage_formats].
repeated .google.cloud.aiplatform.v1.Model.DeploymentResourcesType supported_deployment_resources_types = 10 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- A list containing the enum numeric values on the wire for supportedDeploymentResourcesTypes.
-
getSupportedDeploymentResourcesTypesValue
int getSupportedDeploymentResourcesTypesValue(int index)
Output only. When this Model is deployed, its prediction resources are described by the `prediction_resources` field of the [Endpoint.deployed_models][google.cloud.aiplatform.v1.Endpoint.deployed_models] object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an [Endpoint][google.cloud.aiplatform.v1.Endpoint] and does not support online predictions ([PredictionService.Predict][google.cloud.aiplatform.v1.PredictionService.Predict] or [PredictionService.Explain][google.cloud.aiplatform.v1.PredictionService.Explain]). Such a Model can serve predictions by using a [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob], if it has at least one entry each in [supported_input_storage_formats][google.cloud.aiplatform.v1.Model.supported_input_storage_formats] and [supported_output_storage_formats][google.cloud.aiplatform.v1.Model.supported_output_storage_formats].
repeated .google.cloud.aiplatform.v1.Model.DeploymentResourcesType supported_deployment_resources_types = 10 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Parameters:
index - The index of the value to return.
- Returns:
- The enum numeric value on the wire of supportedDeploymentResourcesTypes at the given index.
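Two hedged helpers built on the accessors above: the first simply mirrors the field description (an empty list means the Model cannot be deployed to an Endpoint), the second checks for a specific configuration type, where the enum constant name DEDICATED_RESOURCES is an assumption based on the Model.DeploymentResourcesType enum.

    // True if the Model can be deployed to an Endpoint at all (empty list = batch only, at best).
    static boolean supportsOnlinePrediction(ModelOrBuilder model) {
      return model.getSupportedDeploymentResourcesTypesCount() > 0;
    }

    // True if the Model accepts dedicated machine resources.
    static boolean supportsDedicatedResources(ModelOrBuilder model) {
      return model.getSupportedDeploymentResourcesTypesList()
          .contains(Model.DeploymentResourcesType.DEDICATED_RESOURCES);
    }

The ...Value accessors return the raw wire numbers instead of enum constants, which is useful when an unrecognized value would otherwise map to UNRECOGNIZED.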
-
getSupportedInputStorageFormatsList
List<String> getSupportedInputStorageFormatsList()
Output only. The formats this Model supports in [BatchPredictionJob.input_config][google.cloud.aiplatform.v1.BatchPredictionJob.input_config]. If [PredictSchemata.instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri] exists, the instances should be given as per that schema. The possible formats are:
* `jsonl` The JSON Lines format, where each instance is a single line. Uses [GcsSource][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig.gcs_source].
* `csv` The CSV format, where each instance is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses [GcsSource][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig.gcs_source].
* `tf-record` The TFRecord format, where each instance is a single record in tfrecord syntax. Uses [GcsSource][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig.gcs_source].
* `tf-record-gzip` Similar to `tf-record`, but the file is gzipped. Uses [GcsSource][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig.gcs_source].
* `bigquery` Each instance is a single row in BigQuery. Uses [BigQuerySource][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig.bigquery_source].
* `file-list` Each line of the file is the location of an instance to process, uses `gcs_source` field of the [InputConfig][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig] object.
If this Model doesn't support any of these formats it means it cannot be used with a [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob]. However, if it has [supported_deployment_resources_types][google.cloud.aiplatform.v1.Model.supported_deployment_resources_types], it could serve online predictions by using [PredictionService.Predict][google.cloud.aiplatform.v1.PredictionService.Predict] or [PredictionService.Explain][google.cloud.aiplatform.v1.PredictionService.Explain].
repeated string supported_input_storage_formats = 11 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- A list containing the supportedInputStorageFormats.
-
getSupportedInputStorageFormatsCount
int getSupportedInputStorageFormatsCount()
Output only. The formats this Model supports in [BatchPredictionJob.input_config][google.cloud.aiplatform.v1.BatchPredictionJob.input_config]. If [PredictSchemata.instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri] exists, the instances should be given as per that schema. The possible formats are:
* `jsonl` The JSON Lines format, where each instance is a single line. Uses [GcsSource][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig.gcs_source].
* `csv` The CSV format, where each instance is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses [GcsSource][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig.gcs_source].
* `tf-record` The TFRecord format, where each instance is a single record in tfrecord syntax. Uses [GcsSource][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig.gcs_source].
* `tf-record-gzip` Similar to `tf-record`, but the file is gzipped. Uses [GcsSource][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig.gcs_source].
* `bigquery` Each instance is a single row in BigQuery. Uses [BigQuerySource][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig.bigquery_source].
* `file-list` Each line of the file is the location of an instance to process, uses `gcs_source` field of the [InputConfig][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig] object.
If this Model doesn't support any of these formats it means it cannot be used with a [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob]. However, if it has [supported_deployment_resources_types][google.cloud.aiplatform.v1.Model.supported_deployment_resources_types], it could serve online predictions by using [PredictionService.Predict][google.cloud.aiplatform.v1.PredictionService.Predict] or [PredictionService.Explain][google.cloud.aiplatform.v1.PredictionService.Explain].
repeated string supported_input_storage_formats = 11 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- The count of supportedInputStorageFormats.
-
getSupportedInputStorageFormats
String getSupportedInputStorageFormats(int index)
Output only. The formats this Model supports in [BatchPredictionJob.input_config][google.cloud.aiplatform.v1.BatchPredictionJob.input_config]. If [PredictSchemata.instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri] exists, the instances should be given as per that schema. The possible formats are:
* `jsonl` The JSON Lines format, where each instance is a single line. Uses [GcsSource][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig.gcs_source].
* `csv` The CSV format, where each instance is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses [GcsSource][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig.gcs_source].
* `tf-record` The TFRecord format, where each instance is a single record in tfrecord syntax. Uses [GcsSource][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig.gcs_source].
* `tf-record-gzip` Similar to `tf-record`, but the file is gzipped. Uses [GcsSource][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig.gcs_source].
* `bigquery` Each instance is a single row in BigQuery. Uses [BigQuerySource][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig.bigquery_source].
* `file-list` Each line of the file is the location of an instance to process, uses `gcs_source` field of the [InputConfig][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig] object.
If this Model doesn't support any of these formats it means it cannot be used with a [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob]. However, if it has [supported_deployment_resources_types][google.cloud.aiplatform.v1.Model.supported_deployment_resources_types], it could serve online predictions by using [PredictionService.Predict][google.cloud.aiplatform.v1.PredictionService.Predict] or [PredictionService.Explain][google.cloud.aiplatform.v1.PredictionService.Explain].
repeated string supported_input_storage_formats = 11 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Parameters:
index - The index of the element to return.
- Returns:
- The supportedInputStorageFormats at the given index.
-
getSupportedInputStorageFormatsBytes
com.google.protobuf.ByteString getSupportedInputStorageFormatsBytes(int index)
Output only. The formats this Model supports in [BatchPredictionJob.input_config][google.cloud.aiplatform.v1.BatchPredictionJob.input_config]. If [PredictSchemata.instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri] exists, the instances should be given as per that schema. The possible formats are:
* `jsonl` The JSON Lines format, where each instance is a single line. Uses [GcsSource][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig.gcs_source].
* `csv` The CSV format, where each instance is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses [GcsSource][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig.gcs_source].
* `tf-record` The TFRecord format, where each instance is a single record in tfrecord syntax. Uses [GcsSource][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig.gcs_source].
* `tf-record-gzip` Similar to `tf-record`, but the file is gzipped. Uses [GcsSource][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig.gcs_source].
* `bigquery` Each instance is a single row in BigQuery. Uses [BigQuerySource][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig.bigquery_source].
* `file-list` Each line of the file is the location of an instance to process, uses `gcs_source` field of the [InputConfig][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig] object.
If this Model doesn't support any of these formats it means it cannot be used with a [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob]. However, if it has [supported_deployment_resources_types][google.cloud.aiplatform.v1.Model.supported_deployment_resources_types], it could serve online predictions by using [PredictionService.Predict][google.cloud.aiplatform.v1.PredictionService.Predict] or [PredictionService.Explain][google.cloud.aiplatform.v1.PredictionService.Explain].
repeated string supported_input_storage_formats = 11 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Parameters:
index - The index of the value to return.
- Returns:
- The bytes of the supportedInputStorageFormats at the given index.
-
getSupportedOutputStorageFormatsList
List<String> getSupportedOutputStorageFormatsList()
Output only. The formats this Model supports in [BatchPredictionJob.output_config][google.cloud.aiplatform.v1.BatchPredictionJob.output_config]. If both [PredictSchemata.instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri] and [PredictSchemata.prediction_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.prediction_schema_uri] exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema). The possible formats are:
* `jsonl` The JSON Lines format, where each prediction is a single line. Uses [GcsDestination][google.cloud.aiplatform.v1.BatchPredictionJob.OutputConfig.gcs_destination].
* `csv` The CSV format, where each prediction is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses [GcsDestination][google.cloud.aiplatform.v1.BatchPredictionJob.OutputConfig.gcs_destination].
* `bigquery` Each prediction is a single row in a BigQuery table, uses [BigQueryDestination][google.cloud.aiplatform.v1.BatchPredictionJob.OutputConfig.bigquery_destination].
If this Model doesn't support any of these formats it means it cannot be used with a [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob]. However, if it has [supported_deployment_resources_types][google.cloud.aiplatform.v1.Model.supported_deployment_resources_types], it could serve online predictions by using [PredictionService.Predict][google.cloud.aiplatform.v1.PredictionService.Predict] or [PredictionService.Explain][google.cloud.aiplatform.v1.PredictionService.Explain].
repeated string supported_output_storage_formats = 12 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- A list containing the supportedOutputStorageFormats.
-
getSupportedOutputStorageFormatsCount
int getSupportedOutputStorageFormatsCount()
Output only. The formats this Model supports in [BatchPredictionJob.output_config][google.cloud.aiplatform.v1.BatchPredictionJob.output_config]. If both [PredictSchemata.instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri] and [PredictSchemata.prediction_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.prediction_schema_uri] exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema). The possible formats are:
* `jsonl` The JSON Lines format, where each prediction is a single line. Uses [GcsDestination][google.cloud.aiplatform.v1.BatchPredictionJob.OutputConfig.gcs_destination].
* `csv` The CSV format, where each prediction is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses [GcsDestination][google.cloud.aiplatform.v1.BatchPredictionJob.OutputConfig.gcs_destination].
* `bigquery` Each prediction is a single row in a BigQuery table, uses [BigQueryDestination][google.cloud.aiplatform.v1.BatchPredictionJob.OutputConfig.bigquery_destination].
If this Model doesn't support any of these formats it means it cannot be used with a [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob]. However, if it has [supported_deployment_resources_types][google.cloud.aiplatform.v1.Model.supported_deployment_resources_types], it could serve online predictions by using [PredictionService.Predict][google.cloud.aiplatform.v1.PredictionService.Predict] or [PredictionService.Explain][google.cloud.aiplatform.v1.PredictionService.Explain].
repeated string supported_output_storage_formats = 12 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- The count of supportedOutputStorageFormats.
-
getSupportedOutputStorageFormats
String getSupportedOutputStorageFormats(int index)
Output only. The formats this Model supports in [BatchPredictionJob.output_config][google.cloud.aiplatform.v1.BatchPredictionJob.output_config]. If both [PredictSchemata.instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri] and [PredictSchemata.prediction_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.prediction_schema_uri] exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema). The possible formats are:
* `jsonl` The JSON Lines format, where each prediction is a single line. Uses [GcsDestination][google.cloud.aiplatform.v1.BatchPredictionJob.OutputConfig.gcs_destination].
* `csv` The CSV format, where each prediction is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses [GcsDestination][google.cloud.aiplatform.v1.BatchPredictionJob.OutputConfig.gcs_destination].
* `bigquery` Each prediction is a single row in a BigQuery table, uses [BigQueryDestination][google.cloud.aiplatform.v1.BatchPredictionJob.OutputConfig.bigquery_destination].
If this Model doesn't support any of these formats it means it cannot be used with a [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob]. However, if it has [supported_deployment_resources_types][google.cloud.aiplatform.v1.Model.supported_deployment_resources_types], it could serve online predictions by using [PredictionService.Predict][google.cloud.aiplatform.v1.PredictionService.Predict] or [PredictionService.Explain][google.cloud.aiplatform.v1.PredictionService.Explain].
repeated string supported_output_storage_formats = 12 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Parameters:
index - The index of the element to return.
- Returns:
- The supportedOutputStorageFormats at the given index.
-
getSupportedOutputStorageFormatsBytes
com.google.protobuf.ByteString getSupportedOutputStorageFormatsBytes(int index)
Output only. The formats this Model supports in [BatchPredictionJob.output_config][google.cloud.aiplatform.v1.BatchPredictionJob.output_config]. If both [PredictSchemata.instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri] and [PredictSchemata.prediction_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.prediction_schema_uri] exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema). The possible formats are:
* `jsonl` The JSON Lines format, where each prediction is a single line. Uses [GcsDestination][google.cloud.aiplatform.v1.BatchPredictionJob.OutputConfig.gcs_destination].
* `csv` The CSV format, where each prediction is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses [GcsDestination][google.cloud.aiplatform.v1.BatchPredictionJob.OutputConfig.gcs_destination].
* `bigquery` Each prediction is a single row in a BigQuery table, uses [BigQueryDestination][google.cloud.aiplatform.v1.BatchPredictionJob.OutputConfig.bigquery_destination].
If this Model doesn't support any of these formats it means it cannot be used with a [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob]. However, if it has [supported_deployment_resources_types][google.cloud.aiplatform.v1.Model.supported_deployment_resources_types], it could serve online predictions by using [PredictionService.Predict][google.cloud.aiplatform.v1.PredictionService.Predict] or [PredictionService.Explain][google.cloud.aiplatform.v1.PredictionService.Explain].
repeated string supported_output_storage_formats = 12 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Parameters:
index - The index of the value to return.
- Returns:
- The bytes of the supportedOutputStorageFormats at the given index.
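Two small helpers that only restate the field descriptions above in code form; the "jsonl" literal is one of the documented format names.

    // Mirrors the field description: batch prediction needs at least one entry in each list.
    static boolean supportsBatchPrediction(ModelOrBuilder model) {
      return model.getSupportedInputStorageFormatsCount() > 0
          && model.getSupportedOutputStorageFormatsCount() > 0;
    }

    static boolean acceptsJsonlInput(ModelOrBuilder model) {
      return model.getSupportedInputStorageFormatsList().contains("jsonl");
    }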
-
hasCreateTime
boolean hasCreateTime()
Output only. Timestamp when this Model was uploaded into Vertex AI.
.google.protobuf.Timestamp create_time = 13 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- Whether the createTime field is set.
-
getCreateTime
com.google.protobuf.Timestamp getCreateTime()
Output only. Timestamp when this Model was uploaded into Vertex AI.
.google.protobuf.Timestamp create_time = 13 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- The createTime.
-
getCreateTimeOrBuilder
com.google.protobuf.TimestampOrBuilder getCreateTimeOrBuilder()
Output only. Timestamp when this Model was uploaded into Vertex AI.
.google.protobuf.Timestamp create_time = 13 [(.google.api.field_behavior) = OUTPUT_ONLY];
-
hasUpdateTime
boolean hasUpdateTime()
Output only. Timestamp when this Model was most recently updated.
.google.protobuf.Timestamp update_time = 14 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- Whether the updateTime field is set.
-
getUpdateTime
com.google.protobuf.Timestamp getUpdateTime()
Output only. Timestamp when this Model was most recently updated.
.google.protobuf.Timestamp update_time = 14 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- The updateTime.
-
getUpdateTimeOrBuilder
com.google.protobuf.TimestampOrBuilder getUpdateTimeOrBuilder()
Output only. Timestamp when this Model was most recently updated.
.google.protobuf.Timestamp update_time = 14 [(.google.api.field_behavior) = OUTPUT_ONLY];
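A hedged sketch of the usual has/get pattern for the two timestamps, formatted with the protobuf Timestamps utility (which lives in the separate protobuf-java-util artifact, an assumed extra dependency).

    import com.google.protobuf.util.Timestamps;  // from protobuf-java-util

    static void printTimestamps(ModelOrBuilder model) {
      if (model.hasCreateTime()) {
        System.out.println("uploaded: " + Timestamps.toString(model.getCreateTime()));
      }
      if (model.hasUpdateTime()) {
        System.out.println("last updated: " + Timestamps.toString(model.getUpdateTime()));
      }
    }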
-
getDeployedModelsList
List<DeployedModelRef> getDeployedModelsList()
Output only. The pointers to DeployedModels created from this Model. Note that the Model could have been deployed to Endpoints in different Locations.
repeated .google.cloud.aiplatform.v1.DeployedModelRef deployed_models = 15 [(.google.api.field_behavior) = OUTPUT_ONLY];
-
getDeployedModels
DeployedModelRef getDeployedModels(int index)
Output only. The pointers to DeployedModels created from this Model. Note that the Model could have been deployed to Endpoints in different Locations.
repeated .google.cloud.aiplatform.v1.DeployedModelRef deployed_models = 15 [(.google.api.field_behavior) = OUTPUT_ONLY];
-
getDeployedModelsCount
int getDeployedModelsCount()
Output only. The pointers to DeployedModels created from this Model. Note that the Model could have been deployed to Endpoints in different Locations.
repeated .google.cloud.aiplatform.v1.DeployedModelRef deployed_models = 15 [(.google.api.field_behavior) = OUTPUT_ONLY];
-
getDeployedModelsOrBuilderList
List<? extends DeployedModelRefOrBuilder> getDeployedModelsOrBuilderList()
Output only. The pointers to DeployedModels created from this Model. Note that the Model could have been deployed to Endpoints in different Locations.
repeated .google.cloud.aiplatform.v1.DeployedModelRef deployed_models = 15 [(.google.api.field_behavior) = OUTPUT_ONLY];
-
getDeployedModelsOrBuilder
DeployedModelRefOrBuilder getDeployedModelsOrBuilder(int index)
Output only. The pointers to DeployedModels created from this Model. Note that the Model could have been deployed to Endpoints in different Locations.
repeated .google.cloud.aiplatform.v1.DeployedModelRef deployed_models = 15 [(.google.api.field_behavior) = OUTPUT_ONLY];
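A short sketch listing where this Model is currently deployed; getEndpoint() and getDeployedModelId() are assumptions based on the DeployedModelRef fields, and DeployedModelRef is imported from com.google.cloud.aiplatform.v1.

    static void printDeployments(ModelOrBuilder model) {
      for (DeployedModelRef ref : model.getDeployedModelsList()) {
        // getEndpoint() and getDeployedModelId() are assumed from the DeployedModelRef fields.
        System.out.println(ref.getEndpoint() + " / " + ref.getDeployedModelId());
      }
    }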
-
hasExplanationSpec
boolean hasExplanationSpec()
The default explanation specification for this Model. The Model can be used for [requesting explanation][google.cloud.aiplatform.v1.PredictionService.Explain] after being [deployed][google.cloud.aiplatform.v1.EndpointService.DeployModel] if it is populated. The Model can be used for [batch explanation][google.cloud.aiplatform.v1.BatchPredictionJob.generate_explanation] if it is populated. All fields of the explanation_spec can be overridden by [explanation_spec][google.cloud.aiplatform.v1.DeployedModel.explanation_spec] of [DeployModelRequest.deployed_model][google.cloud.aiplatform.v1.DeployModelRequest.deployed_model], or [explanation_spec][google.cloud.aiplatform.v1.BatchPredictionJob.explanation_spec] of [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob]. If the default explanation specification is not set for this Model, this Model can still be used for [requesting explanation][google.cloud.aiplatform.v1.PredictionService.Explain] by setting [explanation_spec][google.cloud.aiplatform.v1.DeployedModel.explanation_spec] of [DeployModelRequest.deployed_model][google.cloud.aiplatform.v1.DeployModelRequest.deployed_model] and for [batch explanation][google.cloud.aiplatform.v1.BatchPredictionJob.generate_explanation] by setting [explanation_spec][google.cloud.aiplatform.v1.BatchPredictionJob.explanation_spec] of [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob].
.google.cloud.aiplatform.v1.ExplanationSpec explanation_spec = 23;
- Returns:
- Whether the explanationSpec field is set.
-
getExplanationSpec
ExplanationSpec getExplanationSpec()
The default explanation specification for this Model. The Model can be used for [requesting explanation][google.cloud.aiplatform.v1.PredictionService.Explain] after being [deployed][google.cloud.aiplatform.v1.EndpointService.DeployModel] if it is populated. The Model can be used for [batch explanation][google.cloud.aiplatform.v1.BatchPredictionJob.generate_explanation] if it is populated. All fields of the explanation_spec can be overridden by [explanation_spec][google.cloud.aiplatform.v1.DeployedModel.explanation_spec] of [DeployModelRequest.deployed_model][google.cloud.aiplatform.v1.DeployModelRequest.deployed_model], or [explanation_spec][google.cloud.aiplatform.v1.BatchPredictionJob.explanation_spec] of [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob]. If the default explanation specification is not set for this Model, this Model can still be used for [requesting explanation][google.cloud.aiplatform.v1.PredictionService.Explain] by setting [explanation_spec][google.cloud.aiplatform.v1.DeployedModel.explanation_spec] of [DeployModelRequest.deployed_model][google.cloud.aiplatform.v1.DeployModelRequest.deployed_model] and for [batch explanation][google.cloud.aiplatform.v1.BatchPredictionJob.generate_explanation] by setting [explanation_spec][google.cloud.aiplatform.v1.BatchPredictionJob.explanation_spec] of [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob].
.google.cloud.aiplatform.v1.ExplanationSpec explanation_spec = 23;
- Returns:
- The explanationSpec.
-
getExplanationSpecOrBuilder
ExplanationSpecOrBuilder getExplanationSpecOrBuilder()
The default explanation specification for this Model. The Model can be used for [requesting explanation][google.cloud.aiplatform.v1.PredictionService.Explain] after being [deployed][google.cloud.aiplatform.v1.EndpointService.DeployModel] if it is populated. The Model can be used for [batch explanation][google.cloud.aiplatform.v1.BatchPredictionJob.generate_explanation] if it is populated. All fields of the explanation_spec can be overridden by [explanation_spec][google.cloud.aiplatform.v1.DeployedModel.explanation_spec] of [DeployModelRequest.deployed_model][google.cloud.aiplatform.v1.DeployModelRequest.deployed_model], or [explanation_spec][google.cloud.aiplatform.v1.BatchPredictionJob.explanation_spec] of [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob]. If the default explanation specification is not set for this Model, this Model can still be used for [requesting explanation][google.cloud.aiplatform.v1.PredictionService.Explain] by setting [explanation_spec][google.cloud.aiplatform.v1.DeployedModel.explanation_spec] of [DeployModelRequest.deployed_model][google.cloud.aiplatform.v1.DeployModelRequest.deployed_model] and for [batch explanation][google.cloud.aiplatform.v1.BatchPredictionJob.generate_explanation] by setting [explanation_spec][google.cloud.aiplatform.v1.BatchPredictionJob.explanation_spec] of [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob].
.google.cloud.aiplatform.v1.ExplanationSpec explanation_spec = 23;
-
getEtag
String getEtag()
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
string etag = 16;
- Returns:
- The etag.
-
getEtagBytes
com.google.protobuf.ByteString getEtagBytes()
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
string etag = 16;
- Returns:
- The bytes for etag.
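The etag is meant to be round-tripped on updates. A minimal sketch under stated assumptions: `model` is an already-fetched Model, and the actual update call via ModelService.UpdateModel is omitted.

    // toBuilder() copies the server-provided etag, so the value read here is sent back on
    // update and lets the service reject the write if the Model changed in the meantime.
    Model updated = model.toBuilder()
        .setDescription("refreshed description")
        .build();
    assert updated.getEtag().equals(model.getEtag());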
-
getLabelsCount
int getLabelsCount()
The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
map<string, string> labels = 17;
-
containsLabels
boolean containsLabels(String key)
The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
map<string, string> labels = 17;
-
getLabels
@Deprecated Map<String,String> getLabels()
Deprecated. Use getLabelsMap() instead.
-
getLabelsMap
Map<String,String> getLabelsMap()
The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
map<string, string> labels = 17;
-
getLabelsOrDefault
String getLabelsOrDefault(String key, String defaultValue)
The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
map<string, string> labels = 17;
-
getLabelsOrThrow
String getLabelsOrThrow(String key)
The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
map<string, string> labels = 17;
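A quick sketch of the map accessors above; "team" and "env" are made-up label keys used only for illustration, and java.util.Map is assumed to be imported.

    static void printLabels(ModelOrBuilder model) {
      // Prefer getLabelsMap() over the deprecated getLabels().
      java.util.Map<String, String> labels = model.getLabelsMap();
      labels.forEach((key, value) -> System.out.println(key + "=" + value));

      // "team" and "env" are hypothetical label keys.
      String team = model.getLabelsOrDefault("team", "unknown");
      boolean hasEnv = model.containsLabels("env");
      System.out.println("team=" + team + ", env label present: " + hasEnv);
    }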
-
hasEncryptionSpec
boolean hasEncryptionSpec()
Customer-managed encryption key spec for a Model. If set, this Model and all sub-resources of this Model will be secured by this key.
.google.cloud.aiplatform.v1.EncryptionSpec encryption_spec = 24;
- Returns:
- Whether the encryptionSpec field is set.
-
getEncryptionSpec
EncryptionSpec getEncryptionSpec()
Customer-managed encryption key spec for a Model. If set, this Model and all sub-resources of this Model will be secured by this key.
.google.cloud.aiplatform.v1.EncryptionSpec encryption_spec = 24;
- Returns:
- The encryptionSpec.
-
getEncryptionSpecOrBuilder
EncryptionSpecOrBuilder getEncryptionSpecOrBuilder()
Customer-managed encryption key spec for a Model. If set, this Model and all sub-resources of this Model will be secured by this key.
.google.cloud.aiplatform.v1.EncryptionSpec encryption_spec = 24;
-
hasModelSourceInfo
boolean hasModelSourceInfo()
Output only. Source of a model. It can be an AutoML training pipeline, a custom training pipeline, BigQuery ML, or an existing Vertex AI Model.
.google.cloud.aiplatform.v1.ModelSourceInfo model_source_info = 38 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- Whether the modelSourceInfo field is set.
-
getModelSourceInfo
ModelSourceInfo getModelSourceInfo()
Output only. Source of a model. It can be an AutoML training pipeline, a custom training pipeline, BigQuery ML, or an existing Vertex AI Model.
.google.cloud.aiplatform.v1.ModelSourceInfo model_source_info = 38 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- The modelSourceInfo.
-
getModelSourceInfoOrBuilder
ModelSourceInfoOrBuilder getModelSourceInfoOrBuilder()
Output only. Source of a model. It can be an AutoML training pipeline, a custom training pipeline, BigQuery ML, or an existing Vertex AI Model.
.google.cloud.aiplatform.v1.ModelSourceInfo model_source_info = 38 [(.google.api.field_behavior) = OUTPUT_ONLY];
-
hasOriginalModelInfo
boolean hasOriginalModelInfo()
Output only. If this Model is a copy of another Model, this contains info about the original.
.google.cloud.aiplatform.v1.Model.OriginalModelInfo original_model_info = 34 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- Whether the originalModelInfo field is set.
-
getOriginalModelInfo
Model.OriginalModelInfo getOriginalModelInfo()
Output only. If this Model is a copy of another Model, this contains info about the original.
.google.cloud.aiplatform.v1.Model.OriginalModelInfo original_model_info = 34 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- The originalModelInfo.
-
getOriginalModelInfoOrBuilder
Model.OriginalModelInfoOrBuilder getOriginalModelInfoOrBuilder()
Output only. If this Model is a copy of another Model, this contains info about the original.
.google.cloud.aiplatform.v1.Model.OriginalModelInfo original_model_info = 34 [(.google.api.field_behavior) = OUTPUT_ONLY];
-
getMetadataArtifact
String getMetadataArtifact()
Output only. The resource name of the Artifact that was created in MetadataStore when creating the Model. The Artifact resource name pattern is `projects/{project}/locations/{location}/metadataStores/{metadata_store}/artifacts/{artifact}`.
string metadata_artifact = 44 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- The metadataArtifact.
-
getMetadataArtifactBytes
com.google.protobuf.ByteString getMetadataArtifactBytes()
Output only. The resource name of the Artifact that was created in MetadataStore when creating the Model. The Artifact resource name pattern is `projects/{project}/locations/{location}/metadataStores/{metadata_store}/artifacts/{artifact}`.
string metadata_artifact = 44 [(.google.api.field_behavior) = OUTPUT_ONLY];
- Returns:
- The bytes for metadataArtifact.
-