Class ExplanationMetadata.InputMetadata.Builder

  • All Implemented Interfaces:
    ExplanationMetadata.InputMetadataOrBuilder, com.google.protobuf.Message.Builder, com.google.protobuf.MessageLite.Builder, com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder, Cloneable
  • Enclosing class:
    ExplanationMetadata.InputMetadata

    public static final class ExplanationMetadata.InputMetadata.Builder
    extends com.google.protobuf.GeneratedMessageV3.Builder<ExplanationMetadata.InputMetadata.Builder>
    implements ExplanationMetadata.InputMetadataOrBuilder
     Metadata for the input of a feature.
    
     Fields other than
     [InputMetadata.input_baselines][google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata.input_baselines]
     apply only to Models that use Vertex AI-provided images for
     Tensorflow.
     
    Protobuf type google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata
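
     Example: as with other generated protobuf messages, an InputMetadata is assembled
     through this builder and frozen with build(). A minimal sketch, assuming the
     generated classes live in the com.google.cloud.aiplatform.v1 package and expose the
     standard newBuilder() factory; the tensor name and baseline value are illustrative.

        import com.google.cloud.aiplatform.v1.ExplanationMetadata;
        import com.google.protobuf.Value;

        public class InputMetadataBuilderExample {
          public static void main(String[] args) {
            // Metadata for one feature of a TensorFlow model served on a
            // Vertex AI-provided image (hypothetical tensor name and baseline).
            ExplanationMetadata.InputMetadata inputMetadata =
                ExplanationMetadata.InputMetadata.newBuilder()
                    .setInputTensorName("dense_input:0")
                    .addInputBaselines(Value.newBuilder().setNumberValue(0.0).build())
                    .build();

            System.out.println(inputMetadata);
          }
        }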
    • Method Detail

      • getDescriptor

        public static final com.google.protobuf.Descriptors.Descriptor getDescriptor()
      • internalGetFieldAccessorTable

        protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
        Specified by:
        internalGetFieldAccessorTable in class com.google.protobuf.GeneratedMessageV3.Builder<ExplanationMetadata.InputMetadata.Builder>
      • getDescriptorForType

        public com.google.protobuf.Descriptors.Descriptor getDescriptorForType()
        Specified by:
        getDescriptorForType in interface com.google.protobuf.Message.Builder
        Specified by:
        getDescriptorForType in interface com.google.protobuf.MessageOrBuilder
        Overrides:
        getDescriptorForType in class com.google.protobuf.GeneratedMessageV3.Builder<ExplanationMetadata.InputMetadata.Builder>
      • getDefaultInstanceForType

        public ExplanationMetadata.InputMetadata getDefaultInstanceForType()
        Specified by:
        getDefaultInstanceForType in interface com.google.protobuf.MessageLiteOrBuilder
        Specified by:
        getDefaultInstanceForType in interface com.google.protobuf.MessageOrBuilder
      • build

        public ExplanationMetadata.InputMetadata build()
        Specified by:
        build in interface com.google.protobuf.Message.Builder
        Specified by:
        build in interface com.google.protobuf.MessageLite.Builder
      • buildPartial

        public ExplanationMetadata.InputMetadata buildPartial()
        Specified by:
        buildPartial in interface com.google.protobuf.Message.Builder
        Specified by:
        buildPartial in interface com.google.protobuf.MessageLite.Builder
      • isInitialized

        public final boolean isInitialized()
        Specified by:
        isInitialized in interface com.google.protobuf.MessageLiteOrBuilder
        Overrides:
        isInitialized in class com.google.protobuf.GeneratedMessageV3.Builder<ExplanationMetadata.InputMetadata.Builder>
      • getInputBaselinesList

        public List<com.google.protobuf.Value> getInputBaselinesList()
         Baseline inputs for this feature.
        
         If no baseline is specified, Vertex AI chooses the baseline for this
         feature. If multiple baselines are specified, Vertex AI returns the
         average attributions across them in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions].
        
         For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape
         of each baseline must match the shape of the input tensor. If a scalar is
         provided, we broadcast to the same shape as the input tensor.
        
         For custom images, the element of the baselines must be in the same
         format as the feature's input in the
         [instance][google.cloud.aiplatform.v1.ExplainRequest.instances][]. The
         schema of any single instance may be specified via Endpoint's
         DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model]
         [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata]
         [instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri].
         
        repeated .google.protobuf.Value input_baselines = 1;
        Specified by:
        getInputBaselinesList in interface ExplanationMetadata.InputMetadataOrBuilder
      • getInputBaselinesCount

        public int getInputBaselinesCount()
         Baseline inputs for this feature.
        
         If no baseline is specified, Vertex AI chooses the baseline for this
         feature. If multiple baselines are specified, Vertex AI returns the
         average attributions across them in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions].
        
         For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape
         of each baseline must match the shape of the input tensor. If a scalar is
         provided, we broadcast to the same shape as the input tensor.
        
         For custom images, the element of the baselines must be in the same
         format as the feature's input in the
         [instance][google.cloud.aiplatform.v1.ExplainRequest.instances][]. The
         schema of any single instance may be specified via Endpoint's
         DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model]
         [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata]
         [instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri].
         
        repeated .google.protobuf.Value input_baselines = 1;
        Specified by:
        getInputBaselinesCount in interface ExplanationMetadata.InputMetadataOrBuilder
      • getInputBaselines

        public com.google.protobuf.Value getInputBaselines​(int index)
         Baseline inputs for this feature.
        
         If no baseline is specified, Vertex AI chooses the baseline for this
         feature. If multiple baselines are specified, Vertex AI returns the
         average attributions across them in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions].
        
         For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape
         of each baseline must match the shape of the input tensor. If a scalar is
         provided, we broadcast to the same shape as the input tensor.
        
         For custom images, the element of the baselines must be in the same
         format as the feature's input in the
         [instance][google.cloud.aiplatform.v1.ExplainRequest.instances][]. The
         schema of any single instance may be specified via Endpoint's
         DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model]
         [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata]
         [instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri].
         
        repeated .google.protobuf.Value input_baselines = 1;
        Specified by:
        getInputBaselines in interface ExplanationMetadata.InputMetadataOrBuilder
      • setInputBaselines

        public ExplanationMetadata.InputMetadata.Builder setInputBaselines​(int index,
                                                                           com.google.protobuf.Value value)
         Baseline inputs for this feature.
        
         If no baseline is specified, Vertex AI chooses the baseline for this
         feature. If multiple baselines are specified, Vertex AI returns the
         average attributions across them in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions].
        
         For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape
         of each baseline must match the shape of the input tensor. If a scalar is
         provided, we broadcast to the same shape as the input tensor.
        
         For custom images, the element of the baselines must be in the same
         format as the feature's input in the
         [instance][google.cloud.aiplatform.v1.ExplainRequest.instances][]. The
         schema of any single instance may be specified via Endpoint's
         DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model]
         [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata]
         [instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri].
         
        repeated .google.protobuf.Value input_baselines = 1;
      • setInputBaselines

        public ExplanationMetadata.InputMetadata.Builder setInputBaselines​(int index,
                                                                           com.google.protobuf.Value.Builder builderForValue)
         Baseline inputs for this feature.
        
         If no baseline is specified, Vertex AI chooses the baseline for this
         feature. If multiple baselines are specified, Vertex AI returns the
         average attributions across them in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions].
        
         For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape
         of each baseline must match the shape of the input tensor. If a scalar is
         provided, we broadcast to the same shape as the input tensor.
        
         For custom images, the element of the baselines must be in the same
         format as the feature's input in the
         [instance][google.cloud.aiplatform.v1.ExplainRequest.instances][]. The
         schema of any single instance may be specified via Endpoint's
         DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model]
         [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata]
         [instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri].
         
        repeated .google.protobuf.Value input_baselines = 1;
      • addInputBaselines

        public ExplanationMetadata.InputMetadata.Builder addInputBaselines​(com.google.protobuf.Value value)
         Baseline inputs for this feature.
        
         If no baseline is specified, Vertex AI chooses the baseline for this
         feature. If multiple baselines are specified, Vertex AI returns the
         average attributions across them in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions].
        
         For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape
         of each baseline must match the shape of the input tensor. If a scalar is
         provided, we broadcast to the same shape as the input tensor.
        
         For custom images, the element of the baselines must be in the same
         format as the feature's input in the
         [instance][google.cloud.aiplatform.v1.ExplainRequest.instances][]. The
         schema of any single instance may be specified via Endpoint's
         DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model]
         [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata]
         [instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri].
         
        repeated .google.protobuf.Value input_baselines = 1;
      • addInputBaselines

        public ExplanationMetadata.InputMetadata.Builder addInputBaselines​(int index,
                                                                           com.google.protobuf.Value value)
         Baseline inputs for this feature.
        
         If no baseline is specified, Vertex AI chooses the baseline for this
         feature. If multiple baselines are specified, Vertex AI returns the
         average attributions across them in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions].
        
         For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape
         of each baseline must match the shape of the input tensor. If a scalar is
         provided, we broadcast to the same shape as the input tensor.
        
         For custom images, the element of the baselines must be in the same
         format as the feature's input in the
         [instance][google.cloud.aiplatform.v1.ExplainRequest.instances][]. The
         schema of any single instance may be specified via Endpoint's
         DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model]
         [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata]
         [instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri].
         
        repeated .google.protobuf.Value input_baselines = 1;
      • addInputBaselines

        public ExplanationMetadata.InputMetadata.Builder addInputBaselines​(com.google.protobuf.Value.Builder builderForValue)
         Baseline inputs for this feature.
        
         If no baseline is specified, Vertex AI chooses the baseline for this
         feature. If multiple baselines are specified, Vertex AI returns the
         average attributions across them in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions].
        
         For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape
         of each baseline must match the shape of the input tensor. If a scalar is
         provided, we broadcast to the same shape as the input tensor.
        
         For custom images, the element of the baselines must be in the same
         format as the feature's input in the
         [instance][google.cloud.aiplatform.v1.ExplainRequest.instances][]. The
         schema of any single instance may be specified via Endpoint's
         DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model]
         [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata]
         [instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri].
         
        repeated .google.protobuf.Value input_baselines = 1;
      • addInputBaselines

        public ExplanationMetadata.InputMetadata.Builder addInputBaselines​(int index,
                                                                           com.google.protobuf.Value.Builder builderForValue)
         Baseline inputs for this feature.
        
         If no baseline is specified, Vertex AI chooses the baseline for this
         feature. If multiple baselines are specified, Vertex AI returns the
         average attributions across them in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions].
        
         For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape
         of each baseline must match the shape of the input tensor. If a scalar is
         provided, we broadcast to the same shape as the input tensor.
        
         For custom images, the element of the baselines must be in the same
         format as the feature's input in the
         [instance][google.cloud.aiplatform.v1.ExplainRequest.instances][]. The
         schema of any single instance may be specified via Endpoint's
         DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model]
         [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata]
         [instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri].
         
        repeated .google.protobuf.Value input_baselines = 1;
      • addAllInputBaselines

        public ExplanationMetadata.InputMetadata.Builder addAllInputBaselines​(Iterable<? extends com.google.protobuf.Value> values)
         Baseline inputs for this feature.
        
         If no baseline is specified, Vertex AI chooses the baseline for this
         feature. If multiple baselines are specified, Vertex AI returns the
         average attributions across them in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions].
        
         For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape
         of each baseline must match the shape of the input tensor. If a scalar is
         provided, we broadcast to the same shape as the input tensor.
        
         For custom images, the element of the baselines must be in the same
         format as the feature's input in the
         [instance][google.cloud.aiplatform.v1.ExplainRequest.instances][]. The
         schema of any single instance may be specified via Endpoint's
         DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model]
         [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata]
         [instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri].
         
        repeated .google.protobuf.Value input_baselines = 1;
      • clearInputBaselines

        public ExplanationMetadata.InputMetadata.Builder clearInputBaselines()
         Baseline inputs for this feature.
        
         If no baseline is specified, Vertex AI chooses the baseline for this
         feature. If multiple baselines are specified, Vertex AI returns the
         average attributions across them in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions].
        
         For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape
         of each baseline must match the shape of the input tensor. If a scalar is
         provided, we broadcast to the same shape as the input tensor.
        
         For custom images, the element of the baselines must be in the same
         format as the feature's input in the
         [instance][google.cloud.aiplatform.v1.ExplainRequest.instances][]. The
         schema of any single instance may be specified via Endpoint's
         DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model]
         [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata]
         [instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri].
         
        repeated .google.protobuf.Value input_baselines = 1;
      • removeInputBaselines

        public ExplanationMetadata.InputMetadata.Builder removeInputBaselines​(int index)
         Baseline inputs for this feature.
        
         If no baseline is specified, Vertex AI chooses the baseline for this
         feature. If multiple baselines are specified, Vertex AI returns the
         average attributions across them in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions].
        
         For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape
         of each baseline must match the shape of the input tensor. If a scalar is
         provided, we broadcast to the same shape as the input tensor.
        
         For custom images, the element of the baselines must be in the same
         format as the feature's input in the
         [instance][google.cloud.aiplatform.v1.ExplainRequest.instances][]. The
         schema of any single instance may be specified via Endpoint's
         DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model]
         [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata]
         [instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri].
         
        repeated .google.protobuf.Value input_baselines = 1;
      • getInputBaselinesBuilder

        public com.google.protobuf.Value.Builder getInputBaselinesBuilder​(int index)
         Baseline inputs for this feature.
        
         If no baseline is specified, Vertex AI chooses the baseline for this
         feature. If multiple baselines are specified, Vertex AI returns the
         average attributions across them in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions].
        
         For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape
         of each baseline must match the shape of the input tensor. If a scalar is
         provided, we broadcast to the same shape as the input tensor.
        
         For custom images, the element of the baselines must be in the same
         format as the feature's input in the
         [instance][google.cloud.aiplatform.v1.ExplainRequest.instances][]. The
         schema of any single instance may be specified via Endpoint's
         DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model]
         [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata]
         [instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri].
         
        repeated .google.protobuf.Value input_baselines = 1;
      • getInputBaselinesOrBuilder

        public com.google.protobuf.ValueOrBuilder getInputBaselinesOrBuilder​(int index)
         Baseline inputs for this feature.
        
         If no baseline is specified, Vertex AI chooses the baseline for this
         feature. If multiple baselines are specified, Vertex AI returns the
         average attributions across them in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions].
        
         For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape
         of each baseline must match the shape of the input tensor. If a scalar is
         provided, we broadcast to the same shape as the input tensor.
        
         For custom images, the element of the baselines must be in the same
         format as the feature's input in the
         [instance][google.cloud.aiplatform.v1.ExplainRequest.instances][]. The
         schema of any single instance may be specified via Endpoint's
         DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model]
         [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata]
         [instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri].
         
        repeated .google.protobuf.Value input_baselines = 1;
        Specified by:
        getInputBaselinesOrBuilder in interface ExplanationMetadata.InputMetadataOrBuilder
      • getInputBaselinesOrBuilderList

        public List<? extends com.google.protobuf.ValueOrBuilder> getInputBaselinesOrBuilderList()
         Baseline inputs for this feature.
        
         If no baseline is specified, Vertex AI chooses the baseline for this
         feature. If multiple baselines are specified, Vertex AI returns the
         average attributions across them in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions].
        
         For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape
         of each baseline must match the shape of the input tensor. If a scalar is
         provided, we broadcast to the same shape as the input tensor.
        
         For custom images, the element of the baselines must be in the same
         format as the feature's input in the
         [instance][google.cloud.aiplatform.v1.ExplainRequest.instances][]. The
         schema of any single instance may be specified via Endpoint's
         DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model]
         [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata]
         [instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri].
         
        repeated .google.protobuf.Value input_baselines = 1;
        Specified by:
        getInputBaselinesOrBuilderList in interface ExplanationMetadata.InputMetadataOrBuilder
      • addInputBaselinesBuilder

        public com.google.protobuf.Value.Builder addInputBaselinesBuilder()
         Baseline inputs for this feature.
        
         If no baseline is specified, Vertex AI chooses the baseline for this
         feature. If multiple baselines are specified, Vertex AI returns the
         average attributions across them in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions].
        
         For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape
         of each baseline must match the shape of the input tensor. If a scalar is
         provided, we broadcast to the same shape as the input tensor.
        
         For custom images, the element of the baselines must be in the same
         format as the feature's input in the
         [instance][google.cloud.aiplatform.v1.ExplainRequest.instances][]. The
         schema of any single instance may be specified via Endpoint's
         DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model]
         [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata]
         [instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri].
         
        repeated .google.protobuf.Value input_baselines = 1;
      • addInputBaselinesBuilder

        public com.google.protobuf.Value.Builder addInputBaselinesBuilder​(int index)
         Baseline inputs for this feature.
        
         If no baseline is specified, Vertex AI chooses the baseline for this
         feature. If multiple baselines are specified, Vertex AI returns the
         average attributions across them in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions].
        
         For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape
         of each baseline must match the shape of the input tensor. If a scalar is
         provided, we broadcast to the same shape as the input tensor.
        
         For custom images, the element of the baselines must be in the same
         format as the feature's input in the
         [instance][google.cloud.aiplatform.v1.ExplainRequest.instances][]. The
         schema of any single instance may be specified via Endpoint's
         DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model]
         [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata]
         [instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri].
         
        repeated .google.protobuf.Value input_baselines = 1;
      • getInputBaselinesBuilderList

        public List<com.google.protobuf.Value.Builder> getInputBaselinesBuilderList()
         Baseline inputs for this feature.
        
         If no baseline is specified, Vertex AI chooses the baseline for this
         feature. If multiple baselines are specified, Vertex AI returns the
         average attributions across them in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions].
        
         For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape
         of each baseline must match the shape of the input tensor. If a scalar is
         provided, we broadcast to the same shape as the input tensor.
        
         For custom images, the element of the baselines must be in the same
         format as the feature's input in the
         [instance][google.cloud.aiplatform.v1.ExplainRequest.instances][]. The
         schema of any single instance may be specified via Endpoint's
         DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model]
         [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata]
         [instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri].
         
        repeated .google.protobuf.Value input_baselines = 1;
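
         Example: a sketch of the repeated-field accessors for input_baselines, using two
         scalar baselines whose attributions Vertex AI averages; the values are illustrative.

            import com.google.cloud.aiplatform.v1.ExplanationMetadata;
            import com.google.protobuf.Value;
            import java.util.Arrays;
            import java.util.List;

            public class InputBaselinesExample {
              public static void main(String[] args) {
                List<Value> baselines = Arrays.asList(
                    Value.newBuilder().setNumberValue(0.0).build(),
                    Value.newBuilder().setNumberValue(1.0).build());

                ExplanationMetadata.InputMetadata.Builder builder =
                    ExplanationMetadata.InputMetadata.newBuilder()
                        .addAllInputBaselines(baselines);

                // Standard repeated-field reads.
                int count = builder.getInputBaselinesCount();                 // 2
                double first = builder.getInputBaselines(0).getNumberValue(); // 0.0

                // In-place edits of the repeated field.
                builder.setInputBaselines(1, Value.newBuilder().setNumberValue(0.5).build());
                builder.removeInputBaselines(0);

                System.out.println(count + " baselines initially, first = " + first);
              }
            }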
      • getInputTensorNameBytes

        public com.google.protobuf.ByteString getInputTensorNameBytes()
         Name of the input tensor for this feature. Required; applicable only
         to Vertex AI-provided images for Tensorflow.
         
        string input_tensor_name = 2;
        Specified by:
        getInputTensorNameBytes in interface ExplanationMetadata.InputMetadataOrBuilder
        Returns:
        The bytes for inputTensorName.
      • setInputTensorName

        public ExplanationMetadata.InputMetadata.Builder setInputTensorName​(String value)
         Name of the input tensor for this feature. Required; applicable only
         to Vertex AI-provided images for Tensorflow.
         
        string input_tensor_name = 2;
        Parameters:
        value - The inputTensorName to set.
        Returns:
        This builder for chaining.
      • clearInputTensorName

        public ExplanationMetadata.InputMetadata.Builder clearInputTensorName()
         Name of the input tensor for this feature. Required; applicable only
         to Vertex AI-provided images for Tensorflow.
         
        string input_tensor_name = 2;
        Returns:
        This builder for chaining.
      • setInputTensorNameBytes

        public ExplanationMetadata.InputMetadata.Builder setInputTensorNameBytes​(com.google.protobuf.ByteString value)
         Name of the input tensor for this feature. Required; applicable only
         to Vertex AI-provided images for Tensorflow.
         
        string input_tensor_name = 2;
        Parameters:
        value - The bytes for inputTensorName to set.
        Returns:
        This builder for chaining.
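
         Example: a minimal sketch of the input_tensor_name accessors; the String and
         ByteString setters are interchangeable for UTF-8 names (the tensor name is hypothetical).

            import com.google.cloud.aiplatform.v1.ExplanationMetadata;
            import com.google.protobuf.ByteString;

            public class InputTensorNameExample {
              public static void main(String[] args) {
                ExplanationMetadata.InputMetadata.Builder builder =
                    ExplanationMetadata.InputMetadata.newBuilder();

                // Plain String setter and the equivalent UTF-8 byte setter.
                builder.setInputTensorName("serving_default_input:0");
                builder.setInputTensorNameBytes(ByteString.copyFromUtf8("serving_default_input:0"));

                // Read back as bytes, then reset to the default empty string.
                System.out.println(builder.getInputTensorNameBytes().toStringUtf8());
                builder.clearInputTensorName();
              }
            }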
      • getEncodingValue

        public int getEncodingValue()
         Defines how the feature is encoded into the input tensor. Defaults to
         IDENTITY.
         
        .google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata.Encoding encoding = 3;
        Specified by:
        getEncodingValue in interface ExplanationMetadata.InputMetadataOrBuilder
        Returns:
        The enum numeric value on the wire for encoding.
      • setEncodingValue

        public ExplanationMetadata.InputMetadata.Builder setEncodingValue​(int value)
         Defines how the feature is encoded into the input tensor. Defaults to
         IDENTITY.
         
        .google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata.Encoding encoding = 3;
        Parameters:
        value - The enum numeric value on the wire for encoding to set.
        Returns:
        This builder for chaining.
      • clearEncoding

        public ExplanationMetadata.InputMetadata.Builder clearEncoding()
         Defines how the feature is encoded into the input tensor. Defaults to
         IDENTITY.
         
        .google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata.Encoding encoding = 3;
        Returns:
        This builder for chaining.
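
         Example: setEncodingValue works on the enum's wire number rather than the enum
         constant itself. A sketch, assuming the Encoding constants named elsewhere on this
         page (for example BAG_OF_FEATURES) and the standard getNumber() accessor.

            import com.google.cloud.aiplatform.v1.ExplanationMetadata;

            public class EncodingValueExample {
              public static void main(String[] args) {
                ExplanationMetadata.InputMetadata.Builder builder =
                    ExplanationMetadata.InputMetadata.newBuilder();

                // Set the encoding via its numeric wire value.
                builder.setEncodingValue(
                    ExplanationMetadata.InputMetadata.Encoding.BAG_OF_FEATURES.getNumber());
                System.out.println(builder.getEncodingValue());

                // clearEncoding() restores the default, IDENTITY.
                builder.clearEncoding();
              }
            }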
      • getModalityBytes

        public com.google.protobuf.ByteString getModalityBytes()
         Modality of the feature. Valid values are: numeric, image. Defaults to
         numeric.
         
        string modality = 4;
        Specified by:
        getModalityBytes in interface ExplanationMetadata.InputMetadataOrBuilder
        Returns:
        The bytes for modality.
      • setModality

        public ExplanationMetadata.InputMetadata.Builder setModality​(String value)
         Modality of the feature. Valid values are: numeric, image. Defaults to
         numeric.
         
        string modality = 4;
        Parameters:
        value - The modality to set.
        Returns:
        This builder for chaining.
      • clearModality

        public ExplanationMetadata.InputMetadata.Builder clearModality()
         Modality of the feature. Valid values are: numeric, image. Defaults to
         numeric.
         
        string modality = 4;
        Returns:
        This builder for chaining.
      • setModalityBytes

        public ExplanationMetadata.InputMetadata.Builder setModalityBytes​(com.google.protobuf.ByteString value)
         Modality of the feature. Valid values are: numeric, image. Defaults to
         numeric.
         
        string modality = 4;
        Parameters:
        value - The bytes for modality to set.
        Returns:
        This builder for chaining.
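
         Example: a short sketch of the modality accessors; per the field comment, "numeric"
         and "image" are the valid values.

            import com.google.cloud.aiplatform.v1.ExplanationMetadata;

            public class ModalityExample {
              public static void main(String[] args) {
                ExplanationMetadata.InputMetadata.Builder builder =
                    ExplanationMetadata.InputMetadata.newBuilder()
                        .setModality("image"); // explaining an image feature

                System.out.println(builder.getModalityBytes().toStringUtf8()); // "image"

                builder.clearModality(); // unset; the service then defaults to numeric
              }
            }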
      • hasFeatureValueDomain

        public boolean hasFeatureValueDomain()
         The domain details of the input feature value, such as min/max, or the
         original mean and standard deviation if the feature was normalized.
         
        .google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata.FeatureValueDomain feature_value_domain = 5;
        Specified by:
        hasFeatureValueDomain in interface ExplanationMetadata.InputMetadataOrBuilder
        Returns:
        Whether the featureValueDomain field is set.
      • clearFeatureValueDomain

        public ExplanationMetadata.InputMetadata.Builder clearFeatureValueDomain()
         The domain details of the input feature value, such as min/max, or the
         original mean and standard deviation if the feature was normalized.
         
        .google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata.FeatureValueDomain feature_value_domain = 5;
      • getFeatureValueDomainBuilder

        public ExplanationMetadata.InputMetadata.FeatureValueDomain.Builder getFeatureValueDomainBuilder()
         The domain details of the input feature value, such as min/max, or the
         original mean and standard deviation if the feature was normalized.
         
        .google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata.FeatureValueDomain feature_value_domain = 5;
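
         Example: getFeatureValueDomainBuilder() returns a nested builder that writes back
         into this message, the usual way to populate a singular message field in place. A
         sketch; the setMinValue/setMaxValue setters are assumed from the FeatureValueDomain
         message and are not part of the excerpt above.

            import com.google.cloud.aiplatform.v1.ExplanationMetadata;

            public class FeatureValueDomainExample {
              public static void main(String[] args) {
                ExplanationMetadata.InputMetadata.Builder builder =
                    ExplanationMetadata.InputMetadata.newBuilder();

                // The nested builder is attached to the parent; its edits are picked up on build().
                builder.getFeatureValueDomainBuilder()
                    .setMinValue(0.0)    // assumed setter on FeatureValueDomain.Builder
                    .setMaxValue(100.0); // assumed setter on FeatureValueDomain.Builder

                System.out.println(builder.hasFeatureValueDomain()); // true once the nested builder exists

                builder.clearFeatureValueDomain(); // drop the domain details again
              }
            }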
      • getIndicesTensorName

        public String getIndicesTensorName()
         Specifies the index of the values of the input tensor.
         Required when the input tensor is a sparse representation. Refer to
         Tensorflow documentation for more details:
         https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
         
        string indices_tensor_name = 6;
        Specified by:
        getIndicesTensorName in interface ExplanationMetadata.InputMetadataOrBuilder
        Returns:
        The indicesTensorName.
      • getIndicesTensorNameBytes

        public com.google.protobuf.ByteString getIndicesTensorNameBytes()
         Specifies the index of the values of the input tensor.
         Required when the input tensor is a sparse representation. Refer to
         Tensorflow documentation for more details:
         https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
         
        string indices_tensor_name = 6;
        Specified by:
        getIndicesTensorNameBytes in interface ExplanationMetadata.InputMetadataOrBuilder
        Returns:
        The bytes for indicesTensorName.
      • setIndicesTensorName

        public ExplanationMetadata.InputMetadata.Builder setIndicesTensorName​(String value)
         Specifies the index of the values of the input tensor.
         Required when the input tensor is a sparse representation. Refer to
         Tensorflow documentation for more details:
         https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
         
        string indices_tensor_name = 6;
        Parameters:
        value - The indicesTensorName to set.
        Returns:
        This builder for chaining.
      • clearIndicesTensorName

        public ExplanationMetadata.InputMetadata.Builder clearIndicesTensorName()
         Specifies the index of the values of the input tensor.
         Required when the input tensor is a sparse representation. Refer to
         Tensorflow documentation for more details:
         https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
         
        string indices_tensor_name = 6;
        Returns:
        This builder for chaining.
      • setIndicesTensorNameBytes

        public ExplanationMetadata.InputMetadata.Builder setIndicesTensorNameBytes​(com.google.protobuf.ByteString value)
         Specifies the index of the values of the input tensor.
         Required when the input tensor is a sparse representation. Refer to
         Tensorflow documentation for more details:
         https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
         
        string indices_tensor_name = 6;
        Parameters:
        value - The bytes for indicesTensorName to set.
        Returns:
        This builder for chaining.
      • getDenseShapeTensorName

        public String getDenseShapeTensorName()
         Specifies the shape of the values of the input if the input is a sparse
         representation. Refer to Tensorflow documentation for more details:
         https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
         
        string dense_shape_tensor_name = 7;
        Specified by:
        getDenseShapeTensorName in interface ExplanationMetadata.InputMetadataOrBuilder
        Returns:
        The denseShapeTensorName.
      • getDenseShapeTensorNameBytes

        public com.google.protobuf.ByteString getDenseShapeTensorNameBytes()
         Specifies the shape of the values of the input if the input is a sparse
         representation. Refer to Tensorflow documentation for more details:
         https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
         
        string dense_shape_tensor_name = 7;
        Specified by:
        getDenseShapeTensorNameBytes in interface ExplanationMetadata.InputMetadataOrBuilder
        Returns:
        The bytes for denseShapeTensorName.
      • setDenseShapeTensorName

        public ExplanationMetadata.InputMetadata.Builder setDenseShapeTensorName​(String value)
         Specifies the shape of the values of the input if the input is a sparse
         representation. Refer to Tensorflow documentation for more details:
         https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
         
        string dense_shape_tensor_name = 7;
        Parameters:
        value - The denseShapeTensorName to set.
        Returns:
        This builder for chaining.
      • clearDenseShapeTensorName

        public ExplanationMetadata.InputMetadata.Builder clearDenseShapeTensorName()
         Specifies the shape of the values of the input if the input is a sparse
         representation. Refer to Tensorflow documentation for more details:
         https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
         
        string dense_shape_tensor_name = 7;
        Returns:
        This builder for chaining.
      • setDenseShapeTensorNameBytes

        public ExplanationMetadata.InputMetadata.Builder setDenseShapeTensorNameBytes​(com.google.protobuf.ByteString value)
         Specifies the shape of the values of the input if the input is a sparse
         representation. Refer to Tensorflow documentation for more details:
         https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
         
        string dense_shape_tensor_name = 7;
        Parameters:
        value - The bytes for denseShapeTensorName to set.
        Returns:
        This builder for chaining.
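
         Example: for a sparse input (tf.sparse.SparseTensor), the indices and dense-shape
         tensors are named alongside the values tensor. A sketch with hypothetical tensor names.

            import com.google.cloud.aiplatform.v1.ExplanationMetadata;

            public class SparseInputExample {
              public static void main(String[] args) {
                // The three tensors that make up a SparseTensor feed: values, indices, dense shape.
                ExplanationMetadata.InputMetadata sparseInput =
                    ExplanationMetadata.InputMetadata.newBuilder()
                        .setInputTensorName("sparse_feature/values:0")
                        .setIndicesTensorName("sparse_feature/indices:0")
                        .setDenseShapeTensorName("sparse_feature/dense_shape:0")
                        .build();

                System.out.println(sparseInput);
              }
            }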
      • getIndexFeatureMappingList

        public com.google.protobuf.ProtocolStringList getIndexFeatureMappingList()
         A list of feature names for each index in the input tensor.
         Required when the input
         [InputMetadata.encoding][google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata.encoding]
         is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, INDICATOR.
         
        repeated string index_feature_mapping = 8;
        Specified by:
        getIndexFeatureMappingList in interface ExplanationMetadata.InputMetadataOrBuilder
        Returns:
        A list containing the indexFeatureMapping.
      • getIndexFeatureMappingCount

        public int getIndexFeatureMappingCount()
         A list of feature names for each index in the input tensor.
         Required when the input
         [InputMetadata.encoding][google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata.encoding]
         is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, INDICATOR.
         
        repeated string index_feature_mapping = 8;
        Specified by:
        getIndexFeatureMappingCount in interface ExplanationMetadata.InputMetadataOrBuilder
        Returns:
        The count of indexFeatureMapping.
      • getIndexFeatureMapping

        public String getIndexFeatureMapping​(int index)
         A list of feature names for each index in the input tensor.
         Required when the input
         [InputMetadata.encoding][google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata.encoding]
         is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, INDICATOR.
         
        repeated string index_feature_mapping = 8;
        Specified by:
        getIndexFeatureMapping in interface ExplanationMetadata.InputMetadataOrBuilder
        Parameters:
        index - The index of the element to return.
        Returns:
        The indexFeatureMapping at the given index.
      • getIndexFeatureMappingBytes

        public com.google.protobuf.ByteString getIndexFeatureMappingBytes​(int index)
         A list of feature names for each index in the input tensor.
         Required when the input
         [InputMetadata.encoding][google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata.encoding]
         is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, INDICATOR.
         
        repeated string index_feature_mapping = 8;
        Specified by:
        getIndexFeatureMappingBytes in interface ExplanationMetadata.InputMetadataOrBuilder
        Parameters:
        index - The index of the value to return.
        Returns:
        The bytes of the indexFeatureMapping at the given index.
      • setIndexFeatureMapping

        public ExplanationMetadata.InputMetadata.Builder setIndexFeatureMapping​(int index,
                                                                                String value)
         A list of feature names for each index in the input tensor.
         Required when the input
         [InputMetadata.encoding][google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata.encoding]
         is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, INDICATOR.
         
        repeated string index_feature_mapping = 8;
        Parameters:
        index - The index to set the value at.
        value - The indexFeatureMapping to set.
        Returns:
        This builder for chaining.
      • addIndexFeatureMapping

        public ExplanationMetadata.InputMetadata.Builder addIndexFeatureMapping​(String value)
         A list of feature names for each index in the input tensor.
         Required when the input
         [InputMetadata.encoding][google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata.encoding]
         is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, INDICATOR.
         
        repeated string index_feature_mapping = 8;
        Parameters:
        value - The indexFeatureMapping to add.
        Returns:
        This builder for chaining.
      • addAllIndexFeatureMapping

        public ExplanationMetadata.InputMetadata.Builder addAllIndexFeatureMapping​(Iterable<String> values)
         A list of feature names for each index in the input tensor.
         Required when the input
         [InputMetadata.encoding][google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata.encoding]
         is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, INDICATOR.
         
        repeated string index_feature_mapping = 8;
        Parameters:
        values - The indexFeatureMapping to add.
        Returns:
        This builder for chaining.
      • clearIndexFeatureMapping

        public ExplanationMetadata.InputMetadata.Builder clearIndexFeatureMapping()
         A list of feature names for each index in the input tensor.
         Required when the input
         [InputMetadata.encoding][google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata.encoding]
         is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, INDICATOR.
         
        repeated string index_feature_mapping = 8;
        Returns:
        This builder for chaining.
      • addIndexFeatureMappingBytes

        public ExplanationMetadata.InputMetadata.Builder addIndexFeatureMappingBytes​(com.google.protobuf.ByteString value)
         A list of feature names for each index in the input tensor.
         Required when the input
         [InputMetadata.encoding][google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata.encoding]
         is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, INDICATOR.
         
        repeated string index_feature_mapping = 8;
        Parameters:
        value - The bytes of the indexFeatureMapping to add.
        Returns:
        This builder for chaining.
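
         Example: a sketch pairing a BAG_OF_FEATURES encoding with the per-index feature names
         it requires; the feature names are illustrative.

            import com.google.cloud.aiplatform.v1.ExplanationMetadata;
            import java.util.Arrays;

            public class IndexFeatureMappingExample {
              public static void main(String[] args) {
                ExplanationMetadata.InputMetadata.Builder builder =
                    ExplanationMetadata.InputMetadata.newBuilder()
                        .setEncodingValue(
                            ExplanationMetadata.InputMetadata.Encoding.BAG_OF_FEATURES.getNumber())
                        // One feature name per index of the input tensor.
                        .addIndexFeatureMapping("age")
                        .addAllIndexFeatureMapping(Arrays.asList("height", "weight"));

                System.out.println(builder.getIndexFeatureMappingCount()); // 3
                System.out.println(builder.getIndexFeatureMapping(1));     // "height"
              }
            }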
      • getEncodedTensorName

        public String getEncodedTensorName()
         Encoded tensor is a transformation of the input tensor. Must be provided
         if choosing
         [Integrated Gradients
         attribution][google.cloud.aiplatform.v1.ExplanationParameters.integrated_gradients_attribution]
         or [XRAI
         attribution][google.cloud.aiplatform.v1.ExplanationParameters.xrai_attribution]
         and the input tensor is not differentiable.
        
         An encoded tensor is generated if the input tensor is encoded by a lookup
         table.
         
        string encoded_tensor_name = 9;
        Specified by:
        getEncodedTensorName in interface ExplanationMetadata.InputMetadataOrBuilder
        Returns:
        The encodedTensorName.
      • getEncodedTensorNameBytes

        public com.google.protobuf.ByteString getEncodedTensorNameBytes()
         Encoded tensor is a transformation of the input tensor. Must be provided
         if choosing
         [Integrated Gradients
         attribution][google.cloud.aiplatform.v1.ExplanationParameters.integrated_gradients_attribution]
         or [XRAI
         attribution][google.cloud.aiplatform.v1.ExplanationParameters.xrai_attribution]
         and the input tensor is not differentiable.
        
         An encoded tensor is generated if the input tensor is encoded by a lookup
         table.
         
        string encoded_tensor_name = 9;
        Specified by:
        getEncodedTensorNameBytes in interface ExplanationMetadata.InputMetadataOrBuilder
        Returns:
        The bytes for encodedTensorName.
      • setEncodedTensorName

        public ExplanationMetadata.InputMetadata.Builder setEncodedTensorName​(String value)
         Encoded tensor is a transformation of the input tensor. Must be provided
         if choosing
         [Integrated Gradients
         attribution][google.cloud.aiplatform.v1.ExplanationParameters.integrated_gradients_attribution]
         or [XRAI
         attribution][google.cloud.aiplatform.v1.ExplanationParameters.xrai_attribution]
         and the input tensor is not differentiable.
        
         An encoded tensor is generated if the input tensor is encoded by a lookup
         table.
         
        string encoded_tensor_name = 9;
        Parameters:
        value - The encodedTensorName to set.
        Returns:
        This builder for chaining.
      • clearEncodedTensorName

        public ExplanationMetadata.InputMetadata.Builder clearEncodedTensorName()
         Encoded tensor is a transformation of the input tensor. Must be provided
         if choosing
         [Integrated Gradients
         attribution][google.cloud.aiplatform.v1.ExplanationParameters.integrated_gradients_attribution]
         or [XRAI
         attribution][google.cloud.aiplatform.v1.ExplanationParameters.xrai_attribution]
         and the input tensor is not differentiable.
        
         An encoded tensor is generated if the input tensor is encoded by a lookup
         table.
         
        string encoded_tensor_name = 9;
        Returns:
        This builder for chaining.
      • setEncodedTensorNameBytes

        public ExplanationMetadata.InputMetadata.Builder setEncodedTensorNameBytes​(com.google.protobuf.ByteString value)
         Encoded tensor is a transformation of the input tensor. Must be provided
         if choosing
         [Integrated Gradients
         attribution][google.cloud.aiplatform.v1.ExplanationParameters.integrated_gradients_attribution]
         or [XRAI
         attribution][google.cloud.aiplatform.v1.ExplanationParameters.xrai_attribution]
         and the input tensor is not differentiable.
        
         An encoded tensor is generated if the input tensor is encoded by a lookup
         table.
         
        string encoded_tensor_name = 9;
        Parameters:
        value - The bytes for encodedTensorName to set.
        Returns:
        This builder for chaining.
      • getEncodedBaselinesList

        public List<com.google.protobuf.Value> getEncodedBaselinesList()
         A list of baselines for the encoded tensor.
        
         The shape of each baseline should match the shape of the encoded tensor.
         If a scalar is provided, Vertex AI broadcasts to the same shape as the
         encoded tensor.
         
        repeated .google.protobuf.Value encoded_baselines = 10;
        Specified by:
        getEncodedBaselinesList in interface ExplanationMetadata.InputMetadataOrBuilder
      • getEncodedBaselinesCount

        public int getEncodedBaselinesCount()
         A list of baselines for the encoded tensor.
        
         The shape of each baseline should match the shape of the encoded tensor.
         If a scalar is provided, Vertex AI broadcasts to the same shape as the
         encoded tensor.
         
        repeated .google.protobuf.Value encoded_baselines = 10;
        Specified by:
        getEncodedBaselinesCount in interface ExplanationMetadata.InputMetadataOrBuilder
      • getEncodedBaselines

        public com.google.protobuf.Value getEncodedBaselines​(int index)
         A list of baselines for the encoded tensor.
        
         The shape of each baseline should match the shape of the encoded tensor.
         If a scalar is provided, Vertex AI broadcasts to the same shape as the
         encoded tensor.
         
        repeated .google.protobuf.Value encoded_baselines = 10;
        Specified by:
        getEncodedBaselines in interface ExplanationMetadata.InputMetadataOrBuilder
      • setEncodedBaselines

        public ExplanationMetadata.InputMetadata.Builder setEncodedBaselines​(int index,
                                                                             com.google.protobuf.Value value)
         A list of baselines for the encoded tensor.
        
         The shape of each baseline should match the shape of the encoded tensor.
         If a scalar is provided, Vertex AI broadcasts to the same shape as the
         encoded tensor.
         
        repeated .google.protobuf.Value encoded_baselines = 10;
      • setEncodedBaselines

        public ExplanationMetadata.InputMetadata.Builder setEncodedBaselines​(int index,
                                                                             com.google.protobuf.Value.Builder builderForValue)
         A list of baselines for the encoded tensor.
        
         The shape of each baseline should match the shape of the encoded tensor.
         If a scalar is provided, Vertex AI broadcasts to the same shape as the
         encoded tensor.
         
        repeated .google.protobuf.Value encoded_baselines = 10;
      • addEncodedBaselines

        public ExplanationMetadata.InputMetadata.Builder addEncodedBaselines​(com.google.protobuf.Value value)
         A list of baselines for the encoded tensor.
        
         The shape of each baseline should match the shape of the encoded tensor.
         If a scalar is provided, Vertex AI broadcasts to the same shape as the
         encoded tensor.
         
        repeated .google.protobuf.Value encoded_baselines = 10;
      • addEncodedBaselines

        public ExplanationMetadata.InputMetadata.Builder addEncodedBaselines​(int index,
                                                                             com.google.protobuf.Value value)
         A list of baselines for the encoded tensor.
        
         The shape of each baseline should match the shape of the encoded tensor.
         If a scalar is provided, Vertex AI broadcasts to the same shape as the
         encoded tensor.
         
        repeated .google.protobuf.Value encoded_baselines = 10;
      • addEncodedBaselines

        public ExplanationMetadata.InputMetadata.Builder addEncodedBaselines​(com.google.protobuf.Value.Builder builderForValue)
         A list of baselines for the encoded tensor.
        
         The shape of each baseline should match the shape of the encoded tensor.
         If a scalar is provided, Vertex AI broadcasts to the same shape as the
         encoded tensor.
         
        repeated .google.protobuf.Value encoded_baselines = 10;
      • addEncodedBaselines

        public ExplanationMetadata.InputMetadata.Builder addEncodedBaselines​(int index,
                                                                             com.google.protobuf.Value.Builder builderForValue)
         A list of baselines for the encoded tensor.
        
         The shape of each baseline should match the shape of the encoded tensor.
         If a scalar is provided, Vertex AI broadcasts to the same shape as the
         encoded tensor.
         
        repeated .google.protobuf.Value encoded_baselines = 10;
      • addAllEncodedBaselines

        public ExplanationMetadata.InputMetadata.Builder addAllEncodedBaselines​(Iterable<? extends com.google.protobuf.Value> values)
         A list of baselines for the encoded tensor.
        
         The shape of each baseline should match the shape of the encoded tensor.
         If a scalar is provided, Vertex AI broadcasts to the same shape as the
         encoded tensor.
         
        repeated .google.protobuf.Value encoded_baselines = 10;
      • clearEncodedBaselines

        public ExplanationMetadata.InputMetadata.Builder clearEncodedBaselines()
         A list of baselines for the encoded tensor.
        
         The shape of each baseline should match the shape of the encoded tensor.
         If a scalar is provided, Vertex AI broadcasts to the same shape as the
         encoded tensor.
         
        repeated .google.protobuf.Value encoded_baselines = 10;
      • removeEncodedBaselines

        public ExplanationMetadata.InputMetadata.Builder removeEncodedBaselines​(int index)
         A list of baselines for the encoded tensor.
        
         The shape of each baseline should match the shape of the encoded tensor.
         If a scalar is provided, Vertex AI broadcasts to the same shape as the
         encoded tensor.
         
        repeated .google.protobuf.Value encoded_baselines = 10;
      • getEncodedBaselinesBuilder

        public com.google.protobuf.Value.Builder getEncodedBaselinesBuilder​(int index)
         A list of baselines for the encoded tensor.
        
         The shape of each baseline should match the shape of the encoded tensor.
         If a scalar is provided, Vertex AI broadcasts to the same shape as the
         encoded tensor.
         
        repeated .google.protobuf.Value encoded_baselines = 10;
      • getEncodedBaselinesOrBuilder

        public com.google.protobuf.ValueOrBuilder getEncodedBaselinesOrBuilder​(int index)
         A list of baselines for the encoded tensor.
        
         The shape of each baseline should match the shape of the encoded tensor.
         If a scalar is provided, Vertex AI broadcasts to the same shape as the
         encoded tensor.
         
        repeated .google.protobuf.Value encoded_baselines = 10;
        Specified by:
        getEncodedBaselinesOrBuilder in interface ExplanationMetadata.InputMetadataOrBuilder
      • getEncodedBaselinesOrBuilderList

        public List<? extends com.google.protobuf.ValueOrBuilder> getEncodedBaselinesOrBuilderList()
         A list of baselines for the encoded tensor.
        
         The shape of each baseline should match the shape of the encoded tensor.
         If a scalar is provided, Vertex AI broadcasts to the same shape as the
         encoded tensor.
         
        repeated .google.protobuf.Value encoded_baselines = 10;
        Specified by:
        getEncodedBaselinesOrBuilderList in interface ExplanationMetadata.InputMetadataOrBuilder
      • addEncodedBaselinesBuilder

        public com.google.protobuf.Value.Builder addEncodedBaselinesBuilder()
         A list of baselines for the encoded tensor.
        
         The shape of each baseline should match the shape of the encoded tensor.
         If a scalar is provided, Vertex AI broadcasts to the same shape as the
         encoded tensor.
         
        repeated .google.protobuf.Value encoded_baselines = 10;
      • addEncodedBaselinesBuilder

        public com.google.protobuf.Value.Builder addEncodedBaselinesBuilder​(int index)
         A list of baselines for the encoded tensor.
        
         The shape of each baseline should match the shape of the encoded tensor.
         If a scalar is provided, Vertex AI broadcasts to the same shape as the
         encoded tensor.
         
        repeated .google.protobuf.Value encoded_baselines = 10;
      • getEncodedBaselinesBuilderList

        public List<com.google.protobuf.Value.Builder> getEncodedBaselinesBuilderList()
         A list of baselines for the encoded tensor.
        
         The shape of each baseline should match the shape of the encoded tensor.
         If a scalar is provided, Vertex AI broadcasts to the same shape as the
         encoded tensor.
         
        repeated .google.protobuf.Value encoded_baselines = 10;
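
         Example: a sketch combining encoded_tensor_name with encoded_baselines for a
         non-differentiable input that passes through a lookup table; the tensor names and
         baseline value are illustrative.

            import com.google.cloud.aiplatform.v1.ExplanationMetadata;
            import com.google.protobuf.Value;

            public class EncodedBaselinesExample {
              public static void main(String[] args) {
                ExplanationMetadata.InputMetadata.Builder builder =
                    ExplanationMetadata.InputMetadata.newBuilder()
                        .setInputTensorName("category_ids:0")         // raw, non-differentiable input
                        .setEncodedTensorName("category_embedding:0") // its embedding-lookup output
                        // A scalar baseline, broadcast to the shape of the encoded tensor.
                        .addEncodedBaselines(Value.newBuilder().setNumberValue(0.0).build());

                System.out.println(builder.getEncodedBaselinesCount()); // 1
              }
            }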
      • hasVisualization

        public boolean hasVisualization()
         Visualization configurations for image explanation.
         
        .google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata.Visualization visualization = 11;
        Specified by:
        hasVisualization in interface ExplanationMetadata.InputMetadataOrBuilder
        Returns:
        Whether the visualization field is set.
      • clearVisualization

        public ExplanationMetadata.InputMetadata.Builder clearVisualization()
         Visualization configurations for image explanation.
         
        .google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata.Visualization visualization = 11;
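
         Example: a sketch of toggling the visualization config; setVisualization(...) is the
         standard generated setter for this message field and is not shown in the excerpt above,
         so treat it as an assumption.

            import com.google.cloud.aiplatform.v1.ExplanationMetadata;

            public class VisualizationExample {
              public static void main(String[] args) {
                ExplanationMetadata.InputMetadata.Builder builder =
                    ExplanationMetadata.InputMetadata.newBuilder();

                // Attach a default Visualization (all options left at their defaults).
                builder.setVisualization(
                    ExplanationMetadata.InputMetadata.Visualization.getDefaultInstance());
                System.out.println(builder.hasVisualization()); // true

                builder.clearVisualization(); // remove the image-explanation config again
              }
            }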
      • getGroupName

        public String getGroupName()
         Name of the group that the input belongs to. Features with the same group
         name will be treated as one feature when computing attributions. Features
         grouped together can have different shapes in value. If provided, there
         will be one single attribution generated in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions],
         keyed by the group name.
         
        string group_name = 12;
        Specified by:
        getGroupName in interface ExplanationMetadata.InputMetadataOrBuilder
        Returns:
        The groupName.
      • getGroupNameBytes

        public com.google.protobuf.ByteString getGroupNameBytes()
         Name of the group that the input belongs to. Features with the same group
         name will be treated as one feature when computing attributions. Features
         grouped together can have different shapes in value. If provided, there
         will be one single attribution generated in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions],
         keyed by the group name.
         
        string group_name = 12;
        Specified by:
        getGroupNameBytes in interface ExplanationMetadata.InputMetadataOrBuilder
        Returns:
        The bytes for groupName.
      • setGroupName

        public ExplanationMetadata.InputMetadata.Builder setGroupName​(String value)
         Name of the group that the input belongs to. Features with the same group
         name will be treated as one feature when computing attributions. Features
         grouped together can have different shapes in value. If provided, there
         will be one single attribution generated in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions],
         keyed by the group name.
         
        string group_name = 12;
        Parameters:
        value - The groupName to set.
        Returns:
        This builder for chaining.
      • clearGroupName

        public ExplanationMetadata.InputMetadata.Builder clearGroupName()
         Name of the group that the input belongs to. Features with the same group
         name will be treated as one feature when computing attributions. Features
         grouped together can have different shapes in value. If provided, there
         will be one single attribution generated in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions],
         keyed by the group name.
         
        string group_name = 12;
        Returns:
        This builder for chaining.
      • setGroupNameBytes

        public ExplanationMetadata.InputMetadata.Builder setGroupNameBytes​(com.google.protobuf.ByteString value)
         Name of the group that the input belongs to. Features with the same group
         name will be treated as one feature when computing attributions. Features
         grouped together can have different shapes in value. If provided, there
         will be one single attribution generated in
         [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions],
         keyed by the group name.
         
        string group_name = 12;
        Parameters:
        value - The bytes for groupName to set.
        Returns:
        This builder for chaining.
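
         Example: a closing sketch of two inputs that share a group_name so their attributions
         are reported as a single entry. Attaching them to the parent message via putInputs(...)
         relies on the enclosing ExplanationMetadata's generated map accessors, which are not
         part of the excerpt above.

            import com.google.cloud.aiplatform.v1.ExplanationMetadata;

            public class GroupNameExample {
              public static void main(String[] args) {
                ExplanationMetadata.InputMetadata width =
                    ExplanationMetadata.InputMetadata.newBuilder()
                        .setInputTensorName("bbox_width:0")
                        .setGroupName("bounding_box") // grouped with the input below
                        .build();

                ExplanationMetadata.InputMetadata height =
                    ExplanationMetadata.InputMetadata.newBuilder()
                        .setInputTensorName("bbox_height:0")
                        .setGroupName("bounding_box") // same group, so one combined attribution
                        .build();

                // Assumed map accessor on the enclosing ExplanationMetadata message.
                ExplanationMetadata metadata =
                    ExplanationMetadata.newBuilder()
                        .putInputs("bbox_width", width)
                        .putInputs("bbox_height", height)
                        .build();

                System.out.println(metadata);
              }
            }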