Class InputDataConfig.Builder
- java.lang.Object
-
- com.google.protobuf.AbstractMessageLite.Builder
-
- com.google.protobuf.AbstractMessage.Builder<BuilderT>
-
- com.google.protobuf.GeneratedMessageV3.Builder<InputDataConfig.Builder>
-
- com.google.cloud.aiplatform.v1beta1.InputDataConfig.Builder
-
- All Implemented Interfaces:
InputDataConfigOrBuilder, com.google.protobuf.Message.Builder, com.google.protobuf.MessageLite.Builder, com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder, Cloneable
- Enclosing class:
- InputDataConfig
public static final class InputDataConfig.Builder extends com.google.protobuf.GeneratedMessageV3.Builder<InputDataConfig.Builder> implements InputDataConfigOrBuilder
Specifies Vertex AI owned input data to be used for training, and possibly evaluating, the Model.
Protobuf type: google.cloud.aiplatform.v1beta1.InputDataConfig
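Like any generated protobuf message, InputDataConfig is assembled through this builder: obtain one with InputDataConfig.newBuilder(), set fields, and call build(). A minimal sketch, assuming the google-cloud-aiplatform dependency is on the classpath; the dataset ID, fractions, and bucket path are placeholder values:

```java
import com.google.cloud.aiplatform.v1beta1.FractionSplit;
import com.google.cloud.aiplatform.v1beta1.GcsDestination;
import com.google.cloud.aiplatform.v1beta1.InputDataConfig;

public class InputDataConfigSketch {
  public static void main(String[] args) {
    // Placeholder dataset ID and output prefix; substitute real resources.
    InputDataConfig config =
        InputDataConfig.newBuilder()
            .setDatasetId("1234567890")
            .setFractionSplit(
                FractionSplit.newBuilder()
                    .setTrainingFraction(0.8)
                    .setValidationFraction(0.1)
                    .setTestFraction(0.1))
            .setGcsDestination(
                GcsDestination.newBuilder().setOutputUriPrefix("gs://my-bucket/output/"))
            .build();
    System.out.println(config.hasFractionSplit()); // true
  }
}
```

The set* methods accept either a built message or its builder (the builderForValue overloads), so nested messages need not be built explicitly.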
-
-
Method Summary
All methods are concrete instance methods except the static getDescriptor().
- InputDataConfig.Builder addRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
- InputDataConfig build()
- InputDataConfig buildPartial()
- InputDataConfig.Builder clear()
- InputDataConfig.Builder clearAnnotationSchemaUri(): Applicable only to custom training with Datasets that have DataItems and Annotations.
- InputDataConfig.Builder clearAnnotationsFilter(): Applicable only to Datasets that have DataItems and Annotations.
- InputDataConfig.Builder clearBigqueryDestination(): Only applicable to custom training with tabular Dataset with BigQuery source.
- InputDataConfig.Builder clearDatasetId(): Required.
- InputDataConfig.Builder clearDestination()
- InputDataConfig.Builder clearField(com.google.protobuf.Descriptors.FieldDescriptor field)
- InputDataConfig.Builder clearFilterSplit(): Split based on the provided filters for each set.
- InputDataConfig.Builder clearFractionSplit(): Split based on fractions defining the size of each set.
- InputDataConfig.Builder clearGcsDestination(): The Cloud Storage location where the training data is to be written to.
- InputDataConfig.Builder clearOneof(com.google.protobuf.Descriptors.OneofDescriptor oneof)
- InputDataConfig.Builder clearPersistMlUseAssignment(): Whether to persist the ML use assignment to data item system labels.
- InputDataConfig.Builder clearPredefinedSplit(): Supported only for tabular Datasets.
- InputDataConfig.Builder clearSavedQueryId(): Only applicable to Datasets that have SavedQueries.
- InputDataConfig.Builder clearSplit()
- InputDataConfig.Builder clearStratifiedSplit(): Supported only for tabular Datasets.
- InputDataConfig.Builder clearTimestampSplit(): Supported only for tabular Datasets.
- InputDataConfig.Builder clone()
- String getAnnotationSchemaUri(): Applicable only to custom training with Datasets that have DataItems and Annotations.
- com.google.protobuf.ByteString getAnnotationSchemaUriBytes(): Applicable only to custom training with Datasets that have DataItems and Annotations.
- String getAnnotationsFilter(): Applicable only to Datasets that have DataItems and Annotations.
- com.google.protobuf.ByteString getAnnotationsFilterBytes(): Applicable only to Datasets that have DataItems and Annotations.
- BigQueryDestination getBigqueryDestination(): Only applicable to custom training with tabular Dataset with BigQuery source.
- BigQueryDestination.Builder getBigqueryDestinationBuilder(): Only applicable to custom training with tabular Dataset with BigQuery source.
- BigQueryDestinationOrBuilder getBigqueryDestinationOrBuilder(): Only applicable to custom training with tabular Dataset with BigQuery source.
- String getDatasetId(): Required.
- com.google.protobuf.ByteString getDatasetIdBytes(): Required.
- InputDataConfig getDefaultInstanceForType()
- static com.google.protobuf.Descriptors.Descriptor getDescriptor()
- com.google.protobuf.Descriptors.Descriptor getDescriptorForType()
- InputDataConfig.DestinationCase getDestinationCase()
- FilterSplit getFilterSplit(): Split based on the provided filters for each set.
- FilterSplit.Builder getFilterSplitBuilder(): Split based on the provided filters for each set.
- FilterSplitOrBuilder getFilterSplitOrBuilder(): Split based on the provided filters for each set.
- FractionSplit getFractionSplit(): Split based on fractions defining the size of each set.
- FractionSplit.Builder getFractionSplitBuilder(): Split based on fractions defining the size of each set.
- FractionSplitOrBuilder getFractionSplitOrBuilder(): Split based on fractions defining the size of each set.
- GcsDestination getGcsDestination(): The Cloud Storage location where the training data is to be written to.
- GcsDestination.Builder getGcsDestinationBuilder(): The Cloud Storage location where the training data is to be written to.
- GcsDestinationOrBuilder getGcsDestinationOrBuilder(): The Cloud Storage location where the training data is to be written to.
- boolean getPersistMlUseAssignment(): Whether to persist the ML use assignment to data item system labels.
- PredefinedSplit getPredefinedSplit(): Supported only for tabular Datasets.
- PredefinedSplit.Builder getPredefinedSplitBuilder(): Supported only for tabular Datasets.
- PredefinedSplitOrBuilder getPredefinedSplitOrBuilder(): Supported only for tabular Datasets.
- String getSavedQueryId(): Only applicable to Datasets that have SavedQueries.
- com.google.protobuf.ByteString getSavedQueryIdBytes(): Only applicable to Datasets that have SavedQueries.
- InputDataConfig.SplitCase getSplitCase()
- StratifiedSplit getStratifiedSplit(): Supported only for tabular Datasets.
- StratifiedSplit.Builder getStratifiedSplitBuilder(): Supported only for tabular Datasets.
- StratifiedSplitOrBuilder getStratifiedSplitOrBuilder(): Supported only for tabular Datasets.
- TimestampSplit getTimestampSplit(): Supported only for tabular Datasets.
- TimestampSplit.Builder getTimestampSplitBuilder(): Supported only for tabular Datasets.
- TimestampSplitOrBuilder getTimestampSplitOrBuilder(): Supported only for tabular Datasets.
- boolean hasBigqueryDestination(): Only applicable to custom training with tabular Dataset with BigQuery source.
- boolean hasFilterSplit(): Split based on the provided filters for each set.
- boolean hasFractionSplit(): Split based on fractions defining the size of each set.
- boolean hasGcsDestination(): The Cloud Storage location where the training data is to be written to.
- boolean hasPredefinedSplit(): Supported only for tabular Datasets.
- boolean hasStratifiedSplit(): Supported only for tabular Datasets.
- boolean hasTimestampSplit(): Supported only for tabular Datasets.
- protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
- boolean isInitialized()
- InputDataConfig.Builder mergeBigqueryDestination(BigQueryDestination value): Only applicable to custom training with tabular Dataset with BigQuery source.
- InputDataConfig.Builder mergeFilterSplit(FilterSplit value): Split based on the provided filters for each set.
- InputDataConfig.Builder mergeFractionSplit(FractionSplit value): Split based on fractions defining the size of each set.
- InputDataConfig.Builder mergeFrom(InputDataConfig other)
- InputDataConfig.Builder mergeFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
- InputDataConfig.Builder mergeFrom(com.google.protobuf.Message other)
- InputDataConfig.Builder mergeGcsDestination(GcsDestination value): The Cloud Storage location where the training data is to be written to.
- InputDataConfig.Builder mergePredefinedSplit(PredefinedSplit value): Supported only for tabular Datasets.
- InputDataConfig.Builder mergeStratifiedSplit(StratifiedSplit value): Supported only for tabular Datasets.
- InputDataConfig.Builder mergeTimestampSplit(TimestampSplit value): Supported only for tabular Datasets.
- InputDataConfig.Builder mergeUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
- InputDataConfig.Builder setAnnotationSchemaUri(String value): Applicable only to custom training with Datasets that have DataItems and Annotations.
- InputDataConfig.Builder setAnnotationSchemaUriBytes(com.google.protobuf.ByteString value): Applicable only to custom training with Datasets that have DataItems and Annotations.
- InputDataConfig.Builder setAnnotationsFilter(String value): Applicable only to Datasets that have DataItems and Annotations.
- InputDataConfig.Builder setAnnotationsFilterBytes(com.google.protobuf.ByteString value): Applicable only to Datasets that have DataItems and Annotations.
- InputDataConfig.Builder setBigqueryDestination(BigQueryDestination value): Only applicable to custom training with tabular Dataset with BigQuery source.
- InputDataConfig.Builder setBigqueryDestination(BigQueryDestination.Builder builderForValue): Only applicable to custom training with tabular Dataset with BigQuery source.
- InputDataConfig.Builder setDatasetId(String value): Required.
- InputDataConfig.Builder setDatasetIdBytes(com.google.protobuf.ByteString value): Required.
- InputDataConfig.Builder setField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
- InputDataConfig.Builder setFilterSplit(FilterSplit value): Split based on the provided filters for each set.
- InputDataConfig.Builder setFilterSplit(FilterSplit.Builder builderForValue): Split based on the provided filters for each set.
- InputDataConfig.Builder setFractionSplit(FractionSplit value): Split based on fractions defining the size of each set.
- InputDataConfig.Builder setFractionSplit(FractionSplit.Builder builderForValue): Split based on fractions defining the size of each set.
- InputDataConfig.Builder setGcsDestination(GcsDestination value): The Cloud Storage location where the training data is to be written to.
- InputDataConfig.Builder setGcsDestination(GcsDestination.Builder builderForValue): The Cloud Storage location where the training data is to be written to.
- InputDataConfig.Builder setPersistMlUseAssignment(boolean value): Whether to persist the ML use assignment to data item system labels.
- InputDataConfig.Builder setPredefinedSplit(PredefinedSplit value): Supported only for tabular Datasets.
- InputDataConfig.Builder setPredefinedSplit(PredefinedSplit.Builder builderForValue): Supported only for tabular Datasets.
- InputDataConfig.Builder setRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)
- InputDataConfig.Builder setSavedQueryId(String value): Only applicable to Datasets that have SavedQueries.
- InputDataConfig.Builder setSavedQueryIdBytes(com.google.protobuf.ByteString value): Only applicable to Datasets that have SavedQueries.
- InputDataConfig.Builder setStratifiedSplit(StratifiedSplit value): Supported only for tabular Datasets.
- InputDataConfig.Builder setStratifiedSplit(StratifiedSplit.Builder builderForValue): Supported only for tabular Datasets.
- InputDataConfig.Builder setTimestampSplit(TimestampSplit value): Supported only for tabular Datasets.
- InputDataConfig.Builder setTimestampSplit(TimestampSplit.Builder builderForValue): Supported only for tabular Datasets.
- InputDataConfig.Builder setUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
Methods inherited from class com.google.protobuf.GeneratedMessageV3.Builder
getAllFields, getField, getFieldBuilder, getOneofFieldDescriptor, getParentForChildren, getRepeatedField, getRepeatedFieldBuilder, getRepeatedFieldCount, getUnknownFields, getUnknownFieldSetBuilder, hasField, hasOneof, internalGetMapField, internalGetMutableMapField, isClean, markClean, mergeUnknownLengthDelimitedField, mergeUnknownVarintField, newBuilderForField, onBuilt, onChanged, parseUnknownField, setUnknownFieldSetBuilder, setUnknownFieldsProto3
-
Methods inherited from class com.google.protobuf.AbstractMessage.Builder
findInitializationErrors, getInitializationErrorString, internalMergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, newUninitializedMessageException, toString
-
Methods inherited from class com.google.protobuf.AbstractMessageLite.Builder
addAll, addAll, mergeDelimitedFrom, mergeDelimitedFrom, mergeFrom, newUninitializedMessageException
-
Methods inherited from class java.lang.Object
equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
-
-
-
-
Method Detail
-
getDescriptor
public static final com.google.protobuf.Descriptors.Descriptor getDescriptor()
-
internalGetFieldAccessorTable
protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
- Specified by:
internalGetFieldAccessorTable in class com.google.protobuf.GeneratedMessageV3.Builder<InputDataConfig.Builder>
-
clear
public InputDataConfig.Builder clear()
- Specified by:
clear in interface com.google.protobuf.Message.Builder
- Specified by:
clear in interface com.google.protobuf.MessageLite.Builder
- Overrides:
clear in class com.google.protobuf.GeneratedMessageV3.Builder<InputDataConfig.Builder>
-
getDescriptorForType
public com.google.protobuf.Descriptors.Descriptor getDescriptorForType()
- Specified by:
getDescriptorForType in interface com.google.protobuf.Message.Builder
- Specified by:
getDescriptorForType in interface com.google.protobuf.MessageOrBuilder
- Overrides:
getDescriptorForType in class com.google.protobuf.GeneratedMessageV3.Builder<InputDataConfig.Builder>
-
getDefaultInstanceForType
public InputDataConfig getDefaultInstanceForType()
- Specified by:
getDefaultInstanceForType in interface com.google.protobuf.MessageLiteOrBuilder
- Specified by:
getDefaultInstanceForType in interface com.google.protobuf.MessageOrBuilder
-
build
public InputDataConfig build()
- Specified by:
build in interface com.google.protobuf.Message.Builder
- Specified by:
build in interface com.google.protobuf.MessageLite.Builder
-
buildPartial
public InputDataConfig buildPartial()
- Specified by:
buildPartial in interface com.google.protobuf.Message.Builder
- Specified by:
buildPartial in interface com.google.protobuf.MessageLite.Builder
-
clone
public InputDataConfig.Builder clone()
- Specified by:
clone in interface com.google.protobuf.Message.Builder
- Specified by:
clone in interface com.google.protobuf.MessageLite.Builder
- Overrides:
clone in class com.google.protobuf.GeneratedMessageV3.Builder<InputDataConfig.Builder>
-
setField
public InputDataConfig.Builder setField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
- Specified by:
setField in interface com.google.protobuf.Message.Builder
- Overrides:
setField in class com.google.protobuf.GeneratedMessageV3.Builder<InputDataConfig.Builder>
-
clearField
public InputDataConfig.Builder clearField(com.google.protobuf.Descriptors.FieldDescriptor field)
- Specified by:
clearField in interface com.google.protobuf.Message.Builder
- Overrides:
clearField in class com.google.protobuf.GeneratedMessageV3.Builder<InputDataConfig.Builder>
-
clearOneof
public InputDataConfig.Builder clearOneof(com.google.protobuf.Descriptors.OneofDescriptor oneof)
- Specified by:
clearOneof in interface com.google.protobuf.Message.Builder
- Overrides:
clearOneof in class com.google.protobuf.GeneratedMessageV3.Builder<InputDataConfig.Builder>
-
setRepeatedField
public InputDataConfig.Builder setRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)
- Specified by:
setRepeatedField in interface com.google.protobuf.Message.Builder
- Overrides:
setRepeatedField in class com.google.protobuf.GeneratedMessageV3.Builder<InputDataConfig.Builder>
-
addRepeatedField
public InputDataConfig.Builder addRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
- Specified by:
addRepeatedField in interface com.google.protobuf.Message.Builder
- Overrides:
addRepeatedField in class com.google.protobuf.GeneratedMessageV3.Builder<InputDataConfig.Builder>
-
mergeFrom
public InputDataConfig.Builder mergeFrom(com.google.protobuf.Message other)
- Specified by:
mergeFrom in interface com.google.protobuf.Message.Builder
- Overrides:
mergeFrom in class com.google.protobuf.AbstractMessage.Builder<InputDataConfig.Builder>
-
mergeFrom
public InputDataConfig.Builder mergeFrom(InputDataConfig other)
-
isInitialized
public final boolean isInitialized()
- Specified by:
isInitialized in interface com.google.protobuf.MessageLiteOrBuilder
- Overrides:
isInitialized in class com.google.protobuf.GeneratedMessageV3.Builder<InputDataConfig.Builder>
-
mergeFrom
public InputDataConfig.Builder mergeFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws IOException
- Specified by:
mergeFrom in interface com.google.protobuf.Message.Builder
- Specified by:
mergeFrom in interface com.google.protobuf.MessageLite.Builder
- Overrides:
mergeFrom in class com.google.protobuf.AbstractMessage.Builder<InputDataConfig.Builder>
- Throws:
IOException
-
getSplitCase
public InputDataConfig.SplitCase getSplitCase()
- Specified by:
getSplitCase in interface InputDataConfigOrBuilder
-
clearSplit
public InputDataConfig.Builder clearSplit()
-
getDestinationCase
public InputDataConfig.DestinationCase getDestinationCase()
- Specified by:
getDestinationCase in interface InputDataConfigOrBuilder
-
clearDestination
public InputDataConfig.Builder clearDestination()
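Because split and destination are proto oneofs, setting one member clears any previously set member, and getSplitCase()/getDestinationCase() report which member is active. A sketch of this last-set-wins behavior; the filter strings are placeholder values, not a prescribed syntax:

```java
import com.google.cloud.aiplatform.v1beta1.FilterSplit;
import com.google.cloud.aiplatform.v1beta1.FractionSplit;
import com.google.cloud.aiplatform.v1beta1.InputDataConfig;

public class SplitOneofSketch {
  public static void main(String[] args) {
    InputDataConfig.Builder builder =
        InputDataConfig.newBuilder()
            .setFractionSplit(FractionSplit.newBuilder().setTrainingFraction(0.8));
    // Setting another member of the `split` oneof replaces the fraction split.
    builder.setFilterSplit(
        FilterSplit.newBuilder()
            .setTrainingFilter("labels.ml_use=training")   // placeholder filter
            .setValidationFilter("labels.ml_use=validation")
            .setTestFilter("labels.ml_use=test"));
    System.out.println(builder.getSplitCase());      // FILTER_SPLIT
    System.out.println(builder.hasFractionSplit());  // false
    builder.clearSplit(); // resets the oneof to SPLIT_NOT_SET
  }
}
```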
-
hasFractionSplit
public boolean hasFractionSplit()
Split based on fractions defining the size of each set.
.google.cloud.aiplatform.v1beta1.FractionSplit fraction_split = 2;
- Specified by:
hasFractionSplit in interface InputDataConfigOrBuilder
- Returns:
- Whether the fractionSplit field is set.
-
getFractionSplit
public FractionSplit getFractionSplit()
Split based on fractions defining the size of each set.
.google.cloud.aiplatform.v1beta1.FractionSplit fraction_split = 2;
- Specified by:
getFractionSplit in interface InputDataConfigOrBuilder
- Returns:
- The fractionSplit.
-
setFractionSplit
public InputDataConfig.Builder setFractionSplit(FractionSplit value)
Split based on fractions defining the size of each set.
.google.cloud.aiplatform.v1beta1.FractionSplit fraction_split = 2;
-
setFractionSplit
public InputDataConfig.Builder setFractionSplit(FractionSplit.Builder builderForValue)
Split based on fractions defining the size of each set.
.google.cloud.aiplatform.v1beta1.FractionSplit fraction_split = 2;
-
mergeFractionSplit
public InputDataConfig.Builder mergeFractionSplit(FractionSplit value)
Split based on fractions defining the size of each set.
.google.cloud.aiplatform.v1beta1.FractionSplit fraction_split = 2;
-
clearFractionSplit
public InputDataConfig.Builder clearFractionSplit()
Split based on fractions defining the size of each set.
.google.cloud.aiplatform.v1beta1.FractionSplit fraction_split = 2;
-
getFractionSplitBuilder
public FractionSplit.Builder getFractionSplitBuilder()
Split based on fractions defining the size of each set.
.google.cloud.aiplatform.v1beta1.FractionSplit fraction_split = 2;
-
getFractionSplitOrBuilder
public FractionSplitOrBuilder getFractionSplitOrBuilder()
Split based on fractions defining the size of each set.
.google.cloud.aiplatform.v1beta1.FractionSplit fraction_split = 2;
- Specified by:
getFractionSplitOrBuilder in interface InputDataConfigOrBuilder
-
hasFilterSplit
public boolean hasFilterSplit()
Split based on the provided filters for each set.
.google.cloud.aiplatform.v1beta1.FilterSplit filter_split = 3;
- Specified by:
hasFilterSplit in interface InputDataConfigOrBuilder
- Returns:
- Whether the filterSplit field is set.
-
getFilterSplit
public FilterSplit getFilterSplit()
Split based on the provided filters for each set.
.google.cloud.aiplatform.v1beta1.FilterSplit filter_split = 3;
- Specified by:
getFilterSplit in interface InputDataConfigOrBuilder
- Returns:
- The filterSplit.
-
setFilterSplit
public InputDataConfig.Builder setFilterSplit(FilterSplit value)
Split based on the provided filters for each set.
.google.cloud.aiplatform.v1beta1.FilterSplit filter_split = 3;
-
setFilterSplit
public InputDataConfig.Builder setFilterSplit(FilterSplit.Builder builderForValue)
Split based on the provided filters for each set.
.google.cloud.aiplatform.v1beta1.FilterSplit filter_split = 3;
-
mergeFilterSplit
public InputDataConfig.Builder mergeFilterSplit(FilterSplit value)
Split based on the provided filters for each set.
.google.cloud.aiplatform.v1beta1.FilterSplit filter_split = 3;
-
clearFilterSplit
public InputDataConfig.Builder clearFilterSplit()
Split based on the provided filters for each set.
.google.cloud.aiplatform.v1beta1.FilterSplit filter_split = 3;
-
getFilterSplitBuilder
public FilterSplit.Builder getFilterSplitBuilder()
Split based on the provided filters for each set.
.google.cloud.aiplatform.v1beta1.FilterSplit filter_split = 3;
-
getFilterSplitOrBuilder
public FilterSplitOrBuilder getFilterSplitOrBuilder()
Split based on the provided filters for each set.
.google.cloud.aiplatform.v1beta1.FilterSplit filter_split = 3;
- Specified by:
getFilterSplitOrBuilder in interface InputDataConfigOrBuilder
-
hasPredefinedSplit
public boolean hasPredefinedSplit()
Supported only for tabular Datasets. Split based on a predefined key.
.google.cloud.aiplatform.v1beta1.PredefinedSplit predefined_split = 4;
- Specified by:
hasPredefinedSplit in interface InputDataConfigOrBuilder
- Returns:
- Whether the predefinedSplit field is set.
-
getPredefinedSplit
public PredefinedSplit getPredefinedSplit()
Supported only for tabular Datasets. Split based on a predefined key.
.google.cloud.aiplatform.v1beta1.PredefinedSplit predefined_split = 4;
- Specified by:
getPredefinedSplit in interface InputDataConfigOrBuilder
- Returns:
- The predefinedSplit.
-
setPredefinedSplit
public InputDataConfig.Builder setPredefinedSplit(PredefinedSplit value)
Supported only for tabular Datasets. Split based on a predefined key.
.google.cloud.aiplatform.v1beta1.PredefinedSplit predefined_split = 4;
-
setPredefinedSplit
public InputDataConfig.Builder setPredefinedSplit(PredefinedSplit.Builder builderForValue)
Supported only for tabular Datasets. Split based on a predefined key.
.google.cloud.aiplatform.v1beta1.PredefinedSplit predefined_split = 4;
-
mergePredefinedSplit
public InputDataConfig.Builder mergePredefinedSplit(PredefinedSplit value)
Supported only for tabular Datasets. Split based on a predefined key.
.google.cloud.aiplatform.v1beta1.PredefinedSplit predefined_split = 4;
-
clearPredefinedSplit
public InputDataConfig.Builder clearPredefinedSplit()
Supported only for tabular Datasets. Split based on a predefined key.
.google.cloud.aiplatform.v1beta1.PredefinedSplit predefined_split = 4;
-
getPredefinedSplitBuilder
public PredefinedSplit.Builder getPredefinedSplitBuilder()
Supported only for tabular Datasets. Split based on a predefined key.
.google.cloud.aiplatform.v1beta1.PredefinedSplit predefined_split = 4;
-
getPredefinedSplitOrBuilder
public PredefinedSplitOrBuilder getPredefinedSplitOrBuilder()
Supported only for tabular Datasets. Split based on a predefined key.
.google.cloud.aiplatform.v1beta1.PredefinedSplit predefined_split = 4;
- Specified by:
getPredefinedSplitOrBuilder in interface InputDataConfigOrBuilder
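For tabular data, a predefined split keys off a column in the Dataset whose per-row value assigns that row to a set. A sketch, assuming a hypothetical column named ml_use:

```java
import com.google.cloud.aiplatform.v1beta1.InputDataConfig;
import com.google.cloud.aiplatform.v1beta1.PredefinedSplit;

public class PredefinedSplitSketch {
  public static void main(String[] args) {
    // `ml_use` is a hypothetical column name in the tabular Dataset.
    InputDataConfig config =
        InputDataConfig.newBuilder()
            .setDatasetId("1234567890") // placeholder dataset ID
            .setPredefinedSplit(PredefinedSplit.newBuilder().setKey("ml_use"))
            .build();
    System.out.println(config.getSplitCase()); // PREDEFINED_SPLIT
  }
}
```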
-
hasTimestampSplit
public boolean hasTimestampSplit()
Supported only for tabular Datasets. Split based on the timestamp of the input data pieces.
.google.cloud.aiplatform.v1beta1.TimestampSplit timestamp_split = 5;
- Specified by:
hasTimestampSplit in interface InputDataConfigOrBuilder
- Returns:
- Whether the timestampSplit field is set.
-
getTimestampSplit
public TimestampSplit getTimestampSplit()
Supported only for tabular Datasets. Split based on the timestamp of the input data pieces.
.google.cloud.aiplatform.v1beta1.TimestampSplit timestamp_split = 5;
- Specified by:
getTimestampSplit in interface InputDataConfigOrBuilder
- Returns:
- The timestampSplit.
-
setTimestampSplit
public InputDataConfig.Builder setTimestampSplit(TimestampSplit value)
Supported only for tabular Datasets. Split based on the timestamp of the input data pieces.
.google.cloud.aiplatform.v1beta1.TimestampSplit timestamp_split = 5;
-
setTimestampSplit
public InputDataConfig.Builder setTimestampSplit(TimestampSplit.Builder builderForValue)
Supported only for tabular Datasets. Split based on the timestamp of the input data pieces.
.google.cloud.aiplatform.v1beta1.TimestampSplit timestamp_split = 5;
-
mergeTimestampSplit
public InputDataConfig.Builder mergeTimestampSplit(TimestampSplit value)
Supported only for tabular Datasets. Split based on the timestamp of the input data pieces.
.google.cloud.aiplatform.v1beta1.TimestampSplit timestamp_split = 5;
-
clearTimestampSplit
public InputDataConfig.Builder clearTimestampSplit()
Supported only for tabular Datasets. Split based on the timestamp of the input data pieces.
.google.cloud.aiplatform.v1beta1.TimestampSplit timestamp_split = 5;
-
getTimestampSplitBuilder
public TimestampSplit.Builder getTimestampSplitBuilder()
Supported only for tabular Datasets. Split based on the timestamp of the input data pieces.
.google.cloud.aiplatform.v1beta1.TimestampSplit timestamp_split = 5;
-
getTimestampSplitOrBuilder
public TimestampSplitOrBuilder getTimestampSplitOrBuilder()
Supported only for tabular Datasets. Split based on the timestamp of the input data pieces.
.google.cloud.aiplatform.v1beta1.TimestampSplit timestamp_split = 5;
- Specified by:
getTimestampSplitOrBuilder in interface InputDataConfigOrBuilder
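A timestamp split orders tabular rows by the values in a timestamp column and then applies fraction fields like a fraction split. A sketch, where the event_time column name is a placeholder:

```java
import com.google.cloud.aiplatform.v1beta1.InputDataConfig;
import com.google.cloud.aiplatform.v1beta1.TimestampSplit;

public class TimestampSplitSketch {
  public static void main(String[] args) {
    // `event_time` is a hypothetical timestamp column in the tabular Dataset.
    InputDataConfig config =
        InputDataConfig.newBuilder()
            .setTimestampSplit(
                TimestampSplit.newBuilder()
                    .setTrainingFraction(0.8)
                    .setValidationFraction(0.1)
                    .setTestFraction(0.1)
                    .setKey("event_time"))
            .build();
    System.out.println(config.getSplitCase()); // TIMESTAMP_SPLIT
  }
}
```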
-
hasStratifiedSplit
public boolean hasStratifiedSplit()
Supported only for tabular Datasets. Split based on the distribution of the specified column.
.google.cloud.aiplatform.v1beta1.StratifiedSplit stratified_split = 12;
- Specified by:
hasStratifiedSplit in interface InputDataConfigOrBuilder
- Returns:
- Whether the stratifiedSplit field is set.
-
getStratifiedSplit
public StratifiedSplit getStratifiedSplit()
Supported only for tabular Datasets. Split based on the distribution of the specified column.
.google.cloud.aiplatform.v1beta1.StratifiedSplit stratified_split = 12;
- Specified by:
getStratifiedSplit in interface InputDataConfigOrBuilder
- Returns:
- The stratifiedSplit.
-
setStratifiedSplit
public InputDataConfig.Builder setStratifiedSplit(StratifiedSplit value)
Supported only for tabular Datasets. Split based on the distribution of the specified column.
.google.cloud.aiplatform.v1beta1.StratifiedSplit stratified_split = 12;
-
setStratifiedSplit
public InputDataConfig.Builder setStratifiedSplit(StratifiedSplit.Builder builderForValue)
Supported only for tabular Datasets. Split based on the distribution of the specified column.
.google.cloud.aiplatform.v1beta1.StratifiedSplit stratified_split = 12;
-
mergeStratifiedSplit
public InputDataConfig.Builder mergeStratifiedSplit(StratifiedSplit value)
Supported only for tabular Datasets. Split based on the distribution of the specified column.
.google.cloud.aiplatform.v1beta1.StratifiedSplit stratified_split = 12;
-
clearStratifiedSplit
public InputDataConfig.Builder clearStratifiedSplit()
Supported only for tabular Datasets. Split based on the distribution of the specified column.
.google.cloud.aiplatform.v1beta1.StratifiedSplit stratified_split = 12;
-
getStratifiedSplitBuilder
public StratifiedSplit.Builder getStratifiedSplitBuilder()
Supported only for tabular Datasets. Split based on the distribution of the specified column.
.google.cloud.aiplatform.v1beta1.StratifiedSplit stratified_split = 12;
-
getStratifiedSplitOrBuilder
public StratifiedSplitOrBuilder getStratifiedSplitOrBuilder()
Supported only for tabular Datasets. Split based on the distribution of the specified column.
.google.cloud.aiplatform.v1beta1.StratifiedSplit stratified_split = 12;
- Specified by:
getStratifiedSplitOrBuilder in interface InputDataConfigOrBuilder
-
hasGcsDestination
public boolean hasGcsDestination()
The Cloud Storage location where the training data is to be written to. In the given directory a new directory is created with the name `dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>`, where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. All training input data is written into that directory. The Vertex AI environment variables representing Cloud Storage data URIs are represented in the Cloud Storage wildcard format to support sharded data, e.g. "gs://.../training-*.jsonl":
* AIP_DATA_FORMAT = "jsonl" for non-tabular data, "csv" for tabular data
* AIP_TRAINING_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/training-*.${AIP_DATA_FORMAT}"
* AIP_VALIDATION_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/validation-*.${AIP_DATA_FORMAT}"
* AIP_TEST_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/test-*.${AIP_DATA_FORMAT}"
.google.cloud.aiplatform.v1beta1.GcsDestination gcs_destination = 8;
- Specified by:
hasGcsDestination in interface InputDataConfigOrBuilder
- Returns:
- Whether the gcsDestination field is set.
-
getGcsDestination
public GcsDestination getGcsDestination()
The Cloud Storage location where the training data is to be written to. In the given directory a new directory is created with the name `dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>`, where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. All training input data is written into that directory. The Vertex AI environment variables representing Cloud Storage data URIs are represented in the Cloud Storage wildcard format to support sharded data, e.g. "gs://.../training-*.jsonl":
* AIP_DATA_FORMAT = "jsonl" for non-tabular data, "csv" for tabular data
* AIP_TRAINING_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/training-*.${AIP_DATA_FORMAT}"
* AIP_VALIDATION_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/validation-*.${AIP_DATA_FORMAT}"
* AIP_TEST_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/test-*.${AIP_DATA_FORMAT}"
.google.cloud.aiplatform.v1beta1.GcsDestination gcs_destination = 8;
- Specified by:
getGcsDestination in interface InputDataConfigOrBuilder
- Returns:
- The gcsDestination.
-
setGcsDestination
public InputDataConfig.Builder setGcsDestination(GcsDestination value)
The Cloud Storage location where the training data is to be written to. In the given directory a new directory is created with the name `dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>`, where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. All training input data is written into that directory. The Vertex AI environment variables representing Cloud Storage data URIs are represented in the Cloud Storage wildcard format to support sharded data, e.g. "gs://.../training-*.jsonl":
* AIP_DATA_FORMAT = "jsonl" for non-tabular data, "csv" for tabular data
* AIP_TRAINING_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/training-*.${AIP_DATA_FORMAT}"
* AIP_VALIDATION_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/validation-*.${AIP_DATA_FORMAT}"
* AIP_TEST_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/test-*.${AIP_DATA_FORMAT}"
.google.cloud.aiplatform.v1beta1.GcsDestination gcs_destination = 8;
-
setGcsDestination
public InputDataConfig.Builder setGcsDestination(GcsDestination.Builder builderForValue)
The Cloud Storage location where the training data is to be written to. In the given directory a new directory is created with the name `dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>`, where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. All training input data is written into that directory. The Vertex AI environment variables representing Cloud Storage data URIs are represented in the Cloud Storage wildcard format to support sharded data, e.g. "gs://.../training-*.jsonl":
* AIP_DATA_FORMAT = "jsonl" for non-tabular data, "csv" for tabular data
* AIP_TRAINING_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/training-*.${AIP_DATA_FORMAT}"
* AIP_VALIDATION_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/validation-*.${AIP_DATA_FORMAT}"
* AIP_TEST_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/test-*.${AIP_DATA_FORMAT}"
.google.cloud.aiplatform.v1beta1.GcsDestination gcs_destination = 8;
-
mergeGcsDestination
public InputDataConfig.Builder mergeGcsDestination(GcsDestination value)
The Cloud Storage location where the training data is to be written to. In the given directory a new directory is created with the name `dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>`, where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. All training input data is written into that directory. The Vertex AI environment variables representing Cloud Storage data URIs are represented in the Cloud Storage wildcard format to support sharded data, e.g. "gs://.../training-*.jsonl":
* AIP_DATA_FORMAT = "jsonl" for non-tabular data, "csv" for tabular data
* AIP_TRAINING_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/training-*.${AIP_DATA_FORMAT}"
* AIP_VALIDATION_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/validation-*.${AIP_DATA_FORMAT}"
* AIP_TEST_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/test-*.${AIP_DATA_FORMAT}"
.google.cloud.aiplatform.v1beta1.GcsDestination gcs_destination = 8;
-
clearGcsDestination
public InputDataConfig.Builder clearGcsDestination()
The Cloud Storage location where the training data is to be written to. In the given directory a new directory is created with name: `dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>` where timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. All training input data is written into that directory. The Vertex AI environment variables representing Cloud Storage data URIs are represented in the Cloud Storage wildcard format to support sharded data, e.g.: "gs://.../training-*.jsonl"
* AIP_DATA_FORMAT = "jsonl" for non-tabular data, "csv" for tabular data
* AIP_TRAINING_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/training-*.${AIP_DATA_FORMAT}"
* AIP_VALIDATION_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/validation-*.${AIP_DATA_FORMAT}"
* AIP_TEST_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/test-*.${AIP_DATA_FORMAT}"
.google.cloud.aiplatform.v1beta1.GcsDestination gcs_destination = 8;
-
getGcsDestinationBuilder
public GcsDestination.Builder getGcsDestinationBuilder()
The Cloud Storage location where the training data is to be written to. In the given directory a new directory is created with name: `dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>` where timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. All training input data is written into that directory. The Vertex AI environment variables representing Cloud Storage data URIs are represented in the Cloud Storage wildcard format to support sharded data, e.g.: "gs://.../training-*.jsonl"
* AIP_DATA_FORMAT = "jsonl" for non-tabular data, "csv" for tabular data
* AIP_TRAINING_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/training-*.${AIP_DATA_FORMAT}"
* AIP_VALIDATION_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/validation-*.${AIP_DATA_FORMAT}"
* AIP_TEST_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/test-*.${AIP_DATA_FORMAT}"
.google.cloud.aiplatform.v1beta1.GcsDestination gcs_destination = 8;
-
getGcsDestinationOrBuilder
public GcsDestinationOrBuilder getGcsDestinationOrBuilder()
The Cloud Storage location where the training data is to be written to. In the given directory a new directory is created with name: `dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>` where timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. All training input data is written into that directory. The Vertex AI environment variables representing Cloud Storage data URIs are represented in the Cloud Storage wildcard format to support sharded data, e.g.: "gs://.../training-*.jsonl"
* AIP_DATA_FORMAT = "jsonl" for non-tabular data, "csv" for tabular data
* AIP_TRAINING_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/training-*.${AIP_DATA_FORMAT}"
* AIP_VALIDATION_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/validation-*.${AIP_DATA_FORMAT}"
* AIP_TEST_DATA_URI = "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/test-*.${AIP_DATA_FORMAT}"
.google.cloud.aiplatform.v1beta1.GcsDestination gcs_destination = 8;
- Specified by:
getGcsDestinationOrBuilder in interface InputDataConfigOrBuilder
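A minimal sketch of how the `gcs_destination` accessors above are typically used. The bucket path is a placeholder, and `GcsDestination.Builder.setOutputUriPrefix` is assumed from the companion message's builder:

```java
import com.google.cloud.aiplatform.v1beta1.GcsDestination;
import com.google.cloud.aiplatform.v1beta1.InputDataConfig;

public class GcsDestinationSketch {
    public static void main(String[] args) {
        // Hypothetical bucket; Vertex AI creates the
        // dataset-<dataset-id>-<annotation-type>-<timestamp> directory under it.
        GcsDestination gcs = GcsDestination.newBuilder()
                .setOutputUriPrefix("gs://my-bucket/training-output/")
                .build();

        InputDataConfig config = InputDataConfig.newBuilder()
                .setGcsDestination(gcs) // selects gcs_destination in the destination oneof
                .build();

        System.out.println(config.hasGcsDestination());
    }
}
```

Because `gcs_destination` and `bigquery_destination` share a oneof, setting one later clears the other.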
-
hasBigqueryDestination
public boolean hasBigqueryDestination()
Only applicable to custom training with tabular Dataset with BigQuery source. The BigQuery project location where the training data is to be written to. In the given project a new dataset is created with name `dataset_<dataset-id>_<annotation-type>_<timestamp-of-training-call>` where timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All training input data is written into that dataset. In the dataset three tables are created, `training`, `validation` and `test`.
* AIP_DATA_FORMAT = "bigquery"
* AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.training"
* AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.validation"
* AIP_TEST_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.test"
.google.cloud.aiplatform.v1beta1.BigQueryDestination bigquery_destination = 10;
- Specified by:
hasBigqueryDestination in interface InputDataConfigOrBuilder
- Returns:
- Whether the bigqueryDestination field is set.
-
getBigqueryDestination
public BigQueryDestination getBigqueryDestination()
Only applicable to custom training with tabular Dataset with BigQuery source. The BigQuery project location where the training data is to be written to. In the given project a new dataset is created with name `dataset_<dataset-id>_<annotation-type>_<timestamp-of-training-call>` where timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All training input data is written into that dataset. In the dataset three tables are created, `training`, `validation` and `test`.
* AIP_DATA_FORMAT = "bigquery"
* AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.training"
* AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.validation"
* AIP_TEST_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.test"
.google.cloud.aiplatform.v1beta1.BigQueryDestination bigquery_destination = 10;
- Specified by:
getBigqueryDestination in interface InputDataConfigOrBuilder
- Returns:
- The bigqueryDestination.
-
setBigqueryDestination
public InputDataConfig.Builder setBigqueryDestination(BigQueryDestination value)
Only applicable to custom training with tabular Dataset with BigQuery source. The BigQuery project location where the training data is to be written to. In the given project a new dataset is created with name `dataset_<dataset-id>_<annotation-type>_<timestamp-of-training-call>` where timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All training input data is written into that dataset. In the dataset three tables are created, `training`, `validation` and `test`.
* AIP_DATA_FORMAT = "bigquery"
* AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.training"
* AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.validation"
* AIP_TEST_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.test"
.google.cloud.aiplatform.v1beta1.BigQueryDestination bigquery_destination = 10;
-
setBigqueryDestination
public InputDataConfig.Builder setBigqueryDestination(BigQueryDestination.Builder builderForValue)
Only applicable to custom training with tabular Dataset with BigQuery source. The BigQuery project location where the training data is to be written to. In the given project a new dataset is created with name `dataset_<dataset-id>_<annotation-type>_<timestamp-of-training-call>` where timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All training input data is written into that dataset. In the dataset three tables are created, `training`, `validation` and `test`.
* AIP_DATA_FORMAT = "bigquery"
* AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.training"
* AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.validation"
* AIP_TEST_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.test"
.google.cloud.aiplatform.v1beta1.BigQueryDestination bigquery_destination = 10;
-
mergeBigqueryDestination
public InputDataConfig.Builder mergeBigqueryDestination(BigQueryDestination value)
Only applicable to custom training with tabular Dataset with BigQuery source. The BigQuery project location where the training data is to be written to. In the given project a new dataset is created with name `dataset_<dataset-id>_<annotation-type>_<timestamp-of-training-call>` where timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All training input data is written into that dataset. In the dataset three tables are created, `training`, `validation` and `test`.
* AIP_DATA_FORMAT = "bigquery"
* AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.training"
* AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.validation"
* AIP_TEST_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.test"
.google.cloud.aiplatform.v1beta1.BigQueryDestination bigquery_destination = 10;
-
clearBigqueryDestination
public InputDataConfig.Builder clearBigqueryDestination()
Only applicable to custom training with tabular Dataset with BigQuery source. The BigQuery project location where the training data is to be written to. In the given project a new dataset is created with name `dataset_<dataset-id>_<annotation-type>_<timestamp-of-training-call>` where timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All training input data is written into that dataset. In the dataset three tables are created, `training`, `validation` and `test`.
* AIP_DATA_FORMAT = "bigquery"
* AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.training"
* AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.validation"
* AIP_TEST_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.test"
.google.cloud.aiplatform.v1beta1.BigQueryDestination bigquery_destination = 10;
-
getBigqueryDestinationBuilder
public BigQueryDestination.Builder getBigqueryDestinationBuilder()
Only applicable to custom training with tabular Dataset with BigQuery source. The BigQuery project location where the training data is to be written to. In the given project a new dataset is created with name `dataset_<dataset-id>_<annotation-type>_<timestamp-of-training-call>` where timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All training input data is written into that dataset. In the dataset three tables are created, `training`, `validation` and `test`.
* AIP_DATA_FORMAT = "bigquery"
* AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.training"
* AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.validation"
* AIP_TEST_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.test"
.google.cloud.aiplatform.v1beta1.BigQueryDestination bigquery_destination = 10;
-
getBigqueryDestinationOrBuilder
public BigQueryDestinationOrBuilder getBigqueryDestinationOrBuilder()
Only applicable to custom training with tabular Dataset with BigQuery source. The BigQuery project location where the training data is to be written to. In the given project a new dataset is created with name `dataset_<dataset-id>_<annotation-type>_<timestamp-of-training-call>` where timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All training input data is written into that dataset. In the dataset three tables are created, `training`, `validation` and `test`.
* AIP_DATA_FORMAT = "bigquery"
* AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.training"
* AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.validation"
* AIP_TEST_DATA_URI = "bigquery_destination.dataset_<dataset-id>_<annotation-type>_<time>.test"
.google.cloud.aiplatform.v1beta1.BigQueryDestination bigquery_destination = 10;
- Specified by:
getBigqueryDestinationOrBuilder in interface InputDataConfigOrBuilder
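A minimal sketch of setting the `bigquery_destination` member. The project name is a placeholder, and `BigQueryDestination.Builder.setOutputUri` is assumed from the companion message's builder:

```java
import com.google.cloud.aiplatform.v1beta1.BigQueryDestination;
import com.google.cloud.aiplatform.v1beta1.InputDataConfig;

public class BigQueryDestinationSketch {
    public static void main(String[] args) {
        // Hypothetical project; Vertex AI creates the
        // dataset_<dataset-id>_<annotation-type>_<timestamp> BigQuery dataset in it,
        // containing the training, validation and test tables.
        BigQueryDestination bq = BigQueryDestination.newBuilder()
                .setOutputUri("bq://my-project")
                .build();

        InputDataConfig config = InputDataConfig.newBuilder()
                .setBigqueryDestination(bq) // replaces any previously set gcs_destination
                .build();

        System.out.println(config.hasBigqueryDestination());
    }
}
```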
-
getDatasetId
public String getDatasetId()
Required. The ID of the Dataset in the same Project and Location whose data will be used to train the Model. The Dataset must use a schema compatible with the Model being trained; what is compatible should be described in the used TrainingPipeline's [training_task_definition] [google.cloud.aiplatform.v1beta1.TrainingPipeline.training_task_definition]. For tabular Datasets, all of their data is exported for training, to pick and choose from.
string dataset_id = 1 [(.google.api.field_behavior) = REQUIRED];
- Specified by:
getDatasetId in interface InputDataConfigOrBuilder
- Returns:
- The datasetId.
-
getDatasetIdBytes
public com.google.protobuf.ByteString getDatasetIdBytes()
Required. The ID of the Dataset in the same Project and Location whose data will be used to train the Model. The Dataset must use a schema compatible with the Model being trained; what is compatible should be described in the used TrainingPipeline's [training_task_definition] [google.cloud.aiplatform.v1beta1.TrainingPipeline.training_task_definition]. For tabular Datasets, all of their data is exported for training, to pick and choose from.
string dataset_id = 1 [(.google.api.field_behavior) = REQUIRED];
- Specified by:
getDatasetIdBytes in interface InputDataConfigOrBuilder
- Returns:
- The bytes for datasetId.
-
setDatasetId
public InputDataConfig.Builder setDatasetId(String value)
Required. The ID of the Dataset in the same Project and Location whose data will be used to train the Model. The Dataset must use a schema compatible with the Model being trained; what is compatible should be described in the used TrainingPipeline's [training_task_definition] [google.cloud.aiplatform.v1beta1.TrainingPipeline.training_task_definition]. For tabular Datasets, all of their data is exported for training, to pick and choose from.
string dataset_id = 1 [(.google.api.field_behavior) = REQUIRED];
- Parameters:
value - The datasetId to set.
- Returns:
- This builder for chaining.
-
clearDatasetId
public InputDataConfig.Builder clearDatasetId()
Required. The ID of the Dataset in the same Project and Location whose data will be used to train the Model. The Dataset must use a schema compatible with the Model being trained; what is compatible should be described in the used TrainingPipeline's [training_task_definition] [google.cloud.aiplatform.v1beta1.TrainingPipeline.training_task_definition]. For tabular Datasets, all of their data is exported for training, to pick and choose from.
string dataset_id = 1 [(.google.api.field_behavior) = REQUIRED];
- Returns:
- This builder for chaining.
-
setDatasetIdBytes
public InputDataConfig.Builder setDatasetIdBytes(com.google.protobuf.ByteString value)
Required. The ID of the Dataset in the same Project and Location whose data will be used to train the Model. The Dataset must use a schema compatible with the Model being trained; what is compatible should be described in the used TrainingPipeline's [training_task_definition] [google.cloud.aiplatform.v1beta1.TrainingPipeline.training_task_definition]. For tabular Datasets, all of their data is exported for training, to pick and choose from.
string dataset_id = 1 [(.google.api.field_behavior) = REQUIRED];
- Parameters:
value - The bytes for datasetId to set.
- Returns:
- This builder for chaining.
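A short sketch of the `dataset_id` accessors above. The ID is a placeholder:

```java
import com.google.cloud.aiplatform.v1beta1.InputDataConfig;
import com.google.protobuf.ByteString;

public class DatasetIdSketch {
    public static void main(String[] args) {
        // "1234567890" is a placeholder dataset ID.
        InputDataConfig.Builder builder = InputDataConfig.newBuilder()
                .setDatasetId("1234567890");

        // setDatasetIdBytes is the raw-bytes variant of the same field.
        builder.setDatasetIdBytes(ByteString.copyFromUtf8("1234567890"));

        // clearDatasetId resets the field to its default (empty string).
        builder.clearDatasetId();
        System.out.println(builder.getDatasetId().isEmpty());
    }
}
```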
-
getAnnotationsFilter
public String getAnnotationsFilter()
Applicable only to Datasets that have DataItems and Annotations. A filter on Annotations of the Dataset. Only Annotations that both match this filter and belong to DataItems not ignored by the split method are used in the training, validation, or test role, respectively, depending on the role of the DataItem they are on (for the auto-assigned ones, that role is decided by Vertex AI). A filter with the same syntax as the one used in [ListAnnotations][google.cloud.aiplatform.v1beta1.DatasetService.ListAnnotations] may be used, but note that here it filters across all Annotations of the Dataset, and not just within a single DataItem.
string annotations_filter = 6;
- Specified by:
getAnnotationsFilter in interface InputDataConfigOrBuilder
- Returns:
- The annotationsFilter.
-
getAnnotationsFilterBytes
public com.google.protobuf.ByteString getAnnotationsFilterBytes()
Applicable only to Datasets that have DataItems and Annotations. A filter on Annotations of the Dataset. Only Annotations that both match this filter and belong to DataItems not ignored by the split method are used in the training, validation, or test role, respectively, depending on the role of the DataItem they are on (for the auto-assigned ones, that role is decided by Vertex AI). A filter with the same syntax as the one used in [ListAnnotations][google.cloud.aiplatform.v1beta1.DatasetService.ListAnnotations] may be used, but note that here it filters across all Annotations of the Dataset, and not just within a single DataItem.
string annotations_filter = 6;
- Specified by:
getAnnotationsFilterBytes in interface InputDataConfigOrBuilder
- Returns:
- The bytes for annotationsFilter.
-
setAnnotationsFilter
public InputDataConfig.Builder setAnnotationsFilter(String value)
Applicable only to Datasets that have DataItems and Annotations. A filter on Annotations of the Dataset. Only Annotations that both match this filter and belong to DataItems not ignored by the split method are used in the training, validation, or test role, respectively, depending on the role of the DataItem they are on (for the auto-assigned ones, that role is decided by Vertex AI). A filter with the same syntax as the one used in [ListAnnotations][google.cloud.aiplatform.v1beta1.DatasetService.ListAnnotations] may be used, but note that here it filters across all Annotations of the Dataset, and not just within a single DataItem.
string annotations_filter = 6;
- Parameters:
value - The annotationsFilter to set.
- Returns:
- This builder for chaining.
-
clearAnnotationsFilter
public InputDataConfig.Builder clearAnnotationsFilter()
Applicable only to Datasets that have DataItems and Annotations. A filter on Annotations of the Dataset. Only Annotations that both match this filter and belong to DataItems not ignored by the split method are used in the training, validation, or test role, respectively, depending on the role of the DataItem they are on (for the auto-assigned ones, that role is decided by Vertex AI). A filter with the same syntax as the one used in [ListAnnotations][google.cloud.aiplatform.v1beta1.DatasetService.ListAnnotations] may be used, but note that here it filters across all Annotations of the Dataset, and not just within a single DataItem.
string annotations_filter = 6;
- Returns:
- This builder for chaining.
-
setAnnotationsFilterBytes
public InputDataConfig.Builder setAnnotationsFilterBytes(com.google.protobuf.ByteString value)
Applicable only to Datasets that have DataItems and Annotations. A filter on Annotations of the Dataset. Only Annotations that both match this filter and belong to DataItems not ignored by the split method are used in the training, validation, or test role, respectively, depending on the role of the DataItem they are on (for the auto-assigned ones, that role is decided by Vertex AI). A filter with the same syntax as the one used in [ListAnnotations][google.cloud.aiplatform.v1beta1.DatasetService.ListAnnotations] may be used, but note that here it filters across all Annotations of the Dataset, and not just within a single DataItem.
string annotations_filter = 6;
- Parameters:
value - The bytes for annotationsFilter to set.
- Returns:
- This builder for chaining.
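A minimal sketch of setting `annotations_filter`. The filter expression below is purely illustrative, not a documented example of the ListAnnotations filter syntax:

```java
import com.google.cloud.aiplatform.v1beta1.InputDataConfig;

public class AnnotationsFilterSketch {
    public static void main(String[] args) {
        // Hypothetical filter; real filters use the ListAnnotations syntax
        // and apply across all Annotations of the Dataset.
        InputDataConfig config = InputDataConfig.newBuilder()
                .setDatasetId("1234567890") // placeholder ID
                .setAnnotationsFilter("labels.my-key=\"my-value\"")
                .build();

        System.out.println(config.getAnnotationsFilter());
    }
}
```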
-
getAnnotationSchemaUri
public String getAnnotationSchemaUri()
Applicable only to custom training with Datasets that have DataItems and Annotations. Cloud Storage URI that points to a YAML file describing the annotation schema. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/ ; note that the chosen schema must be consistent with the [metadata][google.cloud.aiplatform.v1beta1.Dataset.metadata_schema_uri] of the Dataset specified by [dataset_id][google.cloud.aiplatform.v1beta1.InputDataConfig.dataset_id]. Only Annotations that both match this schema and belong to DataItems not ignored by the split method are used in the training, validation, or test role, respectively, depending on the role of the DataItem they are on. When used in conjunction with [annotations_filter][google.cloud.aiplatform.v1beta1.InputDataConfig.annotations_filter], the Annotations used for training are filtered by both [annotations_filter][google.cloud.aiplatform.v1beta1.InputDataConfig.annotations_filter] and [annotation_schema_uri][google.cloud.aiplatform.v1beta1.InputDataConfig.annotation_schema_uri].
string annotation_schema_uri = 9;
- Specified by:
getAnnotationSchemaUri in interface InputDataConfigOrBuilder
- Returns:
- The annotationSchemaUri.
-
getAnnotationSchemaUriBytes
public com.google.protobuf.ByteString getAnnotationSchemaUriBytes()
Applicable only to custom training with Datasets that have DataItems and Annotations. Cloud Storage URI that points to a YAML file describing the annotation schema. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/ ; note that the chosen schema must be consistent with the [metadata][google.cloud.aiplatform.v1beta1.Dataset.metadata_schema_uri] of the Dataset specified by [dataset_id][google.cloud.aiplatform.v1beta1.InputDataConfig.dataset_id]. Only Annotations that both match this schema and belong to DataItems not ignored by the split method are used in the training, validation, or test role, respectively, depending on the role of the DataItem they are on. When used in conjunction with [annotations_filter][google.cloud.aiplatform.v1beta1.InputDataConfig.annotations_filter], the Annotations used for training are filtered by both [annotations_filter][google.cloud.aiplatform.v1beta1.InputDataConfig.annotations_filter] and [annotation_schema_uri][google.cloud.aiplatform.v1beta1.InputDataConfig.annotation_schema_uri].
string annotation_schema_uri = 9;
- Specified by:
getAnnotationSchemaUriBytes in interface InputDataConfigOrBuilder
- Returns:
- The bytes for annotationSchemaUri.
-
setAnnotationSchemaUri
public InputDataConfig.Builder setAnnotationSchemaUri(String value)
Applicable only to custom training with Datasets that have DataItems and Annotations. Cloud Storage URI that points to a YAML file describing the annotation schema. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/ ; note that the chosen schema must be consistent with the [metadata][google.cloud.aiplatform.v1beta1.Dataset.metadata_schema_uri] of the Dataset specified by [dataset_id][google.cloud.aiplatform.v1beta1.InputDataConfig.dataset_id]. Only Annotations that both match this schema and belong to DataItems not ignored by the split method are used in the training, validation, or test role, respectively, depending on the role of the DataItem they are on. When used in conjunction with [annotations_filter][google.cloud.aiplatform.v1beta1.InputDataConfig.annotations_filter], the Annotations used for training are filtered by both [annotations_filter][google.cloud.aiplatform.v1beta1.InputDataConfig.annotations_filter] and [annotation_schema_uri][google.cloud.aiplatform.v1beta1.InputDataConfig.annotation_schema_uri].
string annotation_schema_uri = 9;
- Parameters:
value - The annotationSchemaUri to set.
- Returns:
- This builder for chaining.
-
clearAnnotationSchemaUri
public InputDataConfig.Builder clearAnnotationSchemaUri()
Applicable only to custom training with Datasets that have DataItems and Annotations. Cloud Storage URI that points to a YAML file describing the annotation schema. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/ ; note that the chosen schema must be consistent with the [metadata][google.cloud.aiplatform.v1beta1.Dataset.metadata_schema_uri] of the Dataset specified by [dataset_id][google.cloud.aiplatform.v1beta1.InputDataConfig.dataset_id]. Only Annotations that both match this schema and belong to DataItems not ignored by the split method are used in the training, validation, or test role, respectively, depending on the role of the DataItem they are on. When used in conjunction with [annotations_filter][google.cloud.aiplatform.v1beta1.InputDataConfig.annotations_filter], the Annotations used for training are filtered by both [annotations_filter][google.cloud.aiplatform.v1beta1.InputDataConfig.annotations_filter] and [annotation_schema_uri][google.cloud.aiplatform.v1beta1.InputDataConfig.annotation_schema_uri].
string annotation_schema_uri = 9;
- Returns:
- This builder for chaining.
-
setAnnotationSchemaUriBytes
public InputDataConfig.Builder setAnnotationSchemaUriBytes(com.google.protobuf.ByteString value)
Applicable only to custom training with Datasets that have DataItems and Annotations. Cloud Storage URI that points to a YAML file describing the annotation schema. The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject). The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/ ; note that the chosen schema must be consistent with the [metadata][google.cloud.aiplatform.v1beta1.Dataset.metadata_schema_uri] of the Dataset specified by [dataset_id][google.cloud.aiplatform.v1beta1.InputDataConfig.dataset_id]. Only Annotations that both match this schema and belong to DataItems not ignored by the split method are used in the training, validation, or test role, respectively, depending on the role of the DataItem they are on. When used in conjunction with [annotations_filter][google.cloud.aiplatform.v1beta1.InputDataConfig.annotations_filter], the Annotations used for training are filtered by both [annotations_filter][google.cloud.aiplatform.v1beta1.InputDataConfig.annotations_filter] and [annotation_schema_uri][google.cloud.aiplatform.v1beta1.InputDataConfig.annotation_schema_uri].
string annotation_schema_uri = 9;
- Parameters:
value - The bytes for annotationSchemaUri to set.
- Returns:
- This builder for chaining.
-
getSavedQueryId
public String getSavedQueryId()
Only applicable to Datasets that have SavedQueries. The ID of a SavedQuery (annotation set) under the Dataset specified by [dataset_id][google.cloud.aiplatform.v1beta1.InputDataConfig.dataset_id] used for filtering Annotations for training. Only Annotations that are associated with this SavedQuery are used for training. When used in conjunction with [annotations_filter][google.cloud.aiplatform.v1beta1.InputDataConfig.annotations_filter], the Annotations used for training are filtered by both [saved_query_id][google.cloud.aiplatform.v1beta1.InputDataConfig.saved_query_id] and [annotations_filter][google.cloud.aiplatform.v1beta1.InputDataConfig.annotations_filter]. Only one of [saved_query_id][google.cloud.aiplatform.v1beta1.InputDataConfig.saved_query_id] and [annotation_schema_uri][google.cloud.aiplatform.v1beta1.InputDataConfig.annotation_schema_uri] should be specified, as both of them represent the same thing: problem type.
string saved_query_id = 7;
- Specified by:
getSavedQueryId in interface InputDataConfigOrBuilder
- Returns:
- The savedQueryId.
-
getSavedQueryIdBytes
public com.google.protobuf.ByteString getSavedQueryIdBytes()
Only applicable to Datasets that have SavedQueries. The ID of a SavedQuery (annotation set) under the Dataset specified by [dataset_id][google.cloud.aiplatform.v1beta1.InputDataConfig.dataset_id] used for filtering Annotations for training. Only Annotations that are associated with this SavedQuery are used for training. When used in conjunction with [annotations_filter][google.cloud.aiplatform.v1beta1.InputDataConfig.annotations_filter], the Annotations used for training are filtered by both [saved_query_id][google.cloud.aiplatform.v1beta1.InputDataConfig.saved_query_id] and [annotations_filter][google.cloud.aiplatform.v1beta1.InputDataConfig.annotations_filter]. Only one of [saved_query_id][google.cloud.aiplatform.v1beta1.InputDataConfig.saved_query_id] and [annotation_schema_uri][google.cloud.aiplatform.v1beta1.InputDataConfig.annotation_schema_uri] should be specified, as both of them represent the same thing: problem type.
string saved_query_id = 7;
- Specified by:
getSavedQueryIdBytes in interface InputDataConfigOrBuilder
- Returns:
- The bytes for savedQueryId.
-
setSavedQueryId
public InputDataConfig.Builder setSavedQueryId(String value)
Only applicable to Datasets that have SavedQueries. The ID of a SavedQuery (annotation set) under the Dataset specified by [dataset_id][google.cloud.aiplatform.v1beta1.InputDataConfig.dataset_id] used for filtering Annotations for training. Only Annotations that are associated with this SavedQuery are used for training. When used in conjunction with [annotations_filter][google.cloud.aiplatform.v1beta1.InputDataConfig.annotations_filter], the Annotations used for training are filtered by both [saved_query_id][google.cloud.aiplatform.v1beta1.InputDataConfig.saved_query_id] and [annotations_filter][google.cloud.aiplatform.v1beta1.InputDataConfig.annotations_filter]. Only one of [saved_query_id][google.cloud.aiplatform.v1beta1.InputDataConfig.saved_query_id] and [annotation_schema_uri][google.cloud.aiplatform.v1beta1.InputDataConfig.annotation_schema_uri] should be specified, as both of them represent the same thing: problem type.
string saved_query_id = 7;
- Parameters:
value - The savedQueryId to set.
- Returns:
- This builder for chaining.
-
clearSavedQueryId
public InputDataConfig.Builder clearSavedQueryId()
Only applicable to Datasets that have SavedQueries. The ID of a SavedQuery (annotation set) under the Dataset specified by [dataset_id][google.cloud.aiplatform.v1beta1.InputDataConfig.dataset_id] used for filtering Annotations for training. Only Annotations that are associated with this SavedQuery are used for training. When used in conjunction with [annotations_filter][google.cloud.aiplatform.v1beta1.InputDataConfig.annotations_filter], the Annotations used for training are filtered by both [saved_query_id][google.cloud.aiplatform.v1beta1.InputDataConfig.saved_query_id] and [annotations_filter][google.cloud.aiplatform.v1beta1.InputDataConfig.annotations_filter]. Only one of [saved_query_id][google.cloud.aiplatform.v1beta1.InputDataConfig.saved_query_id] and [annotation_schema_uri][google.cloud.aiplatform.v1beta1.InputDataConfig.annotation_schema_uri] should be specified, as both of them represent the same thing: problem type.
string saved_query_id = 7;
- Returns:
- This builder for chaining.
-
setSavedQueryIdBytes
public InputDataConfig.Builder setSavedQueryIdBytes(com.google.protobuf.ByteString value)
Only applicable to Datasets that have SavedQueries. The ID of a SavedQuery (annotation set) under the Dataset specified by [dataset_id][google.cloud.aiplatform.v1beta1.InputDataConfig.dataset_id] used for filtering Annotations for training. Only Annotations that are associated with this SavedQuery are used for training. When used in conjunction with [annotations_filter][google.cloud.aiplatform.v1beta1.InputDataConfig.annotations_filter], the Annotations used for training are filtered by both [saved_query_id][google.cloud.aiplatform.v1beta1.InputDataConfig.saved_query_id] and [annotations_filter][google.cloud.aiplatform.v1beta1.InputDataConfig.annotations_filter]. Only one of [saved_query_id][google.cloud.aiplatform.v1beta1.InputDataConfig.saved_query_id] and [annotation_schema_uri][google.cloud.aiplatform.v1beta1.InputDataConfig.annotation_schema_uri] should be specified, as both of them represent the same thing: problem type.
string saved_query_id = 7;
- Parameters:
value - The bytes for savedQueryId to set.
- Returns:
- This builder for chaining.
-
getPersistMlUseAssignment
public boolean getPersistMlUseAssignment()
Whether to persist the ML use assignment to data item system labels.
bool persist_ml_use_assignment = 11;
- Specified by:
getPersistMlUseAssignment in interface InputDataConfigOrBuilder
- Returns:
- The persistMlUseAssignment.
-
setPersistMlUseAssignment
public InputDataConfig.Builder setPersistMlUseAssignment(boolean value)
Whether to persist the ML use assignment to data item system labels.
bool persist_ml_use_assignment = 11;
- Parameters:
value - The persistMlUseAssignment to set.
- Returns:
- This builder for chaining.
-
clearPersistMlUseAssignment
public InputDataConfig.Builder clearPersistMlUseAssignment()
Whether to persist the ML use assignment to data item system labels.
bool persist_ml_use_assignment = 11;
- Returns:
- This builder for chaining.
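Putting the pieces together, a hedged sketch of a complete builder chain using methods documented on this page; the dataset ID and bucket are placeholders, and `setOutputUriPrefix` is assumed from `GcsDestination.Builder`:

```java
import com.google.cloud.aiplatform.v1beta1.GcsDestination;
import com.google.cloud.aiplatform.v1beta1.InputDataConfig;

public class InputDataConfigSketch {
    public static void main(String[] args) {
        InputDataConfig config = InputDataConfig.newBuilder()
                .setDatasetId("1234567890") // placeholder ID; dataset_id is required
                .setGcsDestination(GcsDestination.newBuilder()
                        .setOutputUriPrefix("gs://my-bucket/out/") // hypothetical bucket
                        .build())
                .setPersistMlUseAssignment(true) // persist ML-use labels on data items
                .build(); // buildPartial() is the variant that skips completeness checks

        System.out.println(config.getPersistMlUseAssignment());
    }
}
```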
-
setUnknownFields
public final InputDataConfig.Builder setUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
- Specified by:
setUnknownFields in interface com.google.protobuf.Message.Builder
- Overrides:
setUnknownFields in class com.google.protobuf.GeneratedMessageV3.Builder<InputDataConfig.Builder>
-
mergeUnknownFields
public final InputDataConfig.Builder mergeUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
- Specified by:
mergeUnknownFields in interface com.google.protobuf.Message.Builder
- Overrides:
mergeUnknownFields in class com.google.protobuf.GeneratedMessageV3.Builder<InputDataConfig.Builder>
-
-