Package com.google.cloud.asset.v1p7beta1
Class BigQueryDestination.Builder
- java.lang.Object
  - com.google.protobuf.AbstractMessageLite.Builder
    - com.google.protobuf.AbstractMessage.Builder<BuilderT>
      - com.google.protobuf.GeneratedMessageV3.Builder<BigQueryDestination.Builder>
        - com.google.cloud.asset.v1p7beta1.BigQueryDestination.Builder
- All Implemented Interfaces:
  BigQueryDestinationOrBuilder, com.google.protobuf.Message.Builder, com.google.protobuf.MessageLite.Builder, com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder, Cloneable
- Enclosing class:
- BigQueryDestination
public static final class BigQueryDestination.Builder extends com.google.protobuf.GeneratedMessageV3.Builder<BigQueryDestination.Builder> implements BigQueryDestinationOrBuilder
A BigQuery destination for exporting assets to.
Protobuf type google.cloud.asset.v1p7beta1.BigQueryDestination
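A minimal usage sketch of this builder follows; the project, dataset, and table names are illustrative placeholders, not values defined by this API.

    import com.google.cloud.asset.v1p7beta1.BigQueryDestination;

    public class BigQueryDestinationExample {
      public static void main(String[] args) {
        // Assemble an export destination: setDataset and setTable are required,
        // and setForce(true) overwrites the destination table if it already exists.
        BigQueryDestination destination =
            BigQueryDestination.newBuilder()
                .setDataset("projects/my-project/datasets/asset_export") // placeholder
                .setTable("assets")                                      // placeholder
                .setForce(true)
                .build();
        System.out.println(destination);
      }
    }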
-
-
Method Summary
- BigQueryDestination.Builder addRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
- BigQueryDestination build()
- BigQueryDestination buildPartial()
- BigQueryDestination.Builder clear()
- BigQueryDestination.Builder clearDataset()
  Required.
- BigQueryDestination.Builder clearField(com.google.protobuf.Descriptors.FieldDescriptor field)
- BigQueryDestination.Builder clearForce()
  If the destination table already exists and this flag is `TRUE`, the table will be overwritten by the contents of the assets snapshot.
- BigQueryDestination.Builder clearOneof(com.google.protobuf.Descriptors.OneofDescriptor oneof)
- BigQueryDestination.Builder clearPartitionSpec()
  [partition_spec] determines whether to export to partitioned table(s) and how to partition the data.
- BigQueryDestination.Builder clearSeparateTablesPerAssetType()
  If this flag is `TRUE`, the snapshot results will be written to one or multiple tables, each of which contains results of one asset type.
- BigQueryDestination.Builder clearTable()
  Required.
- BigQueryDestination.Builder clone()
- String getDataset()
  Required.
- com.google.protobuf.ByteString getDatasetBytes()
  Required.
- BigQueryDestination getDefaultInstanceForType()
- static com.google.protobuf.Descriptors.Descriptor getDescriptor()
- com.google.protobuf.Descriptors.Descriptor getDescriptorForType()
- boolean getForce()
  If the destination table already exists and this flag is `TRUE`, the table will be overwritten by the contents of the assets snapshot.
- PartitionSpec getPartitionSpec()
  [partition_spec] determines whether to export to partitioned table(s) and how to partition the data.
- PartitionSpec.Builder getPartitionSpecBuilder()
  [partition_spec] determines whether to export to partitioned table(s) and how to partition the data.
- PartitionSpecOrBuilder getPartitionSpecOrBuilder()
  [partition_spec] determines whether to export to partitioned table(s) and how to partition the data.
- boolean getSeparateTablesPerAssetType()
  If this flag is `TRUE`, the snapshot results will be written to one or multiple tables, each of which contains results of one asset type.
- String getTable()
  Required.
- com.google.protobuf.ByteString getTableBytes()
  Required.
- boolean hasPartitionSpec()
  [partition_spec] determines whether to export to partitioned table(s) and how to partition the data.
- protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
- boolean isInitialized()
- BigQueryDestination.Builder mergeFrom(BigQueryDestination other)
- BigQueryDestination.Builder mergeFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
- BigQueryDestination.Builder mergeFrom(com.google.protobuf.Message other)
- BigQueryDestination.Builder mergePartitionSpec(PartitionSpec value)
  [partition_spec] determines whether to export to partitioned table(s) and how to partition the data.
- BigQueryDestination.Builder mergeUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
- BigQueryDestination.Builder setDataset(String value)
  Required.
- BigQueryDestination.Builder setDatasetBytes(com.google.protobuf.ByteString value)
  Required.
- BigQueryDestination.Builder setField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
- BigQueryDestination.Builder setForce(boolean value)
  If the destination table already exists and this flag is `TRUE`, the table will be overwritten by the contents of the assets snapshot.
- BigQueryDestination.Builder setPartitionSpec(PartitionSpec value)
  [partition_spec] determines whether to export to partitioned table(s) and how to partition the data.
- BigQueryDestination.Builder setPartitionSpec(PartitionSpec.Builder builderForValue)
  [partition_spec] determines whether to export to partitioned table(s) and how to partition the data.
- BigQueryDestination.Builder setRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)
- BigQueryDestination.Builder setSeparateTablesPerAssetType(boolean value)
  If this flag is `TRUE`, the snapshot results will be written to one or multiple tables, each of which contains results of one asset type.
- BigQueryDestination.Builder setTable(String value)
  Required.
- BigQueryDestination.Builder setTableBytes(com.google.protobuf.ByteString value)
  Required.
- BigQueryDestination.Builder setUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
-
Methods inherited from class com.google.protobuf.GeneratedMessageV3.Builder
getAllFields, getField, getFieldBuilder, getOneofFieldDescriptor, getParentForChildren, getRepeatedField, getRepeatedFieldBuilder, getRepeatedFieldCount, getUnknownFields, getUnknownFieldSetBuilder, hasField, hasOneof, internalGetMapField, internalGetMutableMapField, isClean, markClean, mergeUnknownLengthDelimitedField, mergeUnknownVarintField, newBuilderForField, onBuilt, onChanged, parseUnknownField, setUnknownFieldSetBuilder, setUnknownFieldsProto3
-
Methods inherited from class com.google.protobuf.AbstractMessage.Builder
findInitializationErrors, getInitializationErrorString, internalMergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, newUninitializedMessageException, toString
-
Methods inherited from class com.google.protobuf.AbstractMessageLite.Builder
addAll, addAll, mergeDelimitedFrom, mergeDelimitedFrom, mergeFrom, newUninitializedMessageException
-
Methods inherited from class java.lang.Object
equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
-
Method Detail
-
getDescriptor
public static final com.google.protobuf.Descriptors.Descriptor getDescriptor()
-
internalGetFieldAccessorTable
protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
- Specified by: internalGetFieldAccessorTable in class com.google.protobuf.GeneratedMessageV3.Builder<BigQueryDestination.Builder>
-
clear
public BigQueryDestination.Builder clear()
- Specified by: clear in interface com.google.protobuf.Message.Builder
- Specified by: clear in interface com.google.protobuf.MessageLite.Builder
- Overrides: clear in class com.google.protobuf.GeneratedMessageV3.Builder<BigQueryDestination.Builder>
-
getDescriptorForType
public com.google.protobuf.Descriptors.Descriptor getDescriptorForType()
- Specified by: getDescriptorForType in interface com.google.protobuf.Message.Builder
- Specified by: getDescriptorForType in interface com.google.protobuf.MessageOrBuilder
- Overrides: getDescriptorForType in class com.google.protobuf.GeneratedMessageV3.Builder<BigQueryDestination.Builder>
-
getDefaultInstanceForType
public BigQueryDestination getDefaultInstanceForType()
- Specified by: getDefaultInstanceForType in interface com.google.protobuf.MessageLiteOrBuilder
- Specified by: getDefaultInstanceForType in interface com.google.protobuf.MessageOrBuilder
-
build
public BigQueryDestination build()
- Specified by: build in interface com.google.protobuf.Message.Builder
- Specified by: build in interface com.google.protobuf.MessageLite.Builder
-
buildPartial
public BigQueryDestination buildPartial()
- Specified by: buildPartial in interface com.google.protobuf.Message.Builder
- Specified by: buildPartial in interface com.google.protobuf.MessageLite.Builder
-
clone
public BigQueryDestination.Builder clone()
- Specified by: clone in interface com.google.protobuf.Message.Builder
- Specified by: clone in interface com.google.protobuf.MessageLite.Builder
- Overrides: clone in class com.google.protobuf.GeneratedMessageV3.Builder<BigQueryDestination.Builder>
-
setField
public BigQueryDestination.Builder setField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
- Specified by: setField in interface com.google.protobuf.Message.Builder
- Overrides: setField in class com.google.protobuf.GeneratedMessageV3.Builder<BigQueryDestination.Builder>
-
clearField
public BigQueryDestination.Builder clearField(com.google.protobuf.Descriptors.FieldDescriptor field)
- Specified by: clearField in interface com.google.protobuf.Message.Builder
- Overrides: clearField in class com.google.protobuf.GeneratedMessageV3.Builder<BigQueryDestination.Builder>
-
clearOneof
public BigQueryDestination.Builder clearOneof(com.google.protobuf.Descriptors.OneofDescriptor oneof)
- Specified by: clearOneof in interface com.google.protobuf.Message.Builder
- Overrides: clearOneof in class com.google.protobuf.GeneratedMessageV3.Builder<BigQueryDestination.Builder>
-
setRepeatedField
public BigQueryDestination.Builder setRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)
- Specified by: setRepeatedField in interface com.google.protobuf.Message.Builder
- Overrides: setRepeatedField in class com.google.protobuf.GeneratedMessageV3.Builder<BigQueryDestination.Builder>
-
addRepeatedField
public BigQueryDestination.Builder addRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
- Specified by: addRepeatedField in interface com.google.protobuf.Message.Builder
- Overrides: addRepeatedField in class com.google.protobuf.GeneratedMessageV3.Builder<BigQueryDestination.Builder>
-
mergeFrom
public BigQueryDestination.Builder mergeFrom(com.google.protobuf.Message other)
- Specified by: mergeFrom in interface com.google.protobuf.Message.Builder
- Overrides: mergeFrom in class com.google.protobuf.AbstractMessage.Builder<BigQueryDestination.Builder>
-
mergeFrom
public BigQueryDestination.Builder mergeFrom(BigQueryDestination other)
-
isInitialized
public final boolean isInitialized()
- Specified by: isInitialized in interface com.google.protobuf.MessageLiteOrBuilder
- Overrides: isInitialized in class com.google.protobuf.GeneratedMessageV3.Builder<BigQueryDestination.Builder>
-
mergeFrom
public BigQueryDestination.Builder mergeFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws IOException
- Specified by: mergeFrom in interface com.google.protobuf.Message.Builder
- Specified by: mergeFrom in interface com.google.protobuf.MessageLite.Builder
- Overrides: mergeFrom in class com.google.protobuf.AbstractMessage.Builder<BigQueryDestination.Builder>
- Throws: IOException
-
getDataset
public String getDataset()
Required. The BigQuery dataset in format "projects/projectId/datasets/datasetId", to which the snapshot result should be exported. If this dataset does not exist, the export call returns an INVALID_ARGUMENT error.
string dataset = 1 [(.google.api.field_behavior) = REQUIRED];
- Specified by: getDataset in interface BigQueryDestinationOrBuilder
- Returns:
- The dataset.
-
getDatasetBytes
public com.google.protobuf.ByteString getDatasetBytes()
Required. The BigQuery dataset in format "projects/projectId/datasets/datasetId", to which the snapshot result should be exported. If this dataset does not exist, the export call returns an INVALID_ARGUMENT error.
string dataset = 1 [(.google.api.field_behavior) = REQUIRED];
- Specified by: getDatasetBytes in interface BigQueryDestinationOrBuilder
- Returns:
- The bytes for dataset.
-
setDataset
public BigQueryDestination.Builder setDataset(String value)
Required. The BigQuery dataset in format "projects/projectId/datasets/datasetId", to which the snapshot result should be exported. If this dataset does not exist, the export call returns an INVALID_ARGUMENT error.
string dataset = 1 [(.google.api.field_behavior) = REQUIRED];
- Parameters:
value - The dataset to set.
- Returns:
- This builder for chaining.
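A short sketch of setting the required dataset in the documented "projects/projectId/datasets/datasetId" format; "my-project" and "asset_export" are placeholders.

    // The dataset must already exist; otherwise the export call returns INVALID_ARGUMENT.
    BigQueryDestination.Builder builder =
        BigQueryDestination.newBuilder()
            .setDataset("projects/my-project/datasets/asset_export");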
-
clearDataset
public BigQueryDestination.Builder clearDataset()
Required. The BigQuery dataset in format "projects/projectId/datasets/datasetId", to which the snapshot result should be exported. If this dataset does not exist, the export call returns an INVALID_ARGUMENT error.
string dataset = 1 [(.google.api.field_behavior) = REQUIRED];
- Returns:
- This builder for chaining.
-
setDatasetBytes
public BigQueryDestination.Builder setDatasetBytes(com.google.protobuf.ByteString value)
Required. The BigQuery dataset in format "projects/projectId/datasets/datasetId", to which the snapshot result should be exported. If this dataset does not exist, the export call returns an INVALID_ARGUMENT error.
string dataset = 1 [(.google.api.field_behavior) = REQUIRED];
- Parameters:
value - The bytes for dataset to set.
- Returns:
- This builder for chaining.
-
getTable
public String getTable()
Required. The BigQuery table to which the snapshot result should be written. If this table does not exist, a new table with the given name will be created.
string table = 2 [(.google.api.field_behavior) = REQUIRED];
- Specified by: getTable in interface BigQueryDestinationOrBuilder
- Returns:
- The table.
-
getTableBytes
public com.google.protobuf.ByteString getTableBytes()
Required. The BigQuery table to which the snapshot result should be written. If this table does not exist, a new table with the given name will be created.
string table = 2 [(.google.api.field_behavior) = REQUIRED];
- Specified by: getTableBytes in interface BigQueryDestinationOrBuilder
- Returns:
- The bytes for table.
-
setTable
public BigQueryDestination.Builder setTable(String value)
Required. The BigQuery table to which the snapshot result should be written. If this table does not exist, a new table with the given name will be created.
string table = 2 [(.google.api.field_behavior) = REQUIRED];
- Parameters:
value - The table to set.
- Returns:
- This builder for chaining.
-
clearTable
public BigQueryDestination.Builder clearTable()
Required. The BigQuery table to which the snapshot result should be written. If this table does not exist, a new table with the given name will be created.
string table = 2 [(.google.api.field_behavior) = REQUIRED];
- Returns:
- This builder for chaining.
-
setTableBytes
public BigQueryDestination.Builder setTableBytes(com.google.protobuf.ByteString value)
Required. The BigQuery table to which the snapshot result should be written. If this table does not exist, a new table with the given name will be created.
string table = 2 [(.google.api.field_behavior) = REQUIRED];
- Parameters:
value - The bytes for table to set.
- Returns:
- This builder for chaining.
-
getForce
public boolean getForce()
If the destination table already exists and this flag is `TRUE`, the table will be overwritten by the contents of the assets snapshot. If the flag is `FALSE` or unset and the destination table already exists, the export call returns an INVALID_ARGUMENT error.
bool force = 3;
- Specified by: getForce in interface BigQueryDestinationOrBuilder
- Returns:
- The force.
-
setForce
public BigQueryDestination.Builder setForce(boolean value)
If the destination table already exists and this flag is `TRUE`, the table will be overwritten by the contents of the assets snapshot. If the flag is `FALSE` or unset and the destination table already exists, the export call returns an INVALID_ARGUMENT error.
bool force = 3;
- Parameters:
value - The force to set.
- Returns:
- This builder for chaining.
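A sketch of the two behaviors described above, using placeholder names; whether the table exists is checked server-side at export time.

    // force = true: an existing destination table is overwritten by the snapshot.
    BigQueryDestination overwrite =
        BigQueryDestination.newBuilder()
            .setDataset("projects/my-project/datasets/asset_export") // placeholder
            .setTable("assets")                                      // placeholder
            .setForce(true)
            .build();

    // force unset or false: if the table already exists, the export call
    // fails with INVALID_ARGUMENT instead of overwriting it.
    BigQueryDestination failIfExists = overwrite.toBuilder().clearForce().build();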
-
clearForce
public BigQueryDestination.Builder clearForce()
If the destination table already exists and this flag is `TRUE`, the table will be overwritten by the contents of the assets snapshot. If the flag is `FALSE` or unset and the destination table already exists, the export call returns an INVALID_ARGUMENT error.
bool force = 3;
- Returns:
- This builder for chaining.
-
hasPartitionSpec
public boolean hasPartitionSpec()
[partition_spec] determines whether to export to partitioned table(s) and how to partition the data. If [partition_spec] is unset, or [partition_spec.partition_key] is unset or `PARTITION_KEY_UNSPECIFIED`, the snapshot results will be exported to non-partitioned table(s); [force] will decide whether to overwrite the existing table(s). If [partition_spec] is specified: first, the snapshot results will be written to partitioned table(s) with two additional timestamp columns, readTime and requestTime, one of which will be the partition key. Second, if any destination table already exists, the export will first try to update the existing table's schema as necessary by appending additional columns. Then, if [force] is `TRUE`, the corresponding partition will be overwritten by the snapshot results (data in other partitions will remain intact); if [force] is unset or `FALSE`, the data will be appended. An error will be returned if the schema update or data append fails.
.google.cloud.asset.v1p7beta1.PartitionSpec partition_spec = 4;
- Specified by: hasPartitionSpec in interface BigQueryDestinationOrBuilder
- Returns:
- Whether the partitionSpec field is set.
-
getPartitionSpec
public PartitionSpec getPartitionSpec()
[partition_spec] determines whether to export to partitioned table(s) and how to partition the data. If [partition_spec] is unset, or [partition_spec.partition_key] is unset or `PARTITION_KEY_UNSPECIFIED`, the snapshot results will be exported to non-partitioned table(s); [force] will decide whether to overwrite the existing table(s). If [partition_spec] is specified: first, the snapshot results will be written to partitioned table(s) with two additional timestamp columns, readTime and requestTime, one of which will be the partition key. Second, if any destination table already exists, the export will first try to update the existing table's schema as necessary by appending additional columns. Then, if [force] is `TRUE`, the corresponding partition will be overwritten by the snapshot results (data in other partitions will remain intact); if [force] is unset or `FALSE`, the data will be appended. An error will be returned if the schema update or data append fails.
.google.cloud.asset.v1p7beta1.PartitionSpec partition_spec = 4;
- Specified by: getPartitionSpec in interface BigQueryDestinationOrBuilder
- Returns:
- The partitionSpec.
-
setPartitionSpec
public BigQueryDestination.Builder setPartitionSpec(PartitionSpec value)
[partition_spec] determines whether to export to partitioned table(s) and how to partition the data. If [partition_spec] is unset, or [partition_spec.partition_key] is unset or `PARTITION_KEY_UNSPECIFIED`, the snapshot results will be exported to non-partitioned table(s); [force] will decide whether to overwrite the existing table(s). If [partition_spec] is specified: first, the snapshot results will be written to partitioned table(s) with two additional timestamp columns, readTime and requestTime, one of which will be the partition key. Second, if any destination table already exists, the export will first try to update the existing table's schema as necessary by appending additional columns. Then, if [force] is `TRUE`, the corresponding partition will be overwritten by the snapshot results (data in other partitions will remain intact); if [force] is unset or `FALSE`, the data will be appended. An error will be returned if the schema update or data append fails.
.google.cloud.asset.v1p7beta1.PartitionSpec partition_spec = 4;
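A sketch of configuring a partitioned export. It assumes PartitionSpec in this package exposes a nested PartitionKey enum with a READ_TIME value, mirroring the v1 API surface; verify against the v1p7beta1 PartitionSpec reference before relying on it.

    import com.google.cloud.asset.v1p7beta1.BigQueryDestination;
    import com.google.cloud.asset.v1p7beta1.PartitionSpec;

    // Assumption: PartitionSpec.PartitionKey.READ_TIME exists in v1p7beta1.
    BigQueryDestination destination =
        BigQueryDestination.newBuilder()
            .setDataset("projects/my-project/datasets/asset_export") // placeholder
            .setTable("assets")                                      // placeholder
            .setPartitionSpec(
                PartitionSpec.newBuilder()
                    .setPartitionKey(PartitionSpec.PartitionKey.READ_TIME)
                    .build())
            .setForce(true) // overwrite only the corresponding partition
            .build();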
-
setPartitionSpec
public BigQueryDestination.Builder setPartitionSpec(PartitionSpec.Builder builderForValue)
[partition_spec] determines whether to export to partitioned table(s) and how to partition the data. If [partition_spec] is unset, or [partition_spec.partition_key] is unset or `PARTITION_KEY_UNSPECIFIED`, the snapshot results will be exported to non-partitioned table(s); [force] will decide whether to overwrite the existing table(s). If [partition_spec] is specified: first, the snapshot results will be written to partitioned table(s) with two additional timestamp columns, readTime and requestTime, one of which will be the partition key. Second, if any destination table already exists, the export will first try to update the existing table's schema as necessary by appending additional columns. Then, if [force] is `TRUE`, the corresponding partition will be overwritten by the snapshot results (data in other partitions will remain intact); if [force] is unset or `FALSE`, the data will be appended. An error will be returned if the schema update or data append fails.
.google.cloud.asset.v1p7beta1.PartitionSpec partition_spec = 4;
-
mergePartitionSpec
public BigQueryDestination.Builder mergePartitionSpec(PartitionSpec value)
[partition_spec] determines whether to export to partitioned table(s) and how to partition the data. If [partition_spec] is unset, or [partition_spec.partition_key] is unset or `PARTITION_KEY_UNSPECIFIED`, the snapshot results will be exported to non-partitioned table(s); [force] will decide whether to overwrite the existing table(s). If [partition_spec] is specified: first, the snapshot results will be written to partitioned table(s) with two additional timestamp columns, readTime and requestTime, one of which will be the partition key. Second, if any destination table already exists, the export will first try to update the existing table's schema as necessary by appending additional columns. Then, if [force] is `TRUE`, the corresponding partition will be overwritten by the snapshot results (data in other partitions will remain intact); if [force] is unset or `FALSE`, the data will be appended. An error will be returned if the schema update or data append fails.
.google.cloud.asset.v1p7beta1.PartitionSpec partition_spec = 4;
-
clearPartitionSpec
public BigQueryDestination.Builder clearPartitionSpec()
[partition_spec] determines whether to export to partitioned table(s) and how to partition the data. If [partition_spec] is unset, or [partition_spec.partition_key] is unset or `PARTITION_KEY_UNSPECIFIED`, the snapshot results will be exported to non-partitioned table(s); [force] will decide whether to overwrite the existing table(s). If [partition_spec] is specified: first, the snapshot results will be written to partitioned table(s) with two additional timestamp columns, readTime and requestTime, one of which will be the partition key. Second, if any destination table already exists, the export will first try to update the existing table's schema as necessary by appending additional columns. Then, if [force] is `TRUE`, the corresponding partition will be overwritten by the snapshot results (data in other partitions will remain intact); if [force] is unset or `FALSE`, the data will be appended. An error will be returned if the schema update or data append fails.
.google.cloud.asset.v1p7beta1.PartitionSpec partition_spec = 4;
-
getPartitionSpecBuilder
public PartitionSpec.Builder getPartitionSpecBuilder()
[partition_spec] determines whether to export to partitioned table(s) and how to partition the data. If [partition_spec] is unset, or [partition_spec.partition_key] is unset or `PARTITION_KEY_UNSPECIFIED`, the snapshot results will be exported to non-partitioned table(s); [force] will decide whether to overwrite the existing table(s). If [partition_spec] is specified: first, the snapshot results will be written to partitioned table(s) with two additional timestamp columns, readTime and requestTime, one of which will be the partition key. Second, if any destination table already exists, the export will first try to update the existing table's schema as necessary by appending additional columns. Then, if [force] is `TRUE`, the corresponding partition will be overwritten by the snapshot results (data in other partitions will remain intact); if [force] is unset or `FALSE`, the data will be appended. An error will be returned if the schema update or data append fails.
.google.cloud.asset.v1p7beta1.PartitionSpec partition_spec = 4;
-
getPartitionSpecOrBuilder
public PartitionSpecOrBuilder getPartitionSpecOrBuilder()
[partition_spec] determines whether to export to partitioned table(s) and how to partition the data. If [partition_spec] is unset, or [partition_spec.partition_key] is unset or `PARTITION_KEY_UNSPECIFIED`, the snapshot results will be exported to non-partitioned table(s); [force] will decide whether to overwrite the existing table(s). If [partition_spec] is specified: first, the snapshot results will be written to partitioned table(s) with two additional timestamp columns, readTime and requestTime, one of which will be the partition key. Second, if any destination table already exists, the export will first try to update the existing table's schema as necessary by appending additional columns. Then, if [force] is `TRUE`, the corresponding partition will be overwritten by the snapshot results (data in other partitions will remain intact); if [force] is unset or `FALSE`, the data will be appended. An error will be returned if the schema update or data append fails.
.google.cloud.asset.v1p7beta1.PartitionSpec partition_spec = 4;
- Specified by: getPartitionSpecOrBuilder in interface BigQueryDestinationOrBuilder
-
getSeparateTablesPerAssetType
public boolean getSeparateTablesPerAssetType()
If this flag is `TRUE`, the snapshot results will be written to one or multiple tables, each of which contains results of one asset type. The [force] and [partition_spec] fields will apply to each of them. The [table] value will be concatenated with "_" and the asset type name (see https://cloud.google.com/asset-inventory/docs/supported-asset-types for supported asset types) to construct per-asset-type table names, in which all non-alphanumeric characters such as "." and "/" will be substituted with "_". Example: if [table] is "mytable" and the snapshot results contain "storage.googleapis.com/Bucket" assets, the corresponding table name will be "mytable_storage_googleapis_com_Bucket". If any of these tables does not exist, a new table with the concatenated name will be created. When [content_type] in the ExportAssetsRequest is `RESOURCE`, the schema of each table will include RECORD-type columns mapped to the nested fields in the Asset.resource.data field of that asset type, up to the 15 nested levels that BigQuery supports (https://cloud.google.com/bigquery/docs/nested-repeated#limitations). Fields nested more than 15 levels deep will be stored as a JSON-formatted string in a child column of their parent RECORD column. If an error occurs when exporting to any table, the whole export call returns an error, but export results that have already succeeded will persist. Example: if exporting to table_type_A succeeds while exporting to table_type_B fails during one export call, the results in table_type_A will persist, and no partial results will persist in any table.
bool separate_tables_per_asset_type = 5;
- Specified by: getSeparateTablesPerAssetType in interface BigQueryDestinationOrBuilder
- Returns:
- The separateTablesPerAssetType.
-
setSeparateTablesPerAssetType
public BigQueryDestination.Builder setSeparateTablesPerAssetType(boolean value)
If this flag is `TRUE`, the snapshot results will be written to one or multiple tables, each of which contains results of one asset type. The [force] and [partition_spec] fields will apply to each of them. The [table] value will be concatenated with "_" and the asset type name (see https://cloud.google.com/asset-inventory/docs/supported-asset-types for supported asset types) to construct per-asset-type table names, in which all non-alphanumeric characters such as "." and "/" will be substituted with "_". Example: if [table] is "mytable" and the snapshot results contain "storage.googleapis.com/Bucket" assets, the corresponding table name will be "mytable_storage_googleapis_com_Bucket". If any of these tables does not exist, a new table with the concatenated name will be created. When [content_type] in the ExportAssetsRequest is `RESOURCE`, the schema of each table will include RECORD-type columns mapped to the nested fields in the Asset.resource.data field of that asset type, up to the 15 nested levels that BigQuery supports (https://cloud.google.com/bigquery/docs/nested-repeated#limitations). Fields nested more than 15 levels deep will be stored as a JSON-formatted string in a child column of their parent RECORD column. If an error occurs when exporting to any table, the whole export call returns an error, but export results that have already succeeded will persist. Example: if exporting to table_type_A succeeds while exporting to table_type_B fails during one export call, the results in table_type_A will persist, and no partial results will persist in any table.
bool separate_tables_per_asset_type = 5;
- Parameters:
value - The separateTablesPerAssetType to set.
- Returns:
- This builder for chaining.
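A sketch of a per-asset-type export using placeholder names. With this flag set, the [table] value acts as a prefix, so results for "storage.googleapis.com/Bucket" assets would land in a table named "assets_storage_googleapis_com_Bucket".

    BigQueryDestination destination =
        BigQueryDestination.newBuilder()
            .setDataset("projects/my-project/datasets/asset_export") // placeholder
            .setTable("assets") // prefix for per-asset-type table names
            .setSeparateTablesPerAssetType(true)
            .build();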
-
clearSeparateTablesPerAssetType
public BigQueryDestination.Builder clearSeparateTablesPerAssetType()
If this flag is `TRUE`, the snapshot results will be written to one or multiple tables, each of which contains results of one asset type. The [force] and [partition_spec] fields will apply to each of them. The [table] value will be concatenated with "_" and the asset type name (see https://cloud.google.com/asset-inventory/docs/supported-asset-types for supported asset types) to construct per-asset-type table names, in which all non-alphanumeric characters such as "." and "/" will be substituted with "_". Example: if [table] is "mytable" and the snapshot results contain "storage.googleapis.com/Bucket" assets, the corresponding table name will be "mytable_storage_googleapis_com_Bucket". If any of these tables does not exist, a new table with the concatenated name will be created. When [content_type] in the ExportAssetsRequest is `RESOURCE`, the schema of each table will include RECORD-type columns mapped to the nested fields in the Asset.resource.data field of that asset type, up to the 15 nested levels that BigQuery supports (https://cloud.google.com/bigquery/docs/nested-repeated#limitations). Fields nested more than 15 levels deep will be stored as a JSON-formatted string in a child column of their parent RECORD column. If an error occurs when exporting to any table, the whole export call returns an error, but export results that have already succeeded will persist. Example: if exporting to table_type_A succeeds while exporting to table_type_B fails during one export call, the results in table_type_A will persist, and no partial results will persist in any table.
bool separate_tables_per_asset_type = 5;
- Returns:
- This builder for chaining.
-
setUnknownFields
public final BigQueryDestination.Builder setUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
- Specified by: setUnknownFields in interface com.google.protobuf.Message.Builder
- Overrides: setUnknownFields in class com.google.protobuf.GeneratedMessageV3.Builder<BigQueryDestination.Builder>
-
mergeUnknownFields
public final BigQueryDestination.Builder mergeUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
- Specified by: mergeUnknownFields in interface com.google.protobuf.Message.Builder
- Overrides: mergeUnknownFields in class com.google.protobuf.GeneratedMessageV3.Builder<BigQueryDestination.Builder>
-