Class SchemaAwareStreamWriter<T>
java.lang.Object
  com.google.cloud.bigquery.storage.v1.SchemaAwareStreamWriter<T>

All Implemented Interfaces:
AutoCloseable
public class SchemaAwareStreamWriter<T> extends Object implements AutoCloseable
A StreamWriter that can write data to BigQuery tables. The SchemaAwareStreamWriter is built on top of a StreamWriter: it converts all data to Protobuf messages using the provided converter, then calls StreamWriter's append() method to write to BigQuery tables. It retains all StreamWriter functionality and adds one feature: schema update support. If the BigQuery table schema is updated, users will be able to ingest data on the new schema after some time (on the order of minutes).

NOTE: Schema update support is disabled when you pass in a table schema explicitly through the writer. It is recommended that users either use JsonStreamWriter (which fully manages the table schema) or StreamWriter (which accepts raw proto format, leaving schema update events to the user). If you use this class, be very cautious about possible mismatches between the writer's schema and the input data; any mismatch between the two will cause data corruption.
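A minimal usage sketch of the writer lifecycle. Hedged: `MyRow` and `MyRowConverter` are hypothetical placeholders for your element type and your ToProtoConverter implementation, and `client` is an existing BigQueryWriteClient; this is not runnable without a live BigQuery project.

```java
// Sketch only: MyRow and MyRowConverter are hypothetical placeholders;
// MyRowConverter is assumed to implement ToProtoConverter<MyRow>.
String tableName = "projects/my-project/datasets/my_dataset/tables/my_table";
try (SchemaAwareStreamWriter<MyRow> writer =
        SchemaAwareStreamWriter.newBuilder(tableName, client, new MyRowConverter())
            .build()) {
  // rows is an Iterable<MyRow>; append returns an ApiFuture you can block on
  // or attach a callback to (blocking shown here for brevity).
  ApiFuture<AppendRowsResponse> future = writer.append(rows);
  AppendRowsResponse response = future.get();
}
```

The try-with-resources block works because the class implements AutoCloseable; close() shuts down the underlying StreamWriter.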
-
-
Nested Class Summary

static class SchemaAwareStreamWriter.Builder<T>
-
Method Summary

com.google.api.core.ApiFuture<AppendRowsResponse> append(Iterable<T> items)
    Writes a collection of objects to the BigQuery table by first converting the data to Protobuf messages, then using StreamWriter's append() to write the data at the current end of the stream.

com.google.api.core.ApiFuture<AppendRowsResponse> append(Iterable<T> items, long offset)
    Writes a collection of objects to the BigQuery table by first converting the data to Protobuf messages, then using StreamWriter's append() to write the data at the specified offset.

void close()
    Closes the underlying StreamWriter.

com.google.protobuf.Descriptors.Descriptor getDescriptor()
    Gets the current descriptor.

long getInflightWaitSeconds()
    Returns the client-side wait time of a request before it is sent to the server.

String getLocation()
    Gets the location of the destination.

Map<String,AppendRowsRequest.MissingValueInterpretation> getMissingValueInterpretationMap()

String getStreamName()

String getWriterId()

boolean isClosed()

boolean isUserClosed()

static <T> SchemaAwareStreamWriter.Builder<T> newBuilder(String streamOrTableName, BigQueryWriteClient client, ToProtoConverter<T> toProtoConverter)
    Constructs a SchemaAwareStreamWriter builder with the TableSchema initialized by StreamWriter by default.

static <T> SchemaAwareStreamWriter.Builder<T> newBuilder(String streamOrTableName, TableSchema tableSchema, BigQueryWriteClient client, ToProtoConverter<T> toProtoConverter)
    Constructs a SchemaAwareStreamWriter builder.

static <T> SchemaAwareStreamWriter.Builder<T> newBuilder(String streamOrTableName, TableSchema tableSchema, ToProtoConverter<T> toProtoConverter)
    Constructs a SchemaAwareStreamWriter builder with the BigQuery client initialized by StreamWriter by default.

void setMissingValueInterpretationMap(Map<String,AppendRowsRequest.MissingValueInterpretation> missingValueInterpretationMap)
    Sets the missing value interpretation map for the SchemaAwareStreamWriter.
-
-
-
Method Detail
-
append
public com.google.api.core.ApiFuture<AppendRowsResponse> append(Iterable<T> items) throws IOException, com.google.protobuf.Descriptors.DescriptorValidationException
Writes a collection of objects to the BigQuery table by first converting the data to Protobuf messages, then using StreamWriter's append() to write the data at the current end of the stream. If there is a schema update, the current StreamWriter is closed and a new StreamWriter is created with the updated TableSchema.

Parameters:
items - The collection of objects to be written
Returns:
ApiFuture<AppendRowsResponse> - an AppendRowsResponse message wrapped in an ApiFuture
Throws:
IOException
com.google.protobuf.Descriptors.DescriptorValidationException
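Since append() is asynchronous, production code typically attaches a callback to the returned future rather than blocking on get(). A hedged sketch using the standard api-core ApiFutures helper (the error-handling bodies are illustrative, and `writer` and `rows` are assumed to exist):

```java
// Sketch: attach a completion callback to the returned future instead of
// blocking. MoreExecutors.directExecutor() runs the callback on the thread
// that completes the future.
ApiFuture<AppendRowsResponse> future = writer.append(rows);
ApiFutures.addCallback(
    future,
    new ApiFutureCallback<AppendRowsResponse>() {
      @Override
      public void onSuccess(AppendRowsResponse response) {
        // This batch of rows was appended.
      }

      @Override
      public void onFailure(Throwable t) {
        // Inspect t; if the writer is no longer usable (see isClosed()),
        // recreate it and retry.
      }
    },
    MoreExecutors.directExecutor());
```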
-
append
public com.google.api.core.ApiFuture<AppendRowsResponse> append(Iterable<T> items, long offset) throws IOException, com.google.protobuf.Descriptors.DescriptorValidationException
Writes a collection of objects to the BigQuery table by first converting the data to Protobuf messages, then using StreamWriter's append() to write the data at the specified offset. If there is a schema update, the current StreamWriter is closed and a new StreamWriter is created with the updated TableSchema.

Parameters:
items - The collection of objects to be written
offset - Offset for deduplication
Returns:
ApiFuture<AppendRowsResponse> - an AppendRowsResponse message wrapped in an ApiFuture
Throws:
IOException
com.google.protobuf.Descriptors.DescriptorValidationException
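The offset enables server-side deduplication: a batch's offset is the number of rows already appended to the stream, so a retried batch that reuses its original offset is not written twice. A minimal, self-contained bookkeeping sketch (the `OffsetTracker` class is illustrative, not part of the library):

```java
import java.util.List;

// Illustrative offset bookkeeping for append(items, offset): the offset of a
// batch is the count of rows appended before it, so a retried batch that
// reuses the same offset is deduplicated by the server.
public class OffsetTracker {
    private long nextOffset = 0;

    // Returns the offset to pass alongside this batch, then advances past it.
    long offsetFor(List<?> batch) {
        long offset = nextOffset;
        nextOffset += batch.size();
        return offset;
    }

    public static void main(String[] args) {
        OffsetTracker tracker = new OffsetTracker();
        System.out.println(tracker.offsetFor(List.of("a", "b", "c"))); // 0
        System.out.println(tracker.offsetFor(List.of("d", "e")));      // 3
        System.out.println(tracker.offsetFor(List.of("f")));           // 5
    }
}
```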
-
getStreamName
public String getStreamName()
- Returns:
- The name of the write stream associated with this writer.
-
getWriterId
public String getWriterId()
- Returns:
- A unique Id for this writer.
-
getDescriptor
public com.google.protobuf.Descriptors.Descriptor getDescriptor()
Gets the current descriptor.

Returns:
Descriptor
-
getLocation
public String getLocation()
Gets the location of the destination.

Returns:
The location of the destination
-
getInflightWaitSeconds
public long getInflightWaitSeconds()
Returns the client-side wait time of a request before it is sent to the server. A request can wait on the client because it reached the client-side inflight request limit (adjustable when constructing the writer). The value is the wait time of the last sent request. A consistently high wait value indicates a need for more throughput: you can create a new stream to increase throughput in the exclusive-stream case, or create a new writer in the default-stream case.
-
setMissingValueInterpretationMap
public void setMissingValueInterpretationMap(Map<String,AppendRowsRequest.MissingValueInterpretation> missingValueInterpretationMap)
Sets the missing value interpretation map for the SchemaAwareStreamWriter. The input missingValueInterpretationMap is used for all append requests unless otherwise changed.

Parameters:
missingValueInterpretationMap - the missing value interpretation map used by the SchemaAwareStreamWriter
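A hedged sketch of configuring the map (the column names are hypothetical; NULL_VALUE and DEFAULT_VALUE are values of AppendRowsRequest.MissingValueInterpretation, and `writer` is assumed to be an existing SchemaAwareStreamWriter):

```java
// Sketch: choose per-column behavior for values missing from the input.
// Column names here are hypothetical examples.
Map<String, AppendRowsRequest.MissingValueInterpretation> interpretations =
    new HashMap<>();
interpretations.put(
    "updated_at", AppendRowsRequest.MissingValueInterpretation.DEFAULT_VALUE);
interpretations.put(
    "comment", AppendRowsRequest.MissingValueInterpretation.NULL_VALUE);
writer.setMissingValueInterpretationMap(interpretations);
```

The map applies to every subsequent append request until it is changed again.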
-
getMissingValueInterpretationMap
public Map<String,AppendRowsRequest.MissingValueInterpretation> getMissingValueInterpretationMap()
- Returns:
- the missing value interpretation map used for the writer.
-
newBuilder
public static <T> SchemaAwareStreamWriter.Builder<T> newBuilder(String streamOrTableName, TableSchema tableSchema, ToProtoConverter<T> toProtoConverter)
Constructs a SchemaAwareStreamWriter builder with the BigQuery client initialized by StreamWriter by default.

The table schema passed in will be updated automatically when there is a schema update event. When used for writer creation, it should be the latest schema. So when reusing a stream, use newBuilder(String streamOrTableName, BigQueryWriteClient client, ToProtoConverter<T> toProtoConverter) instead, so the created writer is based on a fresh schema.

Parameters:
streamOrTableName - name of the stream, which must follow "projects/[^/]+/datasets/[^/]+/tables/[^/]+/streams/[^/]+", or table name, which must follow "projects/[^/]+/datasets/[^/]+/tables/[^/]+"
tableSchema - The schema of the table when the stream was created, which is passed back through WriteStream
Returns:
Builder
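The builder accepts either a write-stream name or a table name in the formats documented above. A self-contained check of those patterns (the `NameFormats` class and its `isValidName` method are illustrative helpers, not part of the library):

```java
// Illustrative helper: validates the streamOrTableName formats documented
// for newBuilder. The patterns come straight from the parameter docs.
public class NameFormats {
    static final String STREAM_PATTERN =
        "projects/[^/]+/datasets/[^/]+/tables/[^/]+/streams/[^/]+";
    static final String TABLE_PATTERN =
        "projects/[^/]+/datasets/[^/]+/tables/[^/]+";

    // Returns true if the argument is acceptable as streamOrTableName.
    static boolean isValidName(String name) {
        return name.matches(STREAM_PATTERN) || name.matches(TABLE_PATTERN);
    }

    public static void main(String[] args) {
        System.out.println(isValidName("projects/p/datasets/d/tables/t"));           // true
        System.out.println(isValidName("projects/p/datasets/d/tables/t/streams/s")); // true
        System.out.println(isValidName("p/d/t"));                                    // false
    }
}
```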
-
newBuilder
public static <T> SchemaAwareStreamWriter.Builder<T> newBuilder(String streamOrTableName, TableSchema tableSchema, BigQueryWriteClient client, ToProtoConverter<T> toProtoConverter)
Constructs a SchemaAwareStreamWriter builder.

The table schema passed in will be updated automatically when there is a schema update event. When used for writer creation, it should be the latest schema. So when reusing a stream, use newBuilder(String streamOrTableName, BigQueryWriteClient client, ToProtoConverter<T> toProtoConverter) instead, so the created writer is based on a fresh schema.

Parameters:
streamOrTableName - name of the stream that must follow "projects/[^/]+/datasets/[^/]+/tables/[^/]+/streams/[^/]+"
tableSchema - The schema of the table when the stream was created, which is passed back through WriteStream
client - BigQueryWriteClient
Returns:
Builder
-
newBuilder
public static <T> SchemaAwareStreamWriter.Builder<T> newBuilder(String streamOrTableName, BigQueryWriteClient client, ToProtoConverter<T> toProtoConverter)
Constructs a SchemaAwareStreamWriter builder with the TableSchema initialized by StreamWriter by default.

Parameters:
streamOrTableName - name of the stream that must follow "projects/[^/]+/datasets/[^/]+/tables/[^/]+/streams/[^/]+"
client - BigQueryWriteClient
Returns:
Builder
-
close
public void close()
Closes the underlying StreamWriter.

Specified by:
close in interface AutoCloseable
-
isClosed
public boolean isClosed()
- Returns:
- Whether the writer can no longer be used for writing, either because the SchemaAwareStreamWriter was explicitly closed or because the underlying connection broke while the connection pool was not in use. The client should recreate the SchemaAwareStreamWriter in this case.
-
isUserClosed
public boolean isUserClosed()
- Returns:
- Whether the user explicitly closed the writer.
-
-