Class StreamingRecognitionConfig.Builder

java.lang.Object
  com.google.protobuf.AbstractMessageLite.Builder
    com.google.protobuf.AbstractMessage.Builder<BuilderT>
      com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
        com.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.Builder

All Implemented Interfaces:
  StreamingRecognitionConfigOrBuilder, com.google.protobuf.Message.Builder, com.google.protobuf.MessageLite.Builder, com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder, Cloneable

Enclosing class:
  StreamingRecognitionConfig

public static final class StreamingRecognitionConfig.Builder extends com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder> implements StreamingRecognitionConfigOrBuilder

Provides information to the recognizer that specifies how to process the request.

Protobuf type: google.cloud.speech.v1p1beta1.StreamingRecognitionConfig
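As with other generated protobuf messages, instances are created through this builder, obtained from `StreamingRecognitionConfig.newBuilder()`. A minimal sketch (the encoding, sample rate, and language code below are illustrative values, not requirements of this class):

```java
import com.google.cloud.speech.v1p1beta1.RecognitionConfig;
import com.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig;

public class BuilderSketch {
  public static void main(String[] args) {
    // The nested RecognitionConfig is the only required field.
    RecognitionConfig recognitionConfig =
        RecognitionConfig.newBuilder()
            .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16) // illustrative
            .setSampleRateHertz(16000)                             // illustrative
            .setLanguageCode("en-US")                              // illustrative
            .build();

    StreamingRecognitionConfig streamingConfig =
        StreamingRecognitionConfig.newBuilder()
            .setConfig(recognitionConfig)
            .setInterimResults(true)
            .build();

    System.out.println(streamingConfig.getInterimResults()); // true
  }
}
```

The resulting message is immutable; to modify it later, call `toBuilder()` on the built message.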
-
-
Method Summary
All Methods · Static Methods · Instance Methods · Concrete Methods

StreamingRecognitionConfig.Builder addRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)

StreamingRecognitionConfig build()

StreamingRecognitionConfig buildPartial()

StreamingRecognitionConfig.Builder clear()

StreamingRecognitionConfig.Builder clearConfig()
    Required.

StreamingRecognitionConfig.Builder clearEnableVoiceActivityEvents()
    If `true`, responses with voice activity speech events will be returned as they are detected.

StreamingRecognitionConfig.Builder clearField(com.google.protobuf.Descriptors.FieldDescriptor field)

StreamingRecognitionConfig.Builder clearInterimResults()
    If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag).

StreamingRecognitionConfig.Builder clearOneof(com.google.protobuf.Descriptors.OneofDescriptor oneof)

StreamingRecognitionConfig.Builder clearSingleUtterance()
    If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached.

StreamingRecognitionConfig.Builder clearVoiceActivityTimeout()
    If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent.

StreamingRecognitionConfig.Builder clone()

RecognitionConfig getConfig()
    Required.

RecognitionConfig.Builder getConfigBuilder()
    Required.

RecognitionConfigOrBuilder getConfigOrBuilder()
    Required.

StreamingRecognitionConfig getDefaultInstanceForType()

static com.google.protobuf.Descriptors.Descriptor getDescriptor()

com.google.protobuf.Descriptors.Descriptor getDescriptorForType()

boolean getEnableVoiceActivityEvents()
    If `true`, responses with voice activity speech events will be returned as they are detected.

boolean getInterimResults()
    If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag).

boolean getSingleUtterance()
    If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached.

StreamingRecognitionConfig.VoiceActivityTimeout getVoiceActivityTimeout()
    If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent.

StreamingRecognitionConfig.VoiceActivityTimeout.Builder getVoiceActivityTimeoutBuilder()
    If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent.

StreamingRecognitionConfig.VoiceActivityTimeoutOrBuilder getVoiceActivityTimeoutOrBuilder()
    If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent.

boolean hasConfig()
    Required.

boolean hasVoiceActivityTimeout()
    If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent.

protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()

boolean isInitialized()

StreamingRecognitionConfig.Builder mergeConfig(RecognitionConfig value)
    Required.

StreamingRecognitionConfig.Builder mergeFrom(StreamingRecognitionConfig other)

StreamingRecognitionConfig.Builder mergeFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)

StreamingRecognitionConfig.Builder mergeFrom(com.google.protobuf.Message other)

StreamingRecognitionConfig.Builder mergeUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)

StreamingRecognitionConfig.Builder mergeVoiceActivityTimeout(StreamingRecognitionConfig.VoiceActivityTimeout value)
    If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent.

StreamingRecognitionConfig.Builder setConfig(RecognitionConfig value)
    Required.

StreamingRecognitionConfig.Builder setConfig(RecognitionConfig.Builder builderForValue)
    Required.

StreamingRecognitionConfig.Builder setEnableVoiceActivityEvents(boolean value)
    If `true`, responses with voice activity speech events will be returned as they are detected.

StreamingRecognitionConfig.Builder setField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)

StreamingRecognitionConfig.Builder setInterimResults(boolean value)
    If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag).

StreamingRecognitionConfig.Builder setRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)

StreamingRecognitionConfig.Builder setSingleUtterance(boolean value)
    If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached.

StreamingRecognitionConfig.Builder setUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)

StreamingRecognitionConfig.Builder setVoiceActivityTimeout(StreamingRecognitionConfig.VoiceActivityTimeout value)
    If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent.

StreamingRecognitionConfig.Builder setVoiceActivityTimeout(StreamingRecognitionConfig.VoiceActivityTimeout.Builder builderForValue)
    If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent.
-
Methods inherited from class com.google.protobuf.GeneratedMessageV3.Builder
getAllFields, getField, getFieldBuilder, getOneofFieldDescriptor, getParentForChildren, getRepeatedField, getRepeatedFieldBuilder, getRepeatedFieldCount, getUnknownFields, getUnknownFieldSetBuilder, hasField, hasOneof, internalGetMapField, internalGetMutableMapField, isClean, markClean, mergeUnknownLengthDelimitedField, mergeUnknownVarintField, newBuilderForField, onBuilt, onChanged, parseUnknownField, setUnknownFieldSetBuilder, setUnknownFieldsProto3
-
Methods inherited from class com.google.protobuf.AbstractMessage.Builder
findInitializationErrors, getInitializationErrorString, internalMergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, newUninitializedMessageException, toString
-
Methods inherited from class com.google.protobuf.AbstractMessageLite.Builder
addAll, addAll, mergeDelimitedFrom, mergeDelimitedFrom, mergeFrom, newUninitializedMessageException
-
Methods inherited from class java.lang.Object
equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
-
Method Detail
-
getDescriptor
public static final com.google.protobuf.Descriptors.Descriptor getDescriptor()
-
internalGetFieldAccessorTable
protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
- Specified by: internalGetFieldAccessorTable in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
clear
public StreamingRecognitionConfig.Builder clear()
- Specified by: clear in interface com.google.protobuf.Message.Builder
- Specified by: clear in interface com.google.protobuf.MessageLite.Builder
- Overrides: clear in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
getDescriptorForType
public com.google.protobuf.Descriptors.Descriptor getDescriptorForType()
- Specified by: getDescriptorForType in interface com.google.protobuf.Message.Builder
- Specified by: getDescriptorForType in interface com.google.protobuf.MessageOrBuilder
- Overrides: getDescriptorForType in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
getDefaultInstanceForType
public StreamingRecognitionConfig getDefaultInstanceForType()
- Specified by: getDefaultInstanceForType in interface com.google.protobuf.MessageLiteOrBuilder
- Specified by: getDefaultInstanceForType in interface com.google.protobuf.MessageOrBuilder
-
build
public StreamingRecognitionConfig build()
- Specified by: build in interface com.google.protobuf.Message.Builder
- Specified by: build in interface com.google.protobuf.MessageLite.Builder
-
buildPartial
public StreamingRecognitionConfig buildPartial()
- Specified by: buildPartial in interface com.google.protobuf.Message.Builder
- Specified by: buildPartial in interface com.google.protobuf.MessageLite.Builder
-
clone
public StreamingRecognitionConfig.Builder clone()
- Specified by: clone in interface com.google.protobuf.Message.Builder
- Specified by: clone in interface com.google.protobuf.MessageLite.Builder
- Overrides: clone in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
setField
public StreamingRecognitionConfig.Builder setField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
- Specified by: setField in interface com.google.protobuf.Message.Builder
- Overrides: setField in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
clearField
public StreamingRecognitionConfig.Builder clearField(com.google.protobuf.Descriptors.FieldDescriptor field)
- Specified by: clearField in interface com.google.protobuf.Message.Builder
- Overrides: clearField in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
clearOneof
public StreamingRecognitionConfig.Builder clearOneof(com.google.protobuf.Descriptors.OneofDescriptor oneof)
- Specified by: clearOneof in interface com.google.protobuf.Message.Builder
- Overrides: clearOneof in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
setRepeatedField
public StreamingRecognitionConfig.Builder setRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)
- Specified by: setRepeatedField in interface com.google.protobuf.Message.Builder
- Overrides: setRepeatedField in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
addRepeatedField
public StreamingRecognitionConfig.Builder addRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
- Specified by: addRepeatedField in interface com.google.protobuf.Message.Builder
- Overrides: addRepeatedField in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
mergeFrom
public StreamingRecognitionConfig.Builder mergeFrom(com.google.protobuf.Message other)
- Specified by: mergeFrom in interface com.google.protobuf.Message.Builder
- Overrides: mergeFrom in class com.google.protobuf.AbstractMessage.Builder<StreamingRecognitionConfig.Builder>
-
mergeFrom
public StreamingRecognitionConfig.Builder mergeFrom(StreamingRecognitionConfig other)
-
isInitialized
public final boolean isInitialized()
- Specified by: isInitialized in interface com.google.protobuf.MessageLiteOrBuilder
- Overrides: isInitialized in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
mergeFrom
public StreamingRecognitionConfig.Builder mergeFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws IOException
- Specified by: mergeFrom in interface com.google.protobuf.Message.Builder
- Specified by: mergeFrom in interface com.google.protobuf.MessageLite.Builder
- Overrides: mergeFrom in class com.google.protobuf.AbstractMessage.Builder<StreamingRecognitionConfig.Builder>
- Throws: IOException
-
hasConfig
public boolean hasConfig()
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
- Specified by: hasConfig in interface StreamingRecognitionConfigOrBuilder
- Returns: Whether the config field is set.
-
getConfig
public RecognitionConfig getConfig()
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
- Specified by: getConfig in interface StreamingRecognitionConfigOrBuilder
- Returns: The config.
-
setConfig
public StreamingRecognitionConfig.Builder setConfig(RecognitionConfig value)
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
-
setConfig
public StreamingRecognitionConfig.Builder setConfig(RecognitionConfig.Builder builderForValue)
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
-
mergeConfig
public StreamingRecognitionConfig.Builder mergeConfig(RecognitionConfig value)
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
-
clearConfig
public StreamingRecognitionConfig.Builder clearConfig()
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
-
getConfigBuilder
public RecognitionConfig.Builder getConfigBuilder()
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
-
getConfigOrBuilder
public RecognitionConfigOrBuilder getConfigOrBuilder()
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
- Specified by: getConfigOrBuilder in interface StreamingRecognitionConfigOrBuilder
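The `config` accessors above follow standard generated-protobuf semantics: `setConfig` replaces the current value outright, while `mergeConfig` merges field-by-field (fields set in the argument overwrite, unset ones are kept). A sketch of the difference, with illustrative field values:

```java
import com.google.cloud.speech.v1p1beta1.RecognitionConfig;
import com.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig;

public class MergeConfigSketch {
  public static void main(String[] args) {
    StreamingRecognitionConfig.Builder builder =
        StreamingRecognitionConfig.newBuilder()
            .setConfig(
                RecognitionConfig.newBuilder()
                    .setLanguageCode("en-US")      // illustrative
                    .setSampleRateHertz(16000));   // illustrative

    // mergeConfig keeps languageCode and sampleRateHertz, and adds model.
    builder.mergeConfig(
        RecognitionConfig.newBuilder().setModel("phone_call").build()); // illustrative

    // setConfig with the same argument would instead have discarded the earlier fields.
    RecognitionConfig merged = builder.getConfig();
    System.out.println(merged.getLanguageCode()); // "en-US"
    System.out.println(merged.getModel());        // "phone_call"
  }
}
```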
-
getSingleUtterance
public boolean getSingleUtterance()
If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple `StreamingRecognitionResult`s with the `is_final` flag set to `true`.
If `true`, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an `END_OF_SINGLE_UTTERANCE` event and cease recognition. It will return no more than one `StreamingRecognitionResult` with the `is_final` flag set to `true`.
The `single_utterance` field can only be used with specified models, otherwise an error is thrown. The `model` field in `RecognitionConfig` must be set to:
* `command_and_search`
* `phone_call` AND additional field `useEnhanced`=`true`
* The `model` field is left undefined. In this case the API auto-selects a model based on any other parameters that you set in `RecognitionConfig`.
bool single_utterance = 2;
- Specified by: getSingleUtterance in interface StreamingRecognitionConfigOrBuilder
- Returns: The singleUtterance.
-
setSingleUtterance
public StreamingRecognitionConfig.Builder setSingleUtterance(boolean value)
If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple `StreamingRecognitionResult`s with the `is_final` flag set to `true`.
If `true`, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an `END_OF_SINGLE_UTTERANCE` event and cease recognition. It will return no more than one `StreamingRecognitionResult` with the `is_final` flag set to `true`.
The `single_utterance` field can only be used with specified models, otherwise an error is thrown. The `model` field in `RecognitionConfig` must be set to:
* `command_and_search`
* `phone_call` AND additional field `useEnhanced`=`true`
* The `model` field is left undefined. In this case the API auto-selects a model based on any other parameters that you set in `RecognitionConfig`.
bool single_utterance = 2;
- Parameters: value - The singleUtterance to set.
- Returns: This builder for chaining.
-
clearSingleUtterance
public StreamingRecognitionConfig.Builder clearSingleUtterance()
If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple `StreamingRecognitionResult`s with the `is_final` flag set to `true`.
If `true`, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an `END_OF_SINGLE_UTTERANCE` event and cease recognition. It will return no more than one `StreamingRecognitionResult` with the `is_final` flag set to `true`.
The `single_utterance` field can only be used with specified models, otherwise an error is thrown. The `model` field in `RecognitionConfig` must be set to:
* `command_and_search`
* `phone_call` AND additional field `useEnhanced`=`true`
* The `model` field is left undefined. In this case the API auto-selects a model based on any other parameters that you set in `RecognitionConfig`.
bool single_utterance = 2;
- Returns:
- This builder for chaining.
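Because `single_utterance` is only valid with certain models, it is typically set alongside a compatible `model` in the nested `RecognitionConfig`. A sketch using `command_and_search`, one of the allowed values (language code is illustrative):

```java
import com.google.cloud.speech.v1p1beta1.RecognitionConfig;
import com.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig;

public class SingleUtteranceSketch {
  public static void main(String[] args) {
    StreamingRecognitionConfig config =
        StreamingRecognitionConfig.newBuilder()
            .setConfig(
                RecognitionConfig.newBuilder()
                    .setLanguageCode("en-US")           // illustrative
                    .setModel("command_and_search"))    // one of the models allowed with single_utterance
            .setSingleUtterance(true)
            .build();

    // The recognizer will stop after the first detected utterance
    // and emit an END_OF_SINGLE_UTTERANCE event.
    System.out.println(config.getSingleUtterance()); // true
  }
}
```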
-
getInterimResults
public boolean getInterimResults()
If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag). If `false` or omitted, only `is_final=true` result(s) are returned.
bool interim_results = 3;
- Specified by: getInterimResults in interface StreamingRecognitionConfigOrBuilder
- Returns: The interimResults.
-
setInterimResults
public StreamingRecognitionConfig.Builder setInterimResults(boolean value)
If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag). If `false` or omitted, only `is_final=true` result(s) are returned.
bool interim_results = 3;
- Parameters: value - The interimResults to set.
- Returns: This builder for chaining.
-
clearInterimResults
public StreamingRecognitionConfig.Builder clearInterimResults()
If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag). If `false` or omitted, only `is_final=true` result(s) are returned.
bool interim_results = 3;
- Returns:
- This builder for chaining.
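Like all proto3 bools, `interim_results` defaults to `false`; `clearInterimResults()` restores that default and, like the other mutators, returns the builder so calls can be chained. A short sketch:

```java
import com.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig;

public class InterimResultsSketch {
  public static void main(String[] args) {
    StreamingRecognitionConfig.Builder builder = StreamingRecognitionConfig.newBuilder();

    System.out.println(builder.getInterimResults()); // false (proto3 default)

    builder.setInterimResults(true);
    System.out.println(builder.getInterimResults()); // true

    // clearInterimResults() resets to the default and returns the builder,
    // so it can be chained with further setters.
    builder.clearInterimResults().setInterimResults(true);
    System.out.println(builder.build().getInterimResults()); // true
  }
}
```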
-
getEnableVoiceActivityEvents
public boolean getEnableVoiceActivityEvents()
If `true`, responses with voice activity speech events will be returned as they are detected.
bool enable_voice_activity_events = 5;
- Specified by: getEnableVoiceActivityEvents in interface StreamingRecognitionConfigOrBuilder
- Returns: The enableVoiceActivityEvents.
-
setEnableVoiceActivityEvents
public StreamingRecognitionConfig.Builder setEnableVoiceActivityEvents(boolean value)
If `true`, responses with voice activity speech events will be returned as they are detected.
bool enable_voice_activity_events = 5;
- Parameters: value - The enableVoiceActivityEvents to set.
- Returns: This builder for chaining.
-
clearEnableVoiceActivityEvents
public StreamingRecognitionConfig.Builder clearEnableVoiceActivityEvents()
If `true`, responses with voice activity speech events will be returned as they are detected.
bool enable_voice_activity_events = 5;
- Returns:
- This builder for chaining.
-
hasVoiceActivityTimeout
public boolean hasVoiceActivityTimeout()
If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent. The field `voice_activity_events` must also be set to true.
.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
- Specified by: hasVoiceActivityTimeout in interface StreamingRecognitionConfigOrBuilder
- Returns: Whether the voiceActivityTimeout field is set.
-
getVoiceActivityTimeout
public StreamingRecognitionConfig.VoiceActivityTimeout getVoiceActivityTimeout()
If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent. The field `voice_activity_events` must also be set to true.
.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
- Specified by: getVoiceActivityTimeout in interface StreamingRecognitionConfigOrBuilder
- Returns: The voiceActivityTimeout.
-
setVoiceActivityTimeout
public StreamingRecognitionConfig.Builder setVoiceActivityTimeout(StreamingRecognitionConfig.VoiceActivityTimeout value)
If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent. The field `voice_activity_events` must also be set to true.
.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
-
setVoiceActivityTimeout
public StreamingRecognitionConfig.Builder setVoiceActivityTimeout(StreamingRecognitionConfig.VoiceActivityTimeout.Builder builderForValue)
If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent. The field `voice_activity_events` must also be set to true.
.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
-
mergeVoiceActivityTimeout
public StreamingRecognitionConfig.Builder mergeVoiceActivityTimeout(StreamingRecognitionConfig.VoiceActivityTimeout value)
If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent. The field `voice_activity_events` must also be set to true.
.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
-
clearVoiceActivityTimeout
public StreamingRecognitionConfig.Builder clearVoiceActivityTimeout()
If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent. The field `voice_activity_events` must also be set to true.
.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
-
getVoiceActivityTimeoutBuilder
public StreamingRecognitionConfig.VoiceActivityTimeout.Builder getVoiceActivityTimeoutBuilder()
If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent. The field `voice_activity_events` must also be set to true.
.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
-
getVoiceActivityTimeoutOrBuilder
public StreamingRecognitionConfig.VoiceActivityTimeoutOrBuilder getVoiceActivityTimeoutOrBuilder()
If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent. The field `voice_activity_events` must also be set to true.
.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
- Specified by: getVoiceActivityTimeoutOrBuilder in interface StreamingRecognitionConfigOrBuilder
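Since the timeout only takes effect when voice activity events are enabled, the two fields are typically set together. A sketch with illustrative durations (the `speech_start_timeout` and `speech_end_timeout` sub-fields come from the nested `VoiceActivityTimeout` message):

```java
import com.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig;
import com.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout;
import com.google.protobuf.Duration;

public class VoiceActivitySketch {
  public static void main(String[] args) {
    StreamingRecognitionConfig.Builder builder =
        StreamingRecognitionConfig.newBuilder()
            .setEnableVoiceActivityEvents(true) // required for the timeout to apply
            .setVoiceActivityTimeout(
                VoiceActivityTimeout.newBuilder()
                    .setSpeechStartTimeout(Duration.newBuilder().setSeconds(5))  // illustrative
                    .setSpeechEndTimeout(Duration.newBuilder().setSeconds(2)));  // illustrative

    System.out.println(builder.hasVoiceActivityTimeout()); // true
  }
}
```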
-
setUnknownFields
public final StreamingRecognitionConfig.Builder setUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
- Specified by: setUnknownFields in interface com.google.protobuf.Message.Builder
- Overrides: setUnknownFields in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
mergeUnknownFields
public final StreamingRecognitionConfig.Builder mergeUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
- Specified by: mergeUnknownFields in interface com.google.protobuf.Message.Builder
- Overrides: mergeUnknownFields in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-