Package com.google.cloud.speech.v1
Class StreamingRecognitionConfig.Builder
- java.lang.Object
-
- com.google.protobuf.AbstractMessageLite.Builder
-
- com.google.protobuf.AbstractMessage.Builder<BuilderT>
-
- com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
- com.google.cloud.speech.v1.StreamingRecognitionConfig.Builder
-
- All Implemented Interfaces:
StreamingRecognitionConfigOrBuilder, com.google.protobuf.Message.Builder, com.google.protobuf.MessageLite.Builder, com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder, Cloneable
- Enclosing class:
- StreamingRecognitionConfig
public static final class StreamingRecognitionConfig.Builder extends com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder> implements StreamingRecognitionConfigOrBuilder
Provides information to the recognizer that specifies how to process the request.
Protobuf type: google.cloud.speech.v1.StreamingRecognitionConfig
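A minimal usage sketch of this builder (assuming the google-cloud-speech client library is on the classpath; values such as the encoding and language code are illustrative):

```java
import com.google.cloud.speech.v1.RecognitionConfig;
import com.google.cloud.speech.v1.StreamingRecognitionConfig;

public class StreamingConfigExample {
    // Builds a streaming config with the required nested RecognitionConfig.
    static StreamingRecognitionConfig buildStreamingConfig() {
        RecognitionConfig recognitionConfig =
            RecognitionConfig.newBuilder()
                .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
                .setSampleRateHertz(16000)
                .setLanguageCode("en-US")
                .build();
        return StreamingRecognitionConfig.newBuilder()
            .setConfig(recognitionConfig)       // required field
            .setInterimResults(true)            // also emit is_final=false hypotheses
            .build();
    }

    public static void main(String[] args) {
        StreamingRecognitionConfig cfg = buildStreamingConfig();
        System.out.println(cfg.hasConfig());          // true
        System.out.println(cfg.getInterimResults());  // true
    }
}
```

The built message is then sent as the first `StreamingRecognizeRequest` on the gRPC stream, before any audio content.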
-
-
Method Summary
All Methods | Static Methods | Instance Methods | Concrete Methods

StreamingRecognitionConfig.Builder addRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
StreamingRecognitionConfig build()
StreamingRecognitionConfig buildPartial()
StreamingRecognitionConfig.Builder clear()
StreamingRecognitionConfig.Builder clearConfig()
  Required.
StreamingRecognitionConfig.Builder clearEnableVoiceActivityEvents()
  If `true`, responses with voice activity speech events will be returned as they are detected.
StreamingRecognitionConfig.Builder clearField(com.google.protobuf.Descriptors.FieldDescriptor field)
StreamingRecognitionConfig.Builder clearInterimResults()
  If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag).
StreamingRecognitionConfig.Builder clearOneof(com.google.protobuf.Descriptors.OneofDescriptor oneof)
StreamingRecognitionConfig.Builder clearSingleUtterance()
  If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached.
StreamingRecognitionConfig.Builder clearVoiceActivityTimeout()
  If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent.
StreamingRecognitionConfig.Builder clone()
RecognitionConfig getConfig()
  Required.
RecognitionConfig.Builder getConfigBuilder()
  Required.
RecognitionConfigOrBuilder getConfigOrBuilder()
  Required.
StreamingRecognitionConfig getDefaultInstanceForType()
static com.google.protobuf.Descriptors.Descriptor getDescriptor()
com.google.protobuf.Descriptors.Descriptor getDescriptorForType()
boolean getEnableVoiceActivityEvents()
  If `true`, responses with voice activity speech events will be returned as they are detected.
boolean getInterimResults()
  If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag).
boolean getSingleUtterance()
  If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached.
StreamingRecognitionConfig.VoiceActivityTimeout getVoiceActivityTimeout()
  If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent.
StreamingRecognitionConfig.VoiceActivityTimeout.Builder getVoiceActivityTimeoutBuilder()
  If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent.
StreamingRecognitionConfig.VoiceActivityTimeoutOrBuilder getVoiceActivityTimeoutOrBuilder()
  If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent.
boolean hasConfig()
  Required.
boolean hasVoiceActivityTimeout()
  If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent.
protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
boolean isInitialized()
StreamingRecognitionConfig.Builder mergeConfig(RecognitionConfig value)
  Required.
StreamingRecognitionConfig.Builder mergeFrom(StreamingRecognitionConfig other)
StreamingRecognitionConfig.Builder mergeFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
StreamingRecognitionConfig.Builder mergeFrom(com.google.protobuf.Message other)
StreamingRecognitionConfig.Builder mergeUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
StreamingRecognitionConfig.Builder mergeVoiceActivityTimeout(StreamingRecognitionConfig.VoiceActivityTimeout value)
  If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent.
StreamingRecognitionConfig.Builder setConfig(RecognitionConfig value)
  Required.
StreamingRecognitionConfig.Builder setConfig(RecognitionConfig.Builder builderForValue)
  Required.
StreamingRecognitionConfig.Builder setEnableVoiceActivityEvents(boolean value)
  If `true`, responses with voice activity speech events will be returned as they are detected.
StreamingRecognitionConfig.Builder setField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
StreamingRecognitionConfig.Builder setInterimResults(boolean value)
  If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag).
StreamingRecognitionConfig.Builder setRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)
StreamingRecognitionConfig.Builder setSingleUtterance(boolean value)
  If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached.
StreamingRecognitionConfig.Builder setUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
StreamingRecognitionConfig.Builder setVoiceActivityTimeout(StreamingRecognitionConfig.VoiceActivityTimeout value)
  If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent.
StreamingRecognitionConfig.Builder setVoiceActivityTimeout(StreamingRecognitionConfig.VoiceActivityTimeout.Builder builderForValue)
  If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent.
-
Methods inherited from class com.google.protobuf.GeneratedMessageV3.Builder
getAllFields, getField, getFieldBuilder, getOneofFieldDescriptor, getParentForChildren, getRepeatedField, getRepeatedFieldBuilder, getRepeatedFieldCount, getUnknownFields, getUnknownFieldSetBuilder, hasField, hasOneof, internalGetMapField, internalGetMutableMapField, isClean, markClean, mergeUnknownLengthDelimitedField, mergeUnknownVarintField, newBuilderForField, onBuilt, onChanged, parseUnknownField, setUnknownFieldSetBuilder, setUnknownFieldsProto3
-
Methods inherited from class com.google.protobuf.AbstractMessage.Builder
findInitializationErrors, getInitializationErrorString, internalMergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, newUninitializedMessageException, toString
-
Methods inherited from class com.google.protobuf.AbstractMessageLite.Builder
addAll, addAll, mergeDelimitedFrom, mergeDelimitedFrom, mergeFrom, newUninitializedMessageException
-
Methods inherited from class java.lang.Object
equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
-
Method Detail
-
getDescriptor
public static final com.google.protobuf.Descriptors.Descriptor getDescriptor()
-
internalGetFieldAccessorTable
protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
- Specified by: internalGetFieldAccessorTable in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
clear
public StreamingRecognitionConfig.Builder clear()
- Specified by: clear in interface com.google.protobuf.Message.Builder
- Specified by: clear in interface com.google.protobuf.MessageLite.Builder
- Overrides: clear in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
getDescriptorForType
public com.google.protobuf.Descriptors.Descriptor getDescriptorForType()
- Specified by: getDescriptorForType in interface com.google.protobuf.Message.Builder
- Specified by: getDescriptorForType in interface com.google.protobuf.MessageOrBuilder
- Overrides: getDescriptorForType in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
getDefaultInstanceForType
public StreamingRecognitionConfig getDefaultInstanceForType()
- Specified by: getDefaultInstanceForType in interface com.google.protobuf.MessageLiteOrBuilder
- Specified by: getDefaultInstanceForType in interface com.google.protobuf.MessageOrBuilder
-
build
public StreamingRecognitionConfig build()
- Specified by: build in interface com.google.protobuf.Message.Builder
- Specified by: build in interface com.google.protobuf.MessageLite.Builder
-
buildPartial
public StreamingRecognitionConfig buildPartial()
- Specified by: buildPartial in interface com.google.protobuf.Message.Builder
- Specified by: buildPartial in interface com.google.protobuf.MessageLite.Builder
-
clone
public StreamingRecognitionConfig.Builder clone()
- Specified by: clone in interface com.google.protobuf.Message.Builder
- Specified by: clone in interface com.google.protobuf.MessageLite.Builder
- Overrides: clone in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
setField
public StreamingRecognitionConfig.Builder setField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
- Specified by: setField in interface com.google.protobuf.Message.Builder
- Overrides: setField in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
clearField
public StreamingRecognitionConfig.Builder clearField(com.google.protobuf.Descriptors.FieldDescriptor field)
- Specified by: clearField in interface com.google.protobuf.Message.Builder
- Overrides: clearField in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
clearOneof
public StreamingRecognitionConfig.Builder clearOneof(com.google.protobuf.Descriptors.OneofDescriptor oneof)
- Specified by: clearOneof in interface com.google.protobuf.Message.Builder
- Overrides: clearOneof in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
setRepeatedField
public StreamingRecognitionConfig.Builder setRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)
- Specified by: setRepeatedField in interface com.google.protobuf.Message.Builder
- Overrides: setRepeatedField in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
addRepeatedField
public StreamingRecognitionConfig.Builder addRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
- Specified by: addRepeatedField in interface com.google.protobuf.Message.Builder
- Overrides: addRepeatedField in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
mergeFrom
public StreamingRecognitionConfig.Builder mergeFrom(com.google.protobuf.Message other)
- Specified by: mergeFrom in interface com.google.protobuf.Message.Builder
- Overrides: mergeFrom in class com.google.protobuf.AbstractMessage.Builder<StreamingRecognitionConfig.Builder>
-
mergeFrom
public StreamingRecognitionConfig.Builder mergeFrom(StreamingRecognitionConfig other)
-
isInitialized
public final boolean isInitialized()
- Specified by: isInitialized in interface com.google.protobuf.MessageLiteOrBuilder
- Overrides: isInitialized in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
mergeFrom
public StreamingRecognitionConfig.Builder mergeFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws IOException
- Specified by: mergeFrom in interface com.google.protobuf.Message.Builder
- Specified by: mergeFrom in interface com.google.protobuf.MessageLite.Builder
- Overrides: mergeFrom in class com.google.protobuf.AbstractMessage.Builder<StreamingRecognitionConfig.Builder>
- Throws: IOException
-
hasConfig
public boolean hasConfig()
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
- Specified by: hasConfig in interface StreamingRecognitionConfigOrBuilder
- Returns: Whether the config field is set.
-
getConfig
public RecognitionConfig getConfig()
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
- Specified by: getConfig in interface StreamingRecognitionConfigOrBuilder
- Returns: The config.
-
setConfig
public StreamingRecognitionConfig.Builder setConfig(RecognitionConfig value)
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
-
setConfig
public StreamingRecognitionConfig.Builder setConfig(RecognitionConfig.Builder builderForValue)
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
-
mergeConfig
public StreamingRecognitionConfig.Builder mergeConfig(RecognitionConfig value)
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
-
clearConfig
public StreamingRecognitionConfig.Builder clearConfig()
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
-
getConfigBuilder
public RecognitionConfig.Builder getConfigBuilder()
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
-
getConfigOrBuilder
public RecognitionConfigOrBuilder getConfigOrBuilder()
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
- Specified by: getConfigOrBuilder in interface StreamingRecognitionConfigOrBuilder
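The config accessors above follow the standard protobuf pattern for a message-typed field: `setConfig` replaces the value, `getConfigBuilder` exposes a mutable sub-builder, and `mergeConfig` field-merges into the current value. A short sketch (method names from this page; the specific `RecognitionConfig` values are illustrative):

```java
import com.google.cloud.speech.v1.RecognitionConfig;
import com.google.cloud.speech.v1.StreamingRecognitionConfig;

public class ConfigAccessorsExample {
    static StreamingRecognitionConfig buildMerged() {
        StreamingRecognitionConfig.Builder builder = StreamingRecognitionConfig.newBuilder();

        // setConfig replaces the field wholesale.
        builder.setConfig(
            RecognitionConfig.newBuilder().setLanguageCode("en-US").build());

        // getConfigBuilder returns a mutable sub-builder; changes made through
        // it are reflected in the parent builder.
        builder.getConfigBuilder().setSampleRateHertz(16000);

        // mergeConfig merges set fields into the current value instead of
        // replacing it, so languageCode and sampleRateHertz survive.
        builder.mergeConfig(
            RecognitionConfig.newBuilder().setProfanityFilter(true).build());

        return builder.build();
    }
}
```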
-
getSingleUtterance
public boolean getSingleUtterance()
If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple `StreamingRecognitionResult`s with the `is_final` flag set to `true`.
If `true`, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an `END_OF_SINGLE_UTTERANCE` event and cease recognition. It will return no more than one `StreamingRecognitionResult` with the `is_final` flag set to `true`.
The `single_utterance` field can only be used with specified models, otherwise an error is thrown. The `model` field in [`RecognitionConfig`][] must be set to:
* `command_and_search`
* `phone_call` AND additional field `useEnhanced`=`true`
* The `model` field is left undefined. In this case the API auto-selects a model based on any other parameters that you set in `RecognitionConfig`.
bool single_utterance = 2;
- Specified by: getSingleUtterance in interface StreamingRecognitionConfigOrBuilder
- Returns: The singleUtterance.
-
setSingleUtterance
public StreamingRecognitionConfig.Builder setSingleUtterance(boolean value)
If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple `StreamingRecognitionResult`s with the `is_final` flag set to `true`.
If `true`, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an `END_OF_SINGLE_UTTERANCE` event and cease recognition. It will return no more than one `StreamingRecognitionResult` with the `is_final` flag set to `true`.
The `single_utterance` field can only be used with specified models, otherwise an error is thrown. The `model` field in [`RecognitionConfig`][] must be set to:
* `command_and_search`
* `phone_call` AND additional field `useEnhanced`=`true`
* The `model` field is left undefined. In this case the API auto-selects a model based on any other parameters that you set in `RecognitionConfig`.
bool single_utterance = 2;
- Parameters: value - The singleUtterance to set.
- Returns: This builder for chaining.
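Of the model combinations that permit `single_utterance`, leaving `model` unset is the simplest; a sketch under that assumption:

```java
import com.google.cloud.speech.v1.RecognitionConfig;
import com.google.cloud.speech.v1.StreamingRecognitionConfig;

public class SingleUtteranceExample {
    static StreamingRecognitionConfig buildSingleUtterance() {
        // Leaving RecognitionConfig.model unset lets the API auto-select a
        // model, which is one of the combinations that allows single_utterance.
        return StreamingRecognitionConfig.newBuilder()
            .setConfig(RecognitionConfig.newBuilder()
                .setLanguageCode("en-US")
                .build())
            .setSingleUtterance(true) // stop after one END_OF_SINGLE_UTTERANCE
            .build();
    }
}
```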
-
clearSingleUtterance
public StreamingRecognitionConfig.Builder clearSingleUtterance()
If `false` or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple `StreamingRecognitionResult`s with the `is_final` flag set to `true`.
If `true`, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an `END_OF_SINGLE_UTTERANCE` event and cease recognition. It will return no more than one `StreamingRecognitionResult` with the `is_final` flag set to `true`.
The `single_utterance` field can only be used with specified models, otherwise an error is thrown. The `model` field in [`RecognitionConfig`][] must be set to:
* `command_and_search`
* `phone_call` AND additional field `useEnhanced`=`true`
* The `model` field is left undefined. In this case the API auto-selects a model based on any other parameters that you set in `RecognitionConfig`.
bool single_utterance = 2;
- Returns: This builder for chaining.
-
getInterimResults
public boolean getInterimResults()
If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag). If `false` or omitted, only `is_final=true` result(s) are returned.
bool interim_results = 3;
- Specified by: getInterimResults in interface StreamingRecognitionConfigOrBuilder
- Returns: The interimResults.
-
setInterimResults
public StreamingRecognitionConfig.Builder setInterimResults(boolean value)
If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag). If `false` or omitted, only `is_final=true` result(s) are returned.
bool interim_results = 3;
- Parameters: value - The interimResults to set.
- Returns: This builder for chaining.
-
clearInterimResults
public StreamingRecognitionConfig.Builder clearInterimResults()
If `true`, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the `is_final=false` flag). If `false` or omitted, only `is_final=true` result(s) are returned.
bool interim_results = 3;
- Returns: This builder for chaining.
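As with any proto3 `bool` field, clearing restores the default (`false`); a small sketch of the set/clear round trip:

```java
import com.google.cloud.speech.v1.StreamingRecognitionConfig;

public class InterimResultsExample {
    static boolean afterClear() {
        StreamingRecognitionConfig.Builder b =
            StreamingRecognitionConfig.newBuilder().setInterimResults(true);
        // clearInterimResults restores the proto3 default (false), so only
        // is_final=true results would be returned on the stream.
        b.clearInterimResults();
        return b.getInterimResults();
    }
}
```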
-
getEnableVoiceActivityEvents
public boolean getEnableVoiceActivityEvents()
If `true`, responses with voice activity speech events will be returned as they are detected.
bool enable_voice_activity_events = 5;
- Specified by: getEnableVoiceActivityEvents in interface StreamingRecognitionConfigOrBuilder
- Returns: The enableVoiceActivityEvents.
-
setEnableVoiceActivityEvents
public StreamingRecognitionConfig.Builder setEnableVoiceActivityEvents(boolean value)
If `true`, responses with voice activity speech events will be returned as they are detected.
bool enable_voice_activity_events = 5;
- Parameters: value - The enableVoiceActivityEvents to set.
- Returns: This builder for chaining.
-
clearEnableVoiceActivityEvents
public StreamingRecognitionConfig.Builder clearEnableVoiceActivityEvents()
If `true`, responses with voice activity speech events will be returned as they are detected.
bool enable_voice_activity_events = 5;
- Returns: This builder for chaining.
-
hasVoiceActivityTimeout
public boolean hasVoiceActivityTimeout()
If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent. The field `voice_activity_events` must also be set to true.
.google.cloud.speech.v1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
- Specified by: hasVoiceActivityTimeout in interface StreamingRecognitionConfigOrBuilder
- Returns: Whether the voiceActivityTimeout field is set.
-
getVoiceActivityTimeout
public StreamingRecognitionConfig.VoiceActivityTimeout getVoiceActivityTimeout()
If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent. The field `voice_activity_events` must also be set to true.
.google.cloud.speech.v1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
- Specified by: getVoiceActivityTimeout in interface StreamingRecognitionConfigOrBuilder
- Returns: The voiceActivityTimeout.
-
setVoiceActivityTimeout
public StreamingRecognitionConfig.Builder setVoiceActivityTimeout(StreamingRecognitionConfig.VoiceActivityTimeout value)
If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent. The field `voice_activity_events` must also be set to true.
.google.cloud.speech.v1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
-
setVoiceActivityTimeout
public StreamingRecognitionConfig.Builder setVoiceActivityTimeout(StreamingRecognitionConfig.VoiceActivityTimeout.Builder builderForValue)
If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent. The field `voice_activity_events` must also be set to true.
.google.cloud.speech.v1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
-
mergeVoiceActivityTimeout
public StreamingRecognitionConfig.Builder mergeVoiceActivityTimeout(StreamingRecognitionConfig.VoiceActivityTimeout value)
If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent. The field `voice_activity_events` must also be set to true.
.google.cloud.speech.v1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
-
clearVoiceActivityTimeout
public StreamingRecognitionConfig.Builder clearVoiceActivityTimeout()
If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent. The field `voice_activity_events` must also be set to true.
.google.cloud.speech.v1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
-
getVoiceActivityTimeoutBuilder
public StreamingRecognitionConfig.VoiceActivityTimeout.Builder getVoiceActivityTimeoutBuilder()
If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent. The field `voice_activity_events` must also be set to true.
.google.cloud.speech.v1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
-
getVoiceActivityTimeoutOrBuilder
public StreamingRecognitionConfig.VoiceActivityTimeoutOrBuilder getVoiceActivityTimeoutOrBuilder()
If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent. The field `voice_activity_events` must also be set to true.
.google.cloud.speech.v1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
- Specified by: getVoiceActivityTimeoutOrBuilder in interface StreamingRecognitionConfigOrBuilder
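A sketch combining the two voice-activity fields above, which only work together: the timeout has no effect unless voice activity events are also enabled. The `speech_start_timeout` and `speech_end_timeout` sub-fields (both `google.protobuf.Duration`) are taken from the `VoiceActivityTimeout` message; the chosen durations are illustrative:

```java
import com.google.cloud.speech.v1.RecognitionConfig;
import com.google.cloud.speech.v1.StreamingRecognitionConfig;
import com.google.protobuf.Duration;

public class VoiceActivityExample {
    static StreamingRecognitionConfig buildWithTimeout() {
        return StreamingRecognitionConfig.newBuilder()
            .setConfig(RecognitionConfig.newBuilder()
                .setLanguageCode("en-US")
                .build())
            // The timeout only takes effect when voice activity events are enabled.
            .setEnableVoiceActivityEvents(true)
            .setVoiceActivityTimeout(
                StreamingRecognitionConfig.VoiceActivityTimeout.newBuilder()
                    // close the stream if speech never starts within 5 s ...
                    .setSpeechStartTimeout(Duration.newBuilder().setSeconds(5).build())
                    // ... or 2 s after the last detected speech
                    .setSpeechEndTimeout(Duration.newBuilder().setSeconds(2).build())
                    .build())
            .build();
    }
}
```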
-
setUnknownFields
public final StreamingRecognitionConfig.Builder setUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
- Specified by:
setUnknownFieldsin interfacecom.google.protobuf.Message.Builder- Overrides:
setUnknownFieldsin classcom.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
mergeUnknownFields
public final StreamingRecognitionConfig.Builder mergeUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
- Specified by:
mergeUnknownFieldsin interfacecom.google.protobuf.Message.Builder- Overrides:
mergeUnknownFieldsin classcom.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
-
-