Class StreamingRecognitionConfig.Builder

    • Method Detail

      • getDescriptor

        public static final com.google.protobuf.Descriptors.Descriptor getDescriptor()
      • internalGetFieldAccessorTable

        protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
        Specified by:
        internalGetFieldAccessorTable in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
      • getDescriptorForType

        public com.google.protobuf.Descriptors.Descriptor getDescriptorForType()
        Specified by:
        getDescriptorForType in interface com.google.protobuf.Message.Builder
        Specified by:
        getDescriptorForType in interface com.google.protobuf.MessageOrBuilder
        Overrides:
        getDescriptorForType in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
      • getDefaultInstanceForType

        public StreamingRecognitionConfig getDefaultInstanceForType()
        Specified by:
        getDefaultInstanceForType in interface com.google.protobuf.MessageLiteOrBuilder
        Specified by:
        getDefaultInstanceForType in interface com.google.protobuf.MessageOrBuilder
      • build

        public StreamingRecognitionConfig build()
        Specified by:
        build in interface com.google.protobuf.Message.Builder
        Specified by:
        build in interface com.google.protobuf.MessageLite.Builder
      • buildPartial

        public StreamingRecognitionConfig buildPartial()
        Specified by:
        buildPartial in interface com.google.protobuf.Message.Builder
        Specified by:
        buildPartial in interface com.google.protobuf.MessageLite.Builder
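        The difference between `build()` and `buildPartial()` can be seen in a short sketch. This assumes the standard `com.google.cloud.speech.v1p1beta1` package and ordinary protobuf-generated builder semantics (`newBuilder()` is not listed on this page but is the conventional factory):

        ```java
        import com.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig;

        public class BuildSketch {
          public static void main(String[] args) {
            StreamingRecognitionConfig.Builder builder = StreamingRecognitionConfig.newBuilder();

            // buildPartial() constructs the message without the usual
            // initialization checks, so it succeeds even though `config`
            // has not been set.
            StreamingRecognitionConfig partial = builder.buildPartial();
            System.out.println(partial.hasConfig()); // `config` was never set

            // build() is the conventional entry point once fields are populated.
            // Note that in proto3, (google.api.field_behavior) = REQUIRED is an
            // API-level annotation and is not enforced by build() itself.
            StreamingRecognitionConfig complete = builder.build();
          }
        }
        ```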
      • isInitialized

        public final boolean isInitialized()
        Specified by:
        isInitialized in interface com.google.protobuf.MessageLiteOrBuilder
        Overrides:
        isInitialized in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
      • mergeFrom

        public StreamingRecognitionConfig.Builder mergeFrom​(com.google.protobuf.CodedInputStream input,
                                                            com.google.protobuf.ExtensionRegistryLite extensionRegistry)
                                                     throws IOException
        Specified by:
        mergeFrom in interface com.google.protobuf.Message.Builder
        Specified by:
        mergeFrom in interface com.google.protobuf.MessageLite.Builder
        Overrides:
        mergeFrom in class com.google.protobuf.AbstractMessage.Builder<StreamingRecognitionConfig.Builder>
        Throws:
        IOException
      • hasConfig

        public boolean hasConfig()
         Required. Provides information to the recognizer that specifies how to
         process the request.
         
        .google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
        Specified by:
        hasConfig in interface StreamingRecognitionConfigOrBuilder
        Returns:
        Whether the config field is set.
      • getConfig

        public RecognitionConfig getConfig()
         Required. Provides information to the recognizer that specifies how to
         process the request.
         
        .google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
        Specified by:
        getConfig in interface StreamingRecognitionConfigOrBuilder
        Returns:
        The config.
      • setConfig

        public StreamingRecognitionConfig.Builder setConfig​(RecognitionConfig value)
         Required. Provides information to the recognizer that specifies how to
         process the request.
         
        .google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
      • setConfig

        public StreamingRecognitionConfig.Builder setConfig​(RecognitionConfig.Builder builderForValue)
         Required. Provides information to the recognizer that specifies how to
         process the request.
         
        .google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
      • mergeConfig

        public StreamingRecognitionConfig.Builder mergeConfig​(RecognitionConfig value)
         Required. Provides information to the recognizer that specifies how to
         process the request.
         
        .google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
      • clearConfig

        public StreamingRecognitionConfig.Builder clearConfig()
         Required. Provides information to the recognizer that specifies how to
         process the request.
         
        .google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
      • getConfigBuilder

        public RecognitionConfig.Builder getConfigBuilder()
         Required. Provides information to the recognizer that specifies how to
         process the request.
         
        .google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
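        Putting the `config` accessors above together, a minimal sketch of populating the required field might look like this. The `RecognitionConfig` setters shown (encoding, sample rate, language code) are assumptions drawn from the wider v1p1beta1 API, not from this page:

        ```java
        import com.google.cloud.speech.v1p1beta1.RecognitionConfig;
        import com.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig;

        public class ConfigSketch {
          public static void main(String[] args) {
            // Build the nested RecognitionConfig first...
            RecognitionConfig recognitionConfig = RecognitionConfig.newBuilder()
                .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
                .setSampleRateHertz(16000)
                .setLanguageCode("en-US")
                .build();

            // ...then attach it via setConfig(). hasConfig() reports whether
            // the message field has been populated on the builder.
            StreamingRecognitionConfig.Builder builder = StreamingRecognitionConfig.newBuilder()
                .setConfig(recognitionConfig);
            System.out.println(builder.hasConfig()); // true once setConfig() has run
          }
        }
        ```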
      • getSingleUtterance

        public boolean getSingleUtterance()
         If `false` or omitted, the recognizer will perform continuous
         recognition (continuing to wait for and process audio even if the user
         pauses speaking) until the client closes the input stream (gRPC API) or
         until the maximum time limit has been reached. May return multiple
         `StreamingRecognitionResult`s with the `is_final` flag set to `true`.
        
         If `true`, the recognizer will detect a single spoken utterance. When it
         detects that the user has paused or stopped speaking, it will return an
         `END_OF_SINGLE_UTTERANCE` event and cease recognition. It will return no
         more than one `StreamingRecognitionResult` with the `is_final` flag set to
         `true`.
        
         The `single_utterance` field can only be used with certain models;
         otherwise, an error is returned. The `model` field in [`RecognitionConfig`][]
         must be set to:
        
         * `command_and_search`
         * `phone_call` AND additional field `useEnhanced`=`true`
         * The `model` field is left undefined. In this case the API auto-selects
           a model based on any other parameters that you set in
           `RecognitionConfig`.
         
        bool single_utterance = 2;
        Specified by:
        getSingleUtterance in interface StreamingRecognitionConfigOrBuilder
        Returns:
        The singleUtterance.
      • setSingleUtterance

        public StreamingRecognitionConfig.Builder setSingleUtterance​(boolean value)
         If `false` or omitted, the recognizer will perform continuous
         recognition (continuing to wait for and process audio even if the user
         pauses speaking) until the client closes the input stream (gRPC API) or
         until the maximum time limit has been reached. May return multiple
         `StreamingRecognitionResult`s with the `is_final` flag set to `true`.
        
         If `true`, the recognizer will detect a single spoken utterance. When it
         detects that the user has paused or stopped speaking, it will return an
         `END_OF_SINGLE_UTTERANCE` event and cease recognition. It will return no
         more than one `StreamingRecognitionResult` with the `is_final` flag set to
         `true`.
        
         The `single_utterance` field can only be used with certain models;
         otherwise, an error is returned. The `model` field in [`RecognitionConfig`][]
         must be set to:
        
         * `command_and_search`
         * `phone_call` AND additional field `useEnhanced`=`true`
         * The `model` field is left undefined. In this case the API auto-selects
           a model based on any other parameters that you set in
           `RecognitionConfig`.
         
        bool single_utterance = 2;
        Parameters:
        value - The singleUtterance to set.
        Returns:
        This builder for chaining.
      • clearSingleUtterance

        public StreamingRecognitionConfig.Builder clearSingleUtterance()
         If `false` or omitted, the recognizer will perform continuous
         recognition (continuing to wait for and process audio even if the user
         pauses speaking) until the client closes the input stream (gRPC API) or
         until the maximum time limit has been reached. May return multiple
         `StreamingRecognitionResult`s with the `is_final` flag set to `true`.
        
         If `true`, the recognizer will detect a single spoken utterance. When it
         detects that the user has paused or stopped speaking, it will return an
         `END_OF_SINGLE_UTTERANCE` event and cease recognition. It will return no
         more than one `StreamingRecognitionResult` with the `is_final` flag set to
         `true`.
        
         The `single_utterance` field can only be used with certain models;
         otherwise, an error is returned. The `model` field in [`RecognitionConfig`][]
         must be set to:
        
         * `command_and_search`
         * `phone_call` AND additional field `useEnhanced`=`true`
         * The `model` field is left undefined. In this case the API auto-selects
           a model based on any other parameters that you set in
           `RecognitionConfig`.
         
        bool single_utterance = 2;
        Returns:
        This builder for chaining.
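        Given the model restrictions described above, enabling `single_utterance` might look like the following sketch. The `setLanguageCode` and `setModel` setters on `RecognitionConfig.Builder` are assumptions from the wider API; the `setConfig(RecognitionConfig.Builder)` overload is documented on this page:

        ```java
        import com.google.cloud.speech.v1p1beta1.RecognitionConfig;
        import com.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig;

        public class SingleUtteranceSketch {
          public static void main(String[] args) {
            StreamingRecognitionConfig config = StreamingRecognitionConfig.newBuilder()
                .setConfig(RecognitionConfig.newBuilder()
                    .setLanguageCode("en-US")
                    // `command_and_search` is one of the models documented above
                    // as compatible with single_utterance.
                    .setModel("command_and_search"))
                .setSingleUtterance(true)
                .build();
            System.out.println(config.getSingleUtterance()); // true
          }
        }
        ```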
      • getInterimResults

        public boolean getInterimResults()
         If `true`, interim results (tentative hypotheses) may be
         returned as they become available (these interim results are indicated with
         the `is_final=false` flag).
         If `false` or omitted, only `is_final=true` result(s) are returned.
         
        bool interim_results = 3;
        Specified by:
        getInterimResults in interface StreamingRecognitionConfigOrBuilder
        Returns:
        The interimResults.
      • setInterimResults

        public StreamingRecognitionConfig.Builder setInterimResults​(boolean value)
         If `true`, interim results (tentative hypotheses) may be
         returned as they become available (these interim results are indicated with
         the `is_final=false` flag).
         If `false` or omitted, only `is_final=true` result(s) are returned.
         
        bool interim_results = 3;
        Parameters:
        value - The interimResults to set.
        Returns:
        This builder for chaining.
      • clearInterimResults

        public StreamingRecognitionConfig.Builder clearInterimResults()
         If `true`, interim results (tentative hypotheses) may be
         returned as they become available (these interim results are indicated with
         the `is_final=false` flag).
         If `false` or omitted, only `is_final=true` result(s) are returned.
         
        bool interim_results = 3;
        Returns:
        This builder for chaining.
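        The set/clear pair for `interim_results` follows standard protobuf builder behavior; a brief sketch (assuming `newBuilder()` from the generated class):

        ```java
        import com.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig;

        public class InterimResultsSketch {
          public static void main(String[] args) {
            StreamingRecognitionConfig.Builder builder = StreamingRecognitionConfig.newBuilder()
                .setInterimResults(true);
            System.out.println(builder.getInterimResults()); // true

            // clearInterimResults() restores the proto3 default (false),
            // i.e. only is_final=true results would be returned.
            builder.clearInterimResults();
            System.out.println(builder.getInterimResults()); // false
          }
        }
        ```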
      • getEnableVoiceActivityEvents

        public boolean getEnableVoiceActivityEvents()
         If `true`, responses with voice activity speech events will be returned as
         they are detected.
         
        bool enable_voice_activity_events = 5;
        Specified by:
        getEnableVoiceActivityEvents in interface StreamingRecognitionConfigOrBuilder
        Returns:
        The enableVoiceActivityEvents.
      • setEnableVoiceActivityEvents

        public StreamingRecognitionConfig.Builder setEnableVoiceActivityEvents​(boolean value)
         If `true`, responses with voice activity speech events will be returned as
         they are detected.
         
        bool enable_voice_activity_events = 5;
        Parameters:
        value - The enableVoiceActivityEvents to set.
        Returns:
        This builder for chaining.
      • clearEnableVoiceActivityEvents

        public StreamingRecognitionConfig.Builder clearEnableVoiceActivityEvents()
         If `true`, responses with voice activity speech events will be returned as
         they are detected.
         
        bool enable_voice_activity_events = 5;
        Returns:
        This builder for chaining.
      • hasVoiceActivityTimeout

        public boolean hasVoiceActivityTimeout()
         If set, the server will automatically close the stream after the specified
         duration has elapsed after the last VOICE_ACTIVITY speech event has been
         sent. The field `enable_voice_activity_events` must also be set to true.
         
        .google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
        Specified by:
        hasVoiceActivityTimeout in interface StreamingRecognitionConfigOrBuilder
        Returns:
        Whether the voiceActivityTimeout field is set.
      • getVoiceActivityTimeout

        public StreamingRecognitionConfig.VoiceActivityTimeout getVoiceActivityTimeout()
         If set, the server will automatically close the stream after the specified
         duration has elapsed after the last VOICE_ACTIVITY speech event has been
         sent. The field `enable_voice_activity_events` must also be set to true.
         
        .google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
        Specified by:
        getVoiceActivityTimeout in interface StreamingRecognitionConfigOrBuilder
        Returns:
        The voiceActivityTimeout.
      • setVoiceActivityTimeout

        public StreamingRecognitionConfig.Builder setVoiceActivityTimeout​(StreamingRecognitionConfig.VoiceActivityTimeout value)
         If set, the server will automatically close the stream after the specified
         duration has elapsed after the last VOICE_ACTIVITY speech event has been
         sent. The field `enable_voice_activity_events` must also be set to true.
         
        .google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
      • setVoiceActivityTimeout

        public StreamingRecognitionConfig.Builder setVoiceActivityTimeout​(StreamingRecognitionConfig.VoiceActivityTimeout.Builder builderForValue)
         If set, the server will automatically close the stream after the specified
         duration has elapsed after the last VOICE_ACTIVITY speech event has been
         sent. The field `enable_voice_activity_events` must also be set to true.
         
        .google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
      • mergeVoiceActivityTimeout

        public StreamingRecognitionConfig.Builder mergeVoiceActivityTimeout​(StreamingRecognitionConfig.VoiceActivityTimeout value)
         If set, the server will automatically close the stream after the specified
         duration has elapsed after the last VOICE_ACTIVITY speech event has been
         sent. The field `enable_voice_activity_events` must also be set to true.
         
        .google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
      • clearVoiceActivityTimeout

        public StreamingRecognitionConfig.Builder clearVoiceActivityTimeout()
         If set, the server will automatically close the stream after the specified
         duration has elapsed after the last VOICE_ACTIVITY speech event has been
         sent. The field `enable_voice_activity_events` must also be set to true.
         
        .google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
      • getVoiceActivityTimeoutBuilder

        public StreamingRecognitionConfig.VoiceActivityTimeout.Builder getVoiceActivityTimeoutBuilder()
         If set, the server will automatically close the stream after the specified
         duration has elapsed after the last VOICE_ACTIVITY speech event has been
         sent. The field `enable_voice_activity_events` must also be set to true.
         
        .google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
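        Combining the two voice-activity settings, a sketch of configuring an automatic stream close might look like this. The `VoiceActivityTimeout` message's own fields are not listed on this page, so the `setSpeechEndTimeout` setter is an assumption; `com.google.protobuf.Duration` is the standard well-known type for durations:

        ```java
        import com.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig;
        import com.google.protobuf.Duration;

        public class VoiceActivitySketch {
          public static void main(String[] args) {
            // Hypothetical timeout: close the stream 5 seconds after the last
            // VOICE_ACTIVITY speech event.
            StreamingRecognitionConfig.VoiceActivityTimeout timeout =
                StreamingRecognitionConfig.VoiceActivityTimeout.newBuilder()
                    .setSpeechEndTimeout(Duration.newBuilder().setSeconds(5))
                    .build();

            StreamingRecognitionConfig config = StreamingRecognitionConfig.newBuilder()
                // Per the docs above, enable_voice_activity_events must be true
                // for the timeout to take effect.
                .setEnableVoiceActivityEvents(true)
                .setVoiceActivityTimeout(timeout)
                .build();
            System.out.println(config.hasVoiceActivityTimeout()); // true
          }
        }
        ```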
      • setUnknownFields

        public final StreamingRecognitionConfig.Builder setUnknownFields​(com.google.protobuf.UnknownFieldSet unknownFields)
        Specified by:
        setUnknownFields in interface com.google.protobuf.Message.Builder
        Overrides:
        setUnknownFields in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>
      • mergeUnknownFields

        public final StreamingRecognitionConfig.Builder mergeUnknownFields​(com.google.protobuf.UnknownFieldSet unknownFields)
        Specified by:
        mergeUnknownFields in interface com.google.protobuf.Message.Builder
        Overrides:
        mergeUnknownFields in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder>