Class StreamingDetectIntentRequest.Builder

  • All Implemented Interfaces:
    StreamingDetectIntentRequestOrBuilder, com.google.protobuf.Message.Builder, com.google.protobuf.MessageLite.Builder, com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder, Cloneable
  • Enclosing class:
    StreamingDetectIntentRequest

    public static final class StreamingDetectIntentRequest.Builder
    extends com.google.protobuf.GeneratedMessageV3.Builder<StreamingDetectIntentRequest.Builder>
    implements StreamingDetectIntentRequestOrBuilder
     The top-level message sent by the client to the
     [Sessions.StreamingDetectIntent][google.cloud.dialogflow.v2.Sessions.StreamingDetectIntent]
     method.
    
     Multiple request messages should be sent in order:
    
     1.  The first message must contain
     [session][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.session],
         [query_input][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.query_input]
         plus optionally
         [query_params][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.query_params].
         If the client wants to receive an audio response, it should also contain
         [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config].
         The message must not contain
         [input_audio][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.input_audio].
     2.  If
     [query_input][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.query_input]
     was set to
         [query_input.audio_config][google.cloud.dialogflow.v2.InputAudioConfig],
         all subsequent messages must contain
         [input_audio][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.input_audio]
     to continue with Speech recognition. If you decide instead to detect an
     intent from text input after you have already started Speech recognition,
     send a message with
     [query_input.text][google.cloud.dialogflow.v2.QueryInput.text].
    
         However, note that:
    
         * Dialogflow will bill you for the audio duration so far.
         * Dialogflow discards all Speech recognition results in favor of the
           input text.
         * Dialogflow will use the language code from the first message.
    
 After you have sent all input, you must half-close or abort the request stream.
     
    Protobuf type google.cloud.dialogflow.v2.StreamingDetectIntentRequest
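The message ordering rules above can be sketched as follows. This is a hedged illustration, not part of this reference: the gRPC stream observer (`requestObserver`), the `audioChunks` source, and the specific encodings are assumptions; the sketch also requires the `google-cloud-dialogflow` library on the classpath.

```java
// First message: session + query_input (+ optional output_audio_config),
// and no input_audio.
StreamingDetectIntentRequest first =
    StreamingDetectIntentRequest.newBuilder()
        .setSession("projects/my-project/agent/sessions/my-session") // required
        .setQueryInput(
            QueryInput.newBuilder()
                .setAudioConfig(
                    InputAudioConfig.newBuilder()
                        .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
                        .setSampleRateHertz(16000)
                        .setLanguageCode("en-US")))
        // Only needed if an audio response is wanted:
        .setOutputAudioConfig(
            OutputAudioConfig.newBuilder()
                .setAudioEncoding(OutputAudioEncoding.OUTPUT_AUDIO_ENCODING_LINEAR_16))
        .build();
requestObserver.onNext(first);

// Subsequent messages: audio chunks only (query_input was an audio config).
for (com.google.protobuf.ByteString chunk : audioChunks) {
  requestObserver.onNext(
      StreamingDetectIntentRequest.newBuilder().setInputAudio(chunk).build());
}

// Half-close the stream once all input has been sent.
requestObserver.onCompleted();
```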
    • Method Detail

      • getDescriptor

        public static final com.google.protobuf.Descriptors.Descriptor getDescriptor()
      • internalGetFieldAccessorTable

        protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
        Specified by:
        internalGetFieldAccessorTable in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingDetectIntentRequest.Builder>
      • getDescriptorForType

        public com.google.protobuf.Descriptors.Descriptor getDescriptorForType()
        Specified by:
        getDescriptorForType in interface com.google.protobuf.Message.Builder
        Specified by:
        getDescriptorForType in interface com.google.protobuf.MessageOrBuilder
        Overrides:
        getDescriptorForType in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingDetectIntentRequest.Builder>
      • getDefaultInstanceForType

        public StreamingDetectIntentRequest getDefaultInstanceForType()
        Specified by:
        getDefaultInstanceForType in interface com.google.protobuf.MessageLiteOrBuilder
        Specified by:
        getDefaultInstanceForType in interface com.google.protobuf.MessageOrBuilder
      • build

        public StreamingDetectIntentRequest build()
        Specified by:
        build in interface com.google.protobuf.Message.Builder
        Specified by:
        build in interface com.google.protobuf.MessageLite.Builder
      • buildPartial

        public StreamingDetectIntentRequest buildPartial()
        Specified by:
        buildPartial in interface com.google.protobuf.Message.Builder
        Specified by:
        buildPartial in interface com.google.protobuf.MessageLite.Builder
      • isInitialized

        public final boolean isInitialized()
        Specified by:
        isInitialized in interface com.google.protobuf.MessageLiteOrBuilder
        Overrides:
        isInitialized in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingDetectIntentRequest.Builder>
      • mergeFrom

        public StreamingDetectIntentRequest.Builder mergeFrom​(com.google.protobuf.CodedInputStream input,
                                                              com.google.protobuf.ExtensionRegistryLite extensionRegistry)
                                                       throws IOException
        Specified by:
        mergeFrom in interface com.google.protobuf.Message.Builder
        Specified by:
        mergeFrom in interface com.google.protobuf.MessageLite.Builder
        Overrides:
        mergeFrom in class com.google.protobuf.AbstractMessage.Builder<StreamingDetectIntentRequest.Builder>
        Throws:
        IOException
      • getSession

        public String getSession()
         Required. The name of the session the query is sent to.
         Format of the session name:
         `projects/<Project ID>/agent/sessions/<Session ID>`, or
         `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
     ID>/sessions/<Session ID>`. If `Environment ID` is not specified, we assume
     the default 'draft' environment. If `User ID` is not specified, we use
     "-". It's up to the API caller to choose an appropriate `Session ID` and
     `User ID`. They can be a random number or some type of user and session
         identifiers (preferably hashed). The length of the `Session ID` and
         `User ID` must not exceed 36 characters.
        
         For more information, see the [API interactions
         guide](https://cloud.google.com/dialogflow/docs/api-overview).
        
         Note: Always use agent versions for production traffic.
         See [Versions and
         environments](https://cloud.google.com/dialogflow/es/docs/agents-versions).
         
        string session = 1 [(.google.api.field_behavior) = REQUIRED, (.google.api.resource_reference) = { ... }
        Specified by:
        getSession in interface StreamingDetectIntentRequestOrBuilder
        Returns:
        The session.
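As a small illustration of the session-name format above (the project ID is made up), the name can be assembled with plain string formatting; a random UUID string conveniently is exactly 36 characters, the documented maximum for `Session ID`:

```java
import java.util.UUID;

public class SessionNameDemo {
    // Builds a session name in the documented format:
    // projects/<Project ID>/agent/sessions/<Session ID>
    static String sessionName(String projectId, String sessionId) {
        return String.format("projects/%s/agent/sessions/%s", projectId, sessionId);
    }

    public static void main(String[] args) {
        // A random UUID string is 36 characters long (preferably use a hashed
        // user/session identifier in real applications, per the docs).
        String sessionId = UUID.randomUUID().toString();
        System.out.println(sessionId.length());  // 36
        System.out.println(sessionName("my-project", sessionId));
    }
}
```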
      • getSessionBytes

        public com.google.protobuf.ByteString getSessionBytes()
         Required. The name of the session the query is sent to.
         Format of the session name:
         `projects/<Project ID>/agent/sessions/<Session ID>`, or
         `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
     ID>/sessions/<Session ID>`. If `Environment ID` is not specified, we assume
     the default 'draft' environment. If `User ID` is not specified, we use
     "-". It's up to the API caller to choose an appropriate `Session ID` and
     `User ID`. They can be a random number or some type of user and session
         identifiers (preferably hashed). The length of the `Session ID` and
         `User ID` must not exceed 36 characters.
        
         For more information, see the [API interactions
         guide](https://cloud.google.com/dialogflow/docs/api-overview).
        
         Note: Always use agent versions for production traffic.
         See [Versions and
         environments](https://cloud.google.com/dialogflow/es/docs/agents-versions).
         
        string session = 1 [(.google.api.field_behavior) = REQUIRED, (.google.api.resource_reference) = { ... }
        Specified by:
        getSessionBytes in interface StreamingDetectIntentRequestOrBuilder
        Returns:
        The bytes for session.
      • setSession

        public StreamingDetectIntentRequest.Builder setSession​(String value)
         Required. The name of the session the query is sent to.
         Format of the session name:
         `projects/<Project ID>/agent/sessions/<Session ID>`, or
         `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
     ID>/sessions/<Session ID>`. If `Environment ID` is not specified, we assume
     the default 'draft' environment. If `User ID` is not specified, we use
     "-". It's up to the API caller to choose an appropriate `Session ID` and
     `User ID`. They can be a random number or some type of user and session
         identifiers (preferably hashed). The length of the `Session ID` and
         `User ID` must not exceed 36 characters.
        
         For more information, see the [API interactions
         guide](https://cloud.google.com/dialogflow/docs/api-overview).
        
         Note: Always use agent versions for production traffic.
         See [Versions and
         environments](https://cloud.google.com/dialogflow/es/docs/agents-versions).
         
        string session = 1 [(.google.api.field_behavior) = REQUIRED, (.google.api.resource_reference) = { ... }
        Parameters:
        value - The session to set.
        Returns:
        This builder for chaining.
      • clearSession

        public StreamingDetectIntentRequest.Builder clearSession()
         Required. The name of the session the query is sent to.
         Format of the session name:
         `projects/<Project ID>/agent/sessions/<Session ID>`, or
         `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
     ID>/sessions/<Session ID>`. If `Environment ID` is not specified, we assume
     the default 'draft' environment. If `User ID` is not specified, we use
     "-". It's up to the API caller to choose an appropriate `Session ID` and
     `User ID`. They can be a random number or some type of user and session
         identifiers (preferably hashed). The length of the `Session ID` and
         `User ID` must not exceed 36 characters.
        
         For more information, see the [API interactions
         guide](https://cloud.google.com/dialogflow/docs/api-overview).
        
         Note: Always use agent versions for production traffic.
         See [Versions and
         environments](https://cloud.google.com/dialogflow/es/docs/agents-versions).
         
        string session = 1 [(.google.api.field_behavior) = REQUIRED, (.google.api.resource_reference) = { ... }
        Returns:
        This builder for chaining.
      • setSessionBytes

        public StreamingDetectIntentRequest.Builder setSessionBytes​(com.google.protobuf.ByteString value)
         Required. The name of the session the query is sent to.
         Format of the session name:
         `projects/<Project ID>/agent/sessions/<Session ID>`, or
         `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
     ID>/sessions/<Session ID>`. If `Environment ID` is not specified, we assume
     the default 'draft' environment. If `User ID` is not specified, we use
     "-". It's up to the API caller to choose an appropriate `Session ID` and
     `User ID`. They can be a random number or some type of user and session
         identifiers (preferably hashed). The length of the `Session ID` and
         `User ID` must not exceed 36 characters.
        
         For more information, see the [API interactions
         guide](https://cloud.google.com/dialogflow/docs/api-overview).
        
         Note: Always use agent versions for production traffic.
         See [Versions and
         environments](https://cloud.google.com/dialogflow/es/docs/agents-versions).
         
        string session = 1 [(.google.api.field_behavior) = REQUIRED, (.google.api.resource_reference) = { ... }
        Parameters:
        value - The bytes for session to set.
        Returns:
        This builder for chaining.
      • hasQueryParams

        public boolean hasQueryParams()
         The parameters of this query.
         
        .google.cloud.dialogflow.v2.QueryParameters query_params = 2;
        Specified by:
        hasQueryParams in interface StreamingDetectIntentRequestOrBuilder
        Returns:
        Whether the queryParams field is set.
      • getQueryParamsBuilder

        public QueryParameters.Builder getQueryParamsBuilder()
         The parameters of this query.
         
        .google.cloud.dialogflow.v2.QueryParameters query_params = 2;
      • hasQueryInput

        public boolean hasQueryInput()
         Required. The input specification. It can be set to:
        
         1. an audio config which instructs the speech recognizer how to process
         the speech audio,
        
         2. a conversational query in the form of text, or
        
         3. an event that specifies which intent to trigger.
         
        .google.cloud.dialogflow.v2.QueryInput query_input = 3 [(.google.api.field_behavior) = REQUIRED];
        Specified by:
        hasQueryInput in interface StreamingDetectIntentRequestOrBuilder
        Returns:
        Whether the queryInput field is set.
      • getQueryInput

        public QueryInput getQueryInput()
         Required. The input specification. It can be set to:
        
         1. an audio config which instructs the speech recognizer how to process
         the speech audio,
        
         2. a conversational query in the form of text, or
        
         3. an event that specifies which intent to trigger.
         
        .google.cloud.dialogflow.v2.QueryInput query_input = 3 [(.google.api.field_behavior) = REQUIRED];
        Specified by:
        getQueryInput in interface StreamingDetectIntentRequestOrBuilder
        Returns:
        The queryInput.
      • setQueryInput

        public StreamingDetectIntentRequest.Builder setQueryInput​(QueryInput value)
         Required. The input specification. It can be set to:
        
         1. an audio config which instructs the speech recognizer how to process
         the speech audio,
        
         2. a conversational query in the form of text, or
        
         3. an event that specifies which intent to trigger.
         
        .google.cloud.dialogflow.v2.QueryInput query_input = 3 [(.google.api.field_behavior) = REQUIRED];
      • setQueryInput

        public StreamingDetectIntentRequest.Builder setQueryInput​(QueryInput.Builder builderForValue)
         Required. The input specification. It can be set to:
        
         1. an audio config which instructs the speech recognizer how to process
         the speech audio,
        
         2. a conversational query in the form of text, or
        
         3. an event that specifies which intent to trigger.
         
        .google.cloud.dialogflow.v2.QueryInput query_input = 3 [(.google.api.field_behavior) = REQUIRED];
      • mergeQueryInput

        public StreamingDetectIntentRequest.Builder mergeQueryInput​(QueryInput value)
         Required. The input specification. It can be set to:
        
         1. an audio config which instructs the speech recognizer how to process
         the speech audio,
        
         2. a conversational query in the form of text, or
        
         3. an event that specifies which intent to trigger.
         
        .google.cloud.dialogflow.v2.QueryInput query_input = 3 [(.google.api.field_behavior) = REQUIRED];
      • clearQueryInput

        public StreamingDetectIntentRequest.Builder clearQueryInput()
         Required. The input specification. It can be set to:
        
         1. an audio config which instructs the speech recognizer how to process
         the speech audio,
        
         2. a conversational query in the form of text, or
        
         3. an event that specifies which intent to trigger.
         
        .google.cloud.dialogflow.v2.QueryInput query_input = 3 [(.google.api.field_behavior) = REQUIRED];
      • getQueryInputBuilder

        public QueryInput.Builder getQueryInputBuilder()
         Required. The input specification. It can be set to:
        
         1. an audio config which instructs the speech recognizer how to process
         the speech audio,
        
         2. a conversational query in the form of text, or
        
         3. an event that specifies which intent to trigger.
         
        .google.cloud.dialogflow.v2.QueryInput query_input = 3 [(.google.api.field_behavior) = REQUIRED];
      • getQueryInputOrBuilder

        public QueryInputOrBuilder getQueryInputOrBuilder()
         Required. The input specification. It can be set to:
        
         1. an audio config which instructs the speech recognizer how to process
         the speech audio,
        
         2. a conversational query in the form of text, or
        
         3. an event that specifies which intent to trigger.
         
        .google.cloud.dialogflow.v2.QueryInput query_input = 3 [(.google.api.field_behavior) = REQUIRED];
        Specified by:
        getQueryInputOrBuilder in interface StreamingDetectIntentRequestOrBuilder
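The three documented forms of `query_input` can be sketched as follows. A hedged illustration (the encodings, texts, and event name are made-up values; the `google-cloud-dialogflow` library is assumed):

```java
// 1. Audio config: instructs the recognizer how to process later input_audio.
QueryInput audio = QueryInput.newBuilder()
    .setAudioConfig(InputAudioConfig.newBuilder()
        .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
        .setSampleRateHertz(16000)
        .setLanguageCode("en-US"))
    .build();

// 2. Text: a conversational query in text form.
QueryInput text = QueryInput.newBuilder()
    .setText(TextInput.newBuilder()
        .setText("book a flight to Paris")
        .setLanguageCode("en-US"))
    .build();

// 3. Event: triggers a specific intent directly.
QueryInput event = QueryInput.newBuilder()
    .setEvent(EventInput.newBuilder()
        .setName("WELCOME")
        .setLanguageCode("en-US"))
    .build();
```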
      • getSingleUtterance

        @Deprecated
        public boolean getSingleUtterance()
        Deprecated.
        google.cloud.dialogflow.v2.StreamingDetectIntentRequest.single_utterance is deprecated. See google/cloud/dialogflow/v2/session.proto;l=469
         Please use
         [InputAudioConfig.single_utterance][google.cloud.dialogflow.v2.InputAudioConfig.single_utterance]
         instead. If `false` (default), recognition does not cease until the client
         closes the stream. If `true`, the recognizer will detect a single spoken
         utterance in input audio. Recognition ceases when it detects the audio's
         voice has stopped or paused. In this case, once a detected intent is
         received, the client should close the stream and start a new request with a
         new stream as needed. This setting is ignored when `query_input` is a piece
         of text or an event.
         
        bool single_utterance = 4 [deprecated = true];
        Specified by:
        getSingleUtterance in interface StreamingDetectIntentRequestOrBuilder
        Returns:
        The singleUtterance.
      • setSingleUtterance

        @Deprecated
        public StreamingDetectIntentRequest.Builder setSingleUtterance​(boolean value)
        Deprecated.
        google.cloud.dialogflow.v2.StreamingDetectIntentRequest.single_utterance is deprecated. See google/cloud/dialogflow/v2/session.proto;l=469
         Please use
         [InputAudioConfig.single_utterance][google.cloud.dialogflow.v2.InputAudioConfig.single_utterance]
         instead. If `false` (default), recognition does not cease until the client
         closes the stream. If `true`, the recognizer will detect a single spoken
         utterance in input audio. Recognition ceases when it detects the audio's
         voice has stopped or paused. In this case, once a detected intent is
         received, the client should close the stream and start a new request with a
         new stream as needed. This setting is ignored when `query_input` is a piece
         of text or an event.
         
        bool single_utterance = 4 [deprecated = true];
        Parameters:
        value - The singleUtterance to set.
        Returns:
        This builder for chaining.
      • clearSingleUtterance

        @Deprecated
        public StreamingDetectIntentRequest.Builder clearSingleUtterance()
        Deprecated.
        google.cloud.dialogflow.v2.StreamingDetectIntentRequest.single_utterance is deprecated. See google/cloud/dialogflow/v2/session.proto;l=469
         Please use
         [InputAudioConfig.single_utterance][google.cloud.dialogflow.v2.InputAudioConfig.single_utterance]
         instead. If `false` (default), recognition does not cease until the client
         closes the stream. If `true`, the recognizer will detect a single spoken
         utterance in input audio. Recognition ceases when it detects the audio's
         voice has stopped or paused. In this case, once a detected intent is
         received, the client should close the stream and start a new request with a
         new stream as needed. This setting is ignored when `query_input` is a piece
         of text or an event.
         
        bool single_utterance = 4 [deprecated = true];
        Returns:
        This builder for chaining.
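Since the request-level `single_utterance` field is deprecated, the equivalent setting belongs on the `InputAudioConfig` inside `query_input`. A hedged sketch (the `session` variable and encoding values are assumptions; requires the `google-cloud-dialogflow` library):

```java
// Deprecated, request-level:
//   builder.setSingleUtterance(true);
// Recommended, inside the audio config:
StreamingDetectIntentRequest request = StreamingDetectIntentRequest.newBuilder()
    .setSession(session) // assumed to hold a valid session name
    .setQueryInput(QueryInput.newBuilder()
        .setAudioConfig(InputAudioConfig.newBuilder()
            .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
            .setSampleRateHertz(16000)
            .setLanguageCode("en-US")
            .setSingleUtterance(true))) // replaces the deprecated field
    .build();
```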
      • hasOutputAudioConfig

        public boolean hasOutputAudioConfig()
         Instructs the speech synthesizer how to generate the output
         audio. If this field is not set and agent-level speech synthesizer is not
         configured, no output audio is generated.
         
        .google.cloud.dialogflow.v2.OutputAudioConfig output_audio_config = 5;
        Specified by:
        hasOutputAudioConfig in interface StreamingDetectIntentRequestOrBuilder
        Returns:
        Whether the outputAudioConfig field is set.
      • getOutputAudioConfig

        public OutputAudioConfig getOutputAudioConfig()
         Instructs the speech synthesizer how to generate the output
         audio. If this field is not set and agent-level speech synthesizer is not
         configured, no output audio is generated.
         
        .google.cloud.dialogflow.v2.OutputAudioConfig output_audio_config = 5;
        Specified by:
        getOutputAudioConfig in interface StreamingDetectIntentRequestOrBuilder
        Returns:
        The outputAudioConfig.
      • setOutputAudioConfig

        public StreamingDetectIntentRequest.Builder setOutputAudioConfig​(OutputAudioConfig value)
         Instructs the speech synthesizer how to generate the output
         audio. If this field is not set and agent-level speech synthesizer is not
         configured, no output audio is generated.
         
        .google.cloud.dialogflow.v2.OutputAudioConfig output_audio_config = 5;
      • setOutputAudioConfig

        public StreamingDetectIntentRequest.Builder setOutputAudioConfig​(OutputAudioConfig.Builder builderForValue)
         Instructs the speech synthesizer how to generate the output
         audio. If this field is not set and agent-level speech synthesizer is not
         configured, no output audio is generated.
         
        .google.cloud.dialogflow.v2.OutputAudioConfig output_audio_config = 5;
      • mergeOutputAudioConfig

        public StreamingDetectIntentRequest.Builder mergeOutputAudioConfig​(OutputAudioConfig value)
         Instructs the speech synthesizer how to generate the output
         audio. If this field is not set and agent-level speech synthesizer is not
         configured, no output audio is generated.
         
        .google.cloud.dialogflow.v2.OutputAudioConfig output_audio_config = 5;
      • clearOutputAudioConfig

        public StreamingDetectIntentRequest.Builder clearOutputAudioConfig()
         Instructs the speech synthesizer how to generate the output
         audio. If this field is not set and agent-level speech synthesizer is not
         configured, no output audio is generated.
         
        .google.cloud.dialogflow.v2.OutputAudioConfig output_audio_config = 5;
      • getOutputAudioConfigBuilder

        public OutputAudioConfig.Builder getOutputAudioConfigBuilder()
         Instructs the speech synthesizer how to generate the output
         audio. If this field is not set and agent-level speech synthesizer is not
         configured, no output audio is generated.
         
        .google.cloud.dialogflow.v2.OutputAudioConfig output_audio_config = 5;
      • hasOutputAudioConfigMask

        public boolean hasOutputAudioConfigMask()
         Mask for
         [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]
         indicating which settings in this request-level config should override
         speech synthesizer settings defined at agent-level.
        
         If unspecified or empty,
         [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]
         replaces the agent-level config in its entirety.
         
        .google.protobuf.FieldMask output_audio_config_mask = 7;
        Specified by:
        hasOutputAudioConfigMask in interface StreamingDetectIntentRequestOrBuilder
        Returns:
        Whether the outputAudioConfigMask field is set.
      • getOutputAudioConfigMask

        public com.google.protobuf.FieldMask getOutputAudioConfigMask()
         Mask for
         [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]
         indicating which settings in this request-level config should override
         speech synthesizer settings defined at agent-level.
        
         If unspecified or empty,
         [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]
         replaces the agent-level config in its entirety.
         
        .google.protobuf.FieldMask output_audio_config_mask = 7;
        Specified by:
        getOutputAudioConfigMask in interface StreamingDetectIntentRequestOrBuilder
        Returns:
        The outputAudioConfigMask.
      • setOutputAudioConfigMask

        public StreamingDetectIntentRequest.Builder setOutputAudioConfigMask​(com.google.protobuf.FieldMask value)
         Mask for
         [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]
         indicating which settings in this request-level config should override
         speech synthesizer settings defined at agent-level.
        
         If unspecified or empty,
         [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]
         replaces the agent-level config in its entirety.
         
        .google.protobuf.FieldMask output_audio_config_mask = 7;
      • setOutputAudioConfigMask

        public StreamingDetectIntentRequest.Builder setOutputAudioConfigMask​(com.google.protobuf.FieldMask.Builder builderForValue)
         Mask for
         [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]
         indicating which settings in this request-level config should override
         speech synthesizer settings defined at agent-level.
        
         If unspecified or empty,
         [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]
         replaces the agent-level config in its entirety.
         
        .google.protobuf.FieldMask output_audio_config_mask = 7;
      • mergeOutputAudioConfigMask

        public StreamingDetectIntentRequest.Builder mergeOutputAudioConfigMask​(com.google.protobuf.FieldMask value)
         Mask for
         [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]
         indicating which settings in this request-level config should override
         speech synthesizer settings defined at agent-level.
        
         If unspecified or empty,
         [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]
         replaces the agent-level config in its entirety.
         
        .google.protobuf.FieldMask output_audio_config_mask = 7;
      • clearOutputAudioConfigMask

        public StreamingDetectIntentRequest.Builder clearOutputAudioConfigMask()
         Mask for
         [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]
         indicating which settings in this request-level config should override
         speech synthesizer settings defined at agent-level.
        
         If unspecified or empty,
         [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]
         replaces the agent-level config in its entirety.
         
        .google.protobuf.FieldMask output_audio_config_mask = 7;
      • getOutputAudioConfigMaskBuilder

        public com.google.protobuf.FieldMask.Builder getOutputAudioConfigMaskBuilder()
         Mask for
         [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]
         indicating which settings in this request-level config should override
         speech synthesizer settings defined at agent-level.
        
         If unspecified or empty,
         [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]
         replaces the agent-level config in its entirety.
         
        .google.protobuf.FieldMask output_audio_config_mask = 7;
      • getOutputAudioConfigMaskOrBuilder

        public com.google.protobuf.FieldMaskOrBuilder getOutputAudioConfigMaskOrBuilder()
         Mask for
         [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]
         indicating which settings in this request-level config should override
         speech synthesizer settings defined at agent-level.
        
         If unspecified or empty,
         [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]
         replaces the agent-level config in its entirety.
         
        .google.protobuf.FieldMask output_audio_config_mask = 7;
        Specified by:
        getOutputAudioConfigMaskOrBuilder in interface StreamingDetectIntentRequestOrBuilder
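The mask semantics above can be sketched as follows. A hedged illustration: the mask path is assumed to use the snake_case field name of `OutputAudioConfig`, and the speaking rate is a made-up value (requires the `google-cloud-dialogflow` and protobuf libraries):

```java
// Override only the synthesize_speech_config part of the agent-level output
// audio settings; with the mask set, other agent-level settings are kept.
// Without the mask, output_audio_config would replace the agent-level config
// in its entirety.
StreamingDetectIntentRequest.Builder builder =
    StreamingDetectIntentRequest.newBuilder()
        .setOutputAudioConfig(OutputAudioConfig.newBuilder()
            .setSynthesizeSpeechConfig(SynthesizeSpeechConfig.newBuilder()
                .setSpeakingRate(1.25)))
        .setOutputAudioConfigMask(com.google.protobuf.FieldMask.newBuilder()
            .addPaths("synthesize_speech_config"));
```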
      • getInputAudio

        public com.google.protobuf.ByteString getInputAudio()
         The input audio content to be recognized. Must be sent if
         `query_input` was set to a streaming input audio config. The complete audio
         over all streaming messages must not exceed 1 minute.
         
        bytes input_audio = 6;
        Specified by:
        getInputAudio in interface StreamingDetectIntentRequestOrBuilder
        Returns:
        The inputAudio.
      • setInputAudio

        public StreamingDetectIntentRequest.Builder setInputAudio​(com.google.protobuf.ByteString value)
         The input audio content to be recognized. Must be sent if
         `query_input` was set to a streaming input audio config. The complete audio
         over all streaming messages must not exceed 1 minute.
         
        bytes input_audio = 6;
        Parameters:
        value - The inputAudio to set.
        Returns:
        This builder for chaining.
      • clearInputAudio

        public StreamingDetectIntentRequest.Builder clearInputAudio()
         The input audio content to be recognized. Must be sent if
         `query_input` was set to a streaming input audio config. The complete audio
         over all streaming messages must not exceed 1 minute.
         
        bytes input_audio = 6;
        Returns:
        This builder for chaining.
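Since the complete audio across all streaming messages must stay under one minute, captured audio is typically split into fixed-size chunks and sent one per message. A minimal self-contained sketch of the chunking step (the 4096-byte chunk size is an arbitrary choice, not an API requirement):

```java
import java.util.ArrayList;
import java.util.List;

public class AudioChunker {
    // Splits captured audio into fixed-size chunks, one per streaming message.
    static List<byte[]> chunk(byte[] audio, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < audio.length; offset += chunkSize) {
            int len = Math.min(chunkSize, audio.length - offset);
            byte[] piece = new byte[len];
            System.arraycopy(audio, offset, piece, 0, len);
            chunks.add(piece);
        }
        return chunks;
    }

    public static void main(String[] args) {
        byte[] audio = new byte[10_000];
        List<byte[]> chunks = chunk(audio, 4096);
        System.out.println(chunks.size());        // 3
        System.out.println(chunks.get(2).length); // 1808
        // Each piece would then be wrapped for the builder, e.g.:
        //   builder.setInputAudio(ByteString.copyFrom(piece))
    }
}
```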
      • getEnableDebuggingInfo

        public boolean getEnableDebuggingInfo()
         If true, `StreamingDetectIntentResponse.debugging_info` will be populated.
         
        bool enable_debugging_info = 8;
        Specified by:
        getEnableDebuggingInfo in interface StreamingDetectIntentRequestOrBuilder
        Returns:
        The enableDebuggingInfo.
      • setEnableDebuggingInfo

        public StreamingDetectIntentRequest.Builder setEnableDebuggingInfo​(boolean value)
         If true, `StreamingDetectIntentResponse.debugging_info` will be populated.
         
        bool enable_debugging_info = 8;
        Parameters:
        value - The enableDebuggingInfo to set.
        Returns:
        This builder for chaining.
      • clearEnableDebuggingInfo

        public StreamingDetectIntentRequest.Builder clearEnableDebuggingInfo()
         If true, `StreamingDetectIntentResponse.debugging_info` will be populated.
         
        bool enable_debugging_info = 8;
        Returns:
        This builder for chaining.