Interface StreamingDetectIntentRequestOrBuilder

    • Method Detail

      • getSession

        String getSession()
         Required. The name of the session the query is sent to.
         Supported formats:
         - `projects/<Project ID>/agent/sessions/<Session ID>`,
         - `projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session
           ID>`,
         - `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
           ID>/sessions/<Session ID>`,
         - `projects/<Project ID>/locations/<Location
           ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session
           ID>`,
        
         If `Location ID` is not specified, the default 'us' location is assumed. If
         `Environment ID` is not specified, the default 'draft' environment is assumed.
         If `User ID` is not specified, "-" is used. It is up to the API caller
         to choose an appropriate `Session ID` and `User ID`; these can be a random
         number or some type of user and session identifiers (preferably hashed).
         The length of the `Session ID` and `User ID` must not exceed 36 characters.
        
         For more information, see the [API interactions
         guide](https://cloud.google.com/dialogflow/docs/api-overview).
        
         Note: Always use agent versions for production traffic.
         See [Versions and
         environments](https://cloud.google.com/dialogflow/es/docs/agents-versions).
         
        string session = 1 [(.google.api.field_behavior) = REQUIRED, (.google.api.resource_reference) = { ... }];
        Returns:
        The session.
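        For illustration, a minimal sketch of populating this field on a request
        builder, assuming the generated com.google.cloud.dialogflow.v2beta1 classes
        are on the classpath; the project, location, and session identifiers are
        placeholders:

          // Hypothetical identifiers; substitute your own values.
          String project = "my-project";
          String location = "us";  // omit the locations/ segment to use the default 'us' location
          String sessionId = java.util.UUID.randomUUID().toString();  // exactly 36 characters

          String session = String.format(
              "projects/%s/locations/%s/agent/sessions/%s", project, location, sessionId);

          StreamingDetectIntentRequest.Builder request =
              StreamingDetectIntentRequest.newBuilder().setSession(session);
          // query_input is also required; see getQueryInput() below.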
      • getSessionBytes

        com.google.protobuf.ByteString getSessionBytes()
         Required. The name of the session the query is sent to.
         Supported formats:
         - `projects/<Project ID>/agent/sessions/<Session ID>`,
         - `projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session
           ID>`,
         - `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
           ID>/sessions/<Session ID>`,
         - `projects/<Project ID>/locations/<Location
           ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session
           ID>`,
        
         If `Location ID` is not specified, the default 'us' location is assumed. If
         `Environment ID` is not specified, the default 'draft' environment is assumed.
         If `User ID` is not specified, "-" is used. It is up to the API caller
         to choose an appropriate `Session ID` and `User ID`; these can be a random
         number or some type of user and session identifiers (preferably hashed).
         The length of the `Session ID` and `User ID` must not exceed 36 characters.
        
         For more information, see the [API interactions
         guide](https://cloud.google.com/dialogflow/docs/api-overview).
        
         Note: Always use agent versions for production traffic.
         See [Versions and
         environments](https://cloud.google.com/dialogflow/es/docs/agents-versions).
         
        string session = 1 [(.google.api.field_behavior) = REQUIRED, (.google.api.resource_reference) = { ... }];
        Returns:
        The bytes for session.
      • hasQueryParams

        boolean hasQueryParams()
         The parameters of this query.
         
        .google.cloud.dialogflow.v2beta1.QueryParameters query_params = 2;
        Returns:
        Whether the queryParams field is set.
      • getQueryParams

        QueryParameters getQueryParams()
         The parameters of this query.
         
        .google.cloud.dialogflow.v2beta1.QueryParameters query_params = 2;
        Returns:
        The queryParams.
      • getQueryParamsOrBuilder

        QueryParametersOrBuilder getQueryParamsOrBuilder()
         The parameters of this query.
         
        .google.cloud.dialogflow.v2beta1.QueryParameters query_params = 2;
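        A brief sketch of attaching query parameters on the sending side and of the
        has/get pattern on the reading side; the time zone value is illustrative:

          QueryParameters params = QueryParameters.newBuilder()
              .setTimeZone("America/Los_Angeles")  // illustrative value
              .build();

          StreamingDetectIntentRequest request = StreamingDetectIntentRequest.newBuilder()
              .setQueryParams(params)
              .build();

          // hasQueryParams() distinguishes an unset field from a default instance.
          if (request.hasQueryParams()) {
            QueryParameters p = request.getQueryParams();
          }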
      • hasQueryInput

        boolean hasQueryInput()
         Required. The input specification. It can be set to:
        
         1. an audio config which instructs the speech recognizer how to process
         the speech audio,
        
         2. a conversational query in the form of text, or
        
         3. an event that specifies which intent to trigger.
         
        .google.cloud.dialogflow.v2beta1.QueryInput query_input = 3 [(.google.api.field_behavior) = REQUIRED];
        Returns:
        Whether the queryInput field is set.
      • getQueryInput

        QueryInput getQueryInput()
         Required. The input specification. It can be set to:
        
         1. an audio config which instructs the speech recognizer how to process
         the speech audio,
        
         2. a conversational query in the form of text, or
        
         3. an event that specifies which intent to trigger.
         
        .google.cloud.dialogflow.v2beta1.QueryInput query_input = 3 [(.google.api.field_behavior) = REQUIRED];
        Returns:
        The queryInput.
      • getQueryInputOrBuilder

        QueryInputOrBuilder getQueryInputOrBuilder()
         Required. The input specification. It can be set to:
        
         1. an audio config which instructs the speech recognizer how to process
         the speech audio,
        
         2. a conversational query in the form of text, or
        
         3. an event that specifies which intent to trigger.
         
        .google.cloud.dialogflow.v2beta1.QueryInput query_input = 3 [(.google.api.field_behavior) = REQUIRED];
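        A sketch of the three mutually exclusive ways to populate query_input; the
        encoding, sample rate, language code, text, and event name below are
        placeholder choices:

          // 1. Audio: tells the recognizer how to process later input_audio chunks.
          QueryInput audioInput = QueryInput.newBuilder()
              .setAudioConfig(InputAudioConfig.newBuilder()
                  .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
                  .setSampleRateHertz(16000)
                  .setLanguageCode("en-US"))
              .build();

          // 2. Text: a conversational query.
          QueryInput textInput = QueryInput.newBuilder()
              .setText(TextInput.newBuilder()
                  .setText("book a room")
                  .setLanguageCode("en-US"))
              .build();

          // 3. Event: triggers a specific intent directly.
          QueryInput eventInput = QueryInput.newBuilder()
              .setEvent(EventInput.newBuilder()
                  .setName("WELCOME")
                  .setLanguageCode("en-US"))
              .build();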
      • getSingleUtterance

        @Deprecated
        boolean getSingleUtterance()
        Deprecated.
        google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.single_utterance is deprecated. See google/cloud/dialogflow/v2beta1/session.proto;l=569
         DEPRECATED. Please use
         [InputAudioConfig.single_utterance][google.cloud.dialogflow.v2beta1.InputAudioConfig.single_utterance]
         instead. If `false` (default), recognition does not cease until the client
         closes the stream. If `true`, the recognizer will detect a single spoken
         utterance in input audio. Recognition ceases when it detects the audio's
         voice has stopped or paused. In this case, once a detected intent is
         received, the client should close the stream and start a new request with a
         new stream as needed. This setting is ignored when `query_input` is a piece
         of text or an event.
         
        bool single_utterance = 4 [deprecated = true];
        Returns:
        The singleUtterance.
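        Because this flag is deprecated, a sketch of the recommended replacement:
        set single_utterance on the input audio config rather than on the request:

          QueryInput queryInput = QueryInput.newBuilder()
              .setAudioConfig(InputAudioConfig.newBuilder()
                  .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
                  .setSampleRateHertz(16000)
                  .setLanguageCode("en-US")
                  .setSingleUtterance(true))  // replaces the deprecated request-level flag
              .build();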
      • hasOutputAudioConfig

        boolean hasOutputAudioConfig()
         Instructs the speech synthesizer how to generate the output
         audio. If this field is not set and agent-level speech synthesizer is not
         configured, no output audio is generated.
         
        .google.cloud.dialogflow.v2beta1.OutputAudioConfig output_audio_config = 5;
        Returns:
        Whether the outputAudioConfig field is set.
      • getOutputAudioConfig

        OutputAudioConfig getOutputAudioConfig()
         Instructs the speech synthesizer how to generate the output
         audio. If this field is not set and agent-level speech synthesizer is not
         configured, no output audio is generated.
         
        .google.cloud.dialogflow.v2beta1.OutputAudioConfig output_audio_config = 5;
        Returns:
        The outputAudioConfig.
      • getOutputAudioConfigOrBuilder

        OutputAudioConfigOrBuilder getOutputAudioConfigOrBuilder()
         Instructs the speech synthesizer how to generate the output
         audio. If this field is not set and agent-level speech synthesizer is not
         configured, no output audio is generated.
         
        .google.cloud.dialogflow.v2beta1.OutputAudioConfig output_audio_config = 5;
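        A sketch of requesting synthesized output audio; the encoding and sample rate
        are placeholder choices:

          OutputAudioConfig outputAudio = OutputAudioConfig.newBuilder()
              .setAudioEncoding(OutputAudioEncoding.OUTPUT_AUDIO_ENCODING_LINEAR_16)
              .setSampleRateHertz(16000)
              .build();

          StreamingDetectIntentRequest request = StreamingDetectIntentRequest.newBuilder()
              .setOutputAudioConfig(outputAudio)
              .build();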
      • hasOutputAudioConfigMask

        boolean hasOutputAudioConfigMask()
         Mask for
         [output_audio_config][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.output_audio_config]
         indicating which settings in this request-level config should override
         speech synthesizer settings defined at agent-level.
        
         If unspecified or empty,
         [output_audio_config][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.output_audio_config]
         replaces the agent-level config in its entirety.
         
        .google.protobuf.FieldMask output_audio_config_mask = 7;
        Returns:
        Whether the outputAudioConfigMask field is set.
      • getOutputAudioConfigMask

        com.google.protobuf.FieldMask getOutputAudioConfigMask()
         Mask for
         [output_audio_config][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.output_audio_config]
         indicating which settings in this request-level config should override
         speech synthesizer settings defined at agent-level.
        
         If unspecified or empty,
         [output_audio_config][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.output_audio_config]
         replaces the agent-level config in its entirety.
         
        .google.protobuf.FieldMask output_audio_config_mask = 7;
        Returns:
        The outputAudioConfigMask.
      • getOutputAudioConfigMaskOrBuilder

        com.google.protobuf.FieldMaskOrBuilder getOutputAudioConfigMaskOrBuilder()
         Mask for
         [output_audio_config][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.output_audio_config]
         indicating which settings in this request-level config should override
         speech synthesizer settings defined at agent-level.
        
         If unspecified or empty,
         [output_audio_config][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.output_audio_config]
         replaces the agent-level config in its entirety.
         
        .google.protobuf.FieldMask output_audio_config_mask = 7;
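        A sketch of overriding only selected agent-level synthesizer settings; the mask
        paths name fields within OutputAudioConfig, and the specific path used below is
        an illustrative assumption (FieldMask is com.google.protobuf.FieldMask):

          StreamingDetectIntentRequest request = StreamingDetectIntentRequest.newBuilder()
              .setOutputAudioConfig(OutputAudioConfig.newBuilder()
                  .setSynthesizeSpeechConfig(SynthesizeSpeechConfig.newBuilder()
                      .setSpeakingRate(1.2)))
              // Only the masked setting overrides the agent-level config; without a
              // mask, this request-level config replaces the agent-level config entirely.
              .setOutputAudioConfigMask(FieldMask.newBuilder()
                  .addPaths("synthesize_speech_config.speaking_rate"))
              .build();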
      • getInputAudio

        com.google.protobuf.ByteString getInputAudio()
         The input audio content to be recognized. Must be sent if
         `query_input` was set to a streaming input audio config. The complete audio
         over all streaming messages must not exceed 1 minute.
         
        bytes input_audio = 6;
        Returns:
        The inputAudio.
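        A sketch of the streaming pattern, reusing the session and audioInput values
        from the earlier sketches: the first message carries the session and the audio
        config, and only later messages carry raw audio; the chunking helper is
        hypothetical, and every message is sent over the same gRPC stream:

          // First message: no audio yet, just the session and the audio config.
          StreamingDetectIntentRequest first = StreamingDetectIntentRequest.newBuilder()
              .setSession(session)
              .setQueryInput(audioInput)
              .build();

          // Follow-up messages: audio chunks, at most 1 minute of audio in total.
          byte[] chunk = readNextAudioChunk();  // hypothetical helper
          StreamingDetectIntentRequest audioMessage = StreamingDetectIntentRequest.newBuilder()
              .setInputAudio(ByteString.copyFrom(chunk))
              .build();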
      • getEnableDebuggingInfo

        boolean getEnableDebuggingInfo()
         If true, `StreamingDetectIntentResponse.debugging_info` will get populated.
         
        bool enable_debugging_info = 8;
        Returns:
        The enableDebuggingInfo.
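        A minimal sketch of turning on debugging info, reusing the earlier placeholder
        values; the corresponding StreamingDetectIntentResponse messages would then
        carry debugging_info:

          StreamingDetectIntentRequest request = StreamingDetectIntentRequest.newBuilder()
              .setSession(session)
              .setQueryInput(audioInput)
              .setEnableDebuggingInfo(true)
              .build();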