Class StreamingRecognitionResult.Builder

  • All Implemented Interfaces:
    StreamingRecognitionResultOrBuilder, com.google.protobuf.Message.Builder, com.google.protobuf.MessageLite.Builder, com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder, Cloneable
  • Enclosing class:
    StreamingRecognitionResult

    public static final class StreamingRecognitionResult.Builder
    extends com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionResult.Builder>
    implements StreamingRecognitionResultOrBuilder
     Contains a speech recognition result corresponding to a portion of the audio
     that is currently being processed or an indication that this is the end
     of the single requested utterance.
    
     While end-user audio is being processed, Dialogflow sends a series of
     results. Each result may contain a `transcript` value. A transcript
     represents a portion of the utterance. While the recognizer is processing
     audio, transcript values may be interim values or finalized values.
     Once a transcript is finalized, the `is_final` value is set to true and
     processing continues for the next transcript.
    
     If `StreamingDetectIntentRequest.query_input.audio_config.single_utterance`
     was true, and the recognizer has completed processing audio,
     the `message_type` value is set to `END_OF_SINGLE_UTTERANCE` and the
     following (last) result contains the last finalized transcript.
    
     The complete end-user utterance is determined by concatenating the
     finalized transcript values received for the series of results.
    
     In the following example, single utterance is enabled. If single
     utterance were not enabled, result 7 would not occur.
    
     ```
     Num | transcript              | message_type            | is_final
     --- | ----------------------- | ----------------------- | --------
     1   | "tube"                  | TRANSCRIPT              | false
     2   | "to be a"               | TRANSCRIPT              | false
     3   | "to be"                 | TRANSCRIPT              | false
     4   | "to be or not to be"    | TRANSCRIPT              | true
     5   | "that's"                | TRANSCRIPT              | false
     6   | "that is"               | TRANSCRIPT              | false
     7   | unset                   | END_OF_SINGLE_UTTERANCE | unset
     8   | " that is the question" | TRANSCRIPT              | true
     ```
     Concatenating the transcripts with `is_final` set to `true` (results 4
     and 8) yields the complete utterance "to be or not to be that is the
     question".
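The concatenation rule in the example above can be sketched with a plain-Java stand-in for the generated message. The `Result` record below is a hypothetical simplification for illustration, not the real protobuf class, which exposes the same information through `getTranscript()` and `getIsFinal()`:

```java
import java.util.List;
import java.util.stream.Collectors;

public class ConcatFinalized {
    // Hypothetical stand-in for StreamingRecognitionResult; the generated
    // class carries the same transcript/is_final pair per streamed result.
    record Result(String transcript, boolean isFinal) {}

    // Keep only the finalized transcripts and join them in arrival order,
    // as the class description specifies.
    static String completeUtterance(List<Result> results) {
        return results.stream()
                .filter(Result::isFinal)
                .map(Result::transcript)
                .collect(Collectors.joining());
    }

    public static void main(String[] args) {
        List<Result> series = List.of(
                new Result("tube", false),
                new Result("to be a", false),
                new Result("to be", false),
                new Result("to be or not to be", true),   // result 4, finalized
                new Result("that's", false),
                new Result("that is", false),
                new Result(" that is the question", true) // result 8, finalized
        );
        System.out.println(completeUtterance(series));
        // to be or not to be that is the question
    }
}
```

Note that the second finalized transcript begins with a leading space; the transcripts are joined as-is, with no separator inserted between them.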
     
    Protobuf type google.cloud.dialogflow.v2beta1.StreamingRecognitionResult
    • Method Detail

      • getDescriptor

        public static final com.google.protobuf.Descriptors.Descriptor getDescriptor()
      • internalGetFieldAccessorTable

        protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
        Specified by:
        internalGetFieldAccessorTable in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionResult.Builder>
      • getDescriptorForType

        public com.google.protobuf.Descriptors.Descriptor getDescriptorForType()
        Specified by:
        getDescriptorForType in interface com.google.protobuf.Message.Builder
        Specified by:
        getDescriptorForType in interface com.google.protobuf.MessageOrBuilder
        Overrides:
        getDescriptorForType in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionResult.Builder>
      • getDefaultInstanceForType

        public StreamingRecognitionResult getDefaultInstanceForType()
        Specified by:
        getDefaultInstanceForType in interface com.google.protobuf.MessageLiteOrBuilder
        Specified by:
        getDefaultInstanceForType in interface com.google.protobuf.MessageOrBuilder
      • build

        public StreamingRecognitionResult build()
        Specified by:
        build in interface com.google.protobuf.Message.Builder
        Specified by:
        build in interface com.google.protobuf.MessageLite.Builder
      • buildPartial

        public StreamingRecognitionResult buildPartial()
        Specified by:
        buildPartial in interface com.google.protobuf.Message.Builder
        Specified by:
        buildPartial in interface com.google.protobuf.MessageLite.Builder
      • isInitialized

        public final boolean isInitialized()
        Specified by:
        isInitialized in interface com.google.protobuf.MessageLiteOrBuilder
        Overrides:
        isInitialized in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionResult.Builder>
      • mergeFrom

        public StreamingRecognitionResult.Builder mergeFrom​(com.google.protobuf.CodedInputStream input,
                                                            com.google.protobuf.ExtensionRegistryLite extensionRegistry)
                                                     throws IOException
        Specified by:
        mergeFrom in interface com.google.protobuf.Message.Builder
        Specified by:
        mergeFrom in interface com.google.protobuf.MessageLite.Builder
        Overrides:
        mergeFrom in class com.google.protobuf.AbstractMessage.Builder<StreamingRecognitionResult.Builder>
        Throws:
        IOException
      • getMessageTypeValue

        public int getMessageTypeValue()
         Type of the result message.
         
        .google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.MessageType message_type = 1;
        Specified by:
        getMessageTypeValue in interface StreamingRecognitionResultOrBuilder
        Returns:
        The enum numeric value on the wire for messageType.
      • setMessageTypeValue

        public StreamingRecognitionResult.Builder setMessageTypeValue​(int value)
         Type of the result message.
         
        .google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.MessageType message_type = 1;
        Parameters:
        value - The enum numeric value on the wire for messageType to set.
        Returns:
        This builder for chaining.
      • clearMessageType

        public StreamingRecognitionResult.Builder clearMessageType()
         Type of the result message.
         
        .google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.MessageType message_type = 1;
        Returns:
        This builder for chaining.
      • getTranscript

        public String getTranscript()
         Transcript text representing the words that the user spoke.
         Populated if and only if `message_type` = `TRANSCRIPT`.
         
        string transcript = 2;
        Specified by:
        getTranscript in interface StreamingRecognitionResultOrBuilder
        Returns:
        The transcript.
      • getTranscriptBytes

        public com.google.protobuf.ByteString getTranscriptBytes()
         Transcript text representing the words that the user spoke.
         Populated if and only if `message_type` = `TRANSCRIPT`.
         
        string transcript = 2;
        Specified by:
        getTranscriptBytes in interface StreamingRecognitionResultOrBuilder
        Returns:
        The bytes for transcript.
      • setTranscript

        public StreamingRecognitionResult.Builder setTranscript​(String value)
         Transcript text representing the words that the user spoke.
         Populated if and only if `message_type` = `TRANSCRIPT`.
         
        string transcript = 2;
        Parameters:
        value - The transcript to set.
        Returns:
        This builder for chaining.
      • clearTranscript

        public StreamingRecognitionResult.Builder clearTranscript()
         Transcript text representing the words that the user spoke.
         Populated if and only if `message_type` = `TRANSCRIPT`.
         
        string transcript = 2;
        Returns:
        This builder for chaining.
      • setTranscriptBytes

        public StreamingRecognitionResult.Builder setTranscriptBytes​(com.google.protobuf.ByteString value)
         Transcript text representing the words that the user spoke.
         Populated if and only if `message_type` = `TRANSCRIPT`.
         
        string transcript = 2;
        Parameters:
        value - The bytes for transcript to set.
        Returns:
        This builder for chaining.
      • getIsFinal

        public boolean getIsFinal()
         If `false`, the `StreamingRecognitionResult` represents an
         interim result that may change. If `true`, the recognizer will not return
         any further hypotheses about this piece of the audio. May only be populated
         for `message_type` = `TRANSCRIPT`.
         
        bool is_final = 3;
        Specified by:
        getIsFinal in interface StreamingRecognitionResultOrBuilder
        Returns:
        The isFinal.
      • setIsFinal

        public StreamingRecognitionResult.Builder setIsFinal​(boolean value)
         If `false`, the `StreamingRecognitionResult` represents an
         interim result that may change. If `true`, the recognizer will not return
         any further hypotheses about this piece of the audio. May only be populated
         for `message_type` = `TRANSCRIPT`.
         
        bool is_final = 3;
        Parameters:
        value - The isFinal to set.
        Returns:
        This builder for chaining.
      • clearIsFinal

        public StreamingRecognitionResult.Builder clearIsFinal()
         If `false`, the `StreamingRecognitionResult` represents an
         interim result that may change. If `true`, the recognizer will not return
         any further hypotheses about this piece of the audio. May only be populated
         for `message_type` = `TRANSCRIPT`.
         
        bool is_final = 3;
        Returns:
        This builder for chaining.
      • getConfidence

        public float getConfidence()
         The Speech confidence between 0.0 and 1.0 for the current portion of audio.
         A higher number indicates an estimated greater likelihood that the
         recognized words are correct. The default of 0.0 is a sentinel value
         indicating that confidence was not set.
        
         This field is typically only provided if `is_final` is true and you should
         not rely on it being accurate or even set.
         
        float confidence = 4;
        Specified by:
        getConfidence in interface StreamingRecognitionResultOrBuilder
        Returns:
        The confidence.
      • setConfidence

        public StreamingRecognitionResult.Builder setConfidence​(float value)
         The Speech confidence between 0.0 and 1.0 for the current portion of audio.
         A higher number indicates an estimated greater likelihood that the
         recognized words are correct. The default of 0.0 is a sentinel value
         indicating that confidence was not set.
        
         This field is typically only provided if `is_final` is true and you should
         not rely on it being accurate or even set.
         
        float confidence = 4;
        Parameters:
        value - The confidence to set.
        Returns:
        This builder for chaining.
      • clearConfidence

        public StreamingRecognitionResult.Builder clearConfidence()
         The Speech confidence between 0.0 and 1.0 for the current portion of audio.
         A higher number indicates an estimated greater likelihood that the
         recognized words are correct. The default of 0.0 is a sentinel value
         indicating that confidence was not set.
        
         This field is typically only provided if `is_final` is true and you should
         not rely on it being accurate or even set.
         
        float confidence = 4;
        Returns:
        This builder for chaining.
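Because 0.0 is documented as a sentinel meaning "confidence was not set", callers should treat it as absent rather than as a genuine zero-confidence score. A minimal sketch of that distinction (the helper below is illustrative, not part of the API):

```java
public class ConfidenceCheck {
    // The field docs define 0.0 as a sentinel for "not set", so only
    // positive values carry an actual confidence estimate.
    static String describeConfidence(float confidence) {
        if (confidence == 0.0f) {
            return "confidence not provided";
        }
        return "confidence " + confidence;
    }

    public static void main(String[] args) {
        System.out.println(describeConfidence(0.0f));  // confidence not provided
        System.out.println(describeConfidence(0.87f)); // confidence 0.87
    }
}
```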
      • getStability

        public float getStability()
         An estimate of the likelihood that the speech recognizer will
         not change its guess about this interim recognition result:
        
         * If the value is unspecified or 0.0, Dialogflow didn't compute the
           stability. In particular, Dialogflow will only provide stability for
           `TRANSCRIPT` results with `is_final = false`.
         * Otherwise, the value is in (0.0, 1.0] where 0.0 means completely
           unstable and 1.0 means completely stable.
         
        float stability = 6;
        Specified by:
        getStability in interface StreamingRecognitionResultOrBuilder
        Returns:
        The stability.
      • setStability

        public StreamingRecognitionResult.Builder setStability​(float value)
         An estimate of the likelihood that the speech recognizer will
         not change its guess about this interim recognition result:
        
         * If the value is unspecified or 0.0, Dialogflow didn't compute the
           stability. In particular, Dialogflow will only provide stability for
           `TRANSCRIPT` results with `is_final = false`.
         * Otherwise, the value is in (0.0, 1.0] where 0.0 means completely
           unstable and 1.0 means completely stable.
         
        float stability = 6;
        Parameters:
        value - The stability to set.
        Returns:
        This builder for chaining.
      • clearStability

        public StreamingRecognitionResult.Builder clearStability()
         An estimate of the likelihood that the speech recognizer will
         not change its guess about this interim recognition result:
        
         * If the value is unspecified or 0.0, Dialogflow didn't compute the
           stability. In particular, Dialogflow will only provide stability for
           `TRANSCRIPT` results with `is_final = false`.
         * Otherwise, the value is in (0.0, 1.0] where 0.0 means completely
           unstable and 1.0 means completely stable.
         
        float stability = 6;
        Returns:
        This builder for chaining.
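Stability has the same sentinel convention as confidence: 0.0 means Dialogflow did not compute it, and informative values lie in (0.0, 1.0]. One common use is gating whether an interim transcript is worth showing to the user. The sketch below is illustrative; the 0.8 threshold is an arbitrary example choice, not an API recommendation:

```java
public class StabilityGate {
    // Returns true only when a computed stability meets the caller's
    // threshold; a 0.0 stability is the "not computed" sentinel and is
    // treated as unknown rather than as minimal stability.
    static boolean stableEnoughToDisplay(float stability, double threshold) {
        if (stability == 0.0f) {
            return false; // not computed for this result
        }
        return stability >= threshold;
    }

    public static void main(String[] args) {
        System.out.println(stableEnoughToDisplay(0.0f, 0.8));  // false
        System.out.println(stableEnoughToDisplay(0.9f, 0.8));  // true
    }
}
```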
      • getSpeechWordInfoList

        public List<SpeechWordInfo> getSpeechWordInfoList()
         Word-specific information for the words recognized by Speech in
         [transcript][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.transcript].
         Populated if and only if `message_type` = `TRANSCRIPT` and
         [InputAudioConfig.enable_word_info] is set.
         
        repeated .google.cloud.dialogflow.v2beta1.SpeechWordInfo speech_word_info = 7;
        Specified by:
        getSpeechWordInfoList in interface StreamingRecognitionResultOrBuilder
      • getSpeechWordInfoCount

        public int getSpeechWordInfoCount()
         Word-specific information for the words recognized by Speech in
         [transcript][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.transcript].
         Populated if and only if `message_type` = `TRANSCRIPT` and
         [InputAudioConfig.enable_word_info] is set.
         
        repeated .google.cloud.dialogflow.v2beta1.SpeechWordInfo speech_word_info = 7;
        Specified by:
        getSpeechWordInfoCount in interface StreamingRecognitionResultOrBuilder
      • getSpeechWordInfo

        public SpeechWordInfo getSpeechWordInfo​(int index)
         Word-specific information for the words recognized by Speech in
         [transcript][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.transcript].
         Populated if and only if `message_type` = `TRANSCRIPT` and
         [InputAudioConfig.enable_word_info] is set.
         
        repeated .google.cloud.dialogflow.v2beta1.SpeechWordInfo speech_word_info = 7;
        Specified by:
        getSpeechWordInfo in interface StreamingRecognitionResultOrBuilder
      • setSpeechWordInfo

        public StreamingRecognitionResult.Builder setSpeechWordInfo​(int index,
                                                                    SpeechWordInfo value)
         Word-specific information for the words recognized by Speech in
         [transcript][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.transcript].
         Populated if and only if `message_type` = `TRANSCRIPT` and
         [InputAudioConfig.enable_word_info] is set.
         
        repeated .google.cloud.dialogflow.v2beta1.SpeechWordInfo speech_word_info = 7;
      • setSpeechWordInfo

        public StreamingRecognitionResult.Builder setSpeechWordInfo​(int index,
                                                                    SpeechWordInfo.Builder builderForValue)
         Word-specific information for the words recognized by Speech in
         [transcript][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.transcript].
         Populated if and only if `message_type` = `TRANSCRIPT` and
         [InputAudioConfig.enable_word_info] is set.
         
        repeated .google.cloud.dialogflow.v2beta1.SpeechWordInfo speech_word_info = 7;
      • addSpeechWordInfo

        public StreamingRecognitionResult.Builder addSpeechWordInfo​(SpeechWordInfo value)
         Word-specific information for the words recognized by Speech in
         [transcript][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.transcript].
         Populated if and only if `message_type` = `TRANSCRIPT` and
         [InputAudioConfig.enable_word_info] is set.
         
        repeated .google.cloud.dialogflow.v2beta1.SpeechWordInfo speech_word_info = 7;
      • addSpeechWordInfo

        public StreamingRecognitionResult.Builder addSpeechWordInfo​(int index,
                                                                    SpeechWordInfo value)
         Word-specific information for the words recognized by Speech in
         [transcript][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.transcript].
         Populated if and only if `message_type` = `TRANSCRIPT` and
         [InputAudioConfig.enable_word_info] is set.
         
        repeated .google.cloud.dialogflow.v2beta1.SpeechWordInfo speech_word_info = 7;
      • addSpeechWordInfo

        public StreamingRecognitionResult.Builder addSpeechWordInfo​(SpeechWordInfo.Builder builderForValue)
         Word-specific information for the words recognized by Speech in
         [transcript][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.transcript].
         Populated if and only if `message_type` = `TRANSCRIPT` and
         [InputAudioConfig.enable_word_info] is set.
         
        repeated .google.cloud.dialogflow.v2beta1.SpeechWordInfo speech_word_info = 7;
      • addSpeechWordInfo

        public StreamingRecognitionResult.Builder addSpeechWordInfo​(int index,
                                                                    SpeechWordInfo.Builder builderForValue)
         Word-specific information for the words recognized by Speech in
         [transcript][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.transcript].
         Populated if and only if `message_type` = `TRANSCRIPT` and
         [InputAudioConfig.enable_word_info] is set.
         
        repeated .google.cloud.dialogflow.v2beta1.SpeechWordInfo speech_word_info = 7;
      • addAllSpeechWordInfo

        public StreamingRecognitionResult.Builder addAllSpeechWordInfo​(Iterable<? extends SpeechWordInfo> values)
         Word-specific information for the words recognized by Speech in
         [transcript][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.transcript].
         Populated if and only if `message_type` = `TRANSCRIPT` and
         [InputAudioConfig.enable_word_info] is set.
         
        repeated .google.cloud.dialogflow.v2beta1.SpeechWordInfo speech_word_info = 7;
      • clearSpeechWordInfo

        public StreamingRecognitionResult.Builder clearSpeechWordInfo()
         Word-specific information for the words recognized by Speech in
         [transcript][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.transcript].
         Populated if and only if `message_type` = `TRANSCRIPT` and
         [InputAudioConfig.enable_word_info] is set.
         
        repeated .google.cloud.dialogflow.v2beta1.SpeechWordInfo speech_word_info = 7;
      • removeSpeechWordInfo

        public StreamingRecognitionResult.Builder removeSpeechWordInfo​(int index)
         Word-specific information for the words recognized by Speech in
         [transcript][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.transcript].
         Populated if and only if `message_type` = `TRANSCRIPT` and
         [InputAudioConfig.enable_word_info] is set.
         
        repeated .google.cloud.dialogflow.v2beta1.SpeechWordInfo speech_word_info = 7;
      • getSpeechWordInfoBuilder

        public SpeechWordInfo.Builder getSpeechWordInfoBuilder​(int index)
         Word-specific information for the words recognized by Speech in
         [transcript][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.transcript].
         Populated if and only if `message_type` = `TRANSCRIPT` and
         [InputAudioConfig.enable_word_info] is set.
         
        repeated .google.cloud.dialogflow.v2beta1.SpeechWordInfo speech_word_info = 7;
      • getSpeechWordInfoOrBuilder

        public SpeechWordInfoOrBuilder getSpeechWordInfoOrBuilder​(int index)
         Word-specific information for the words recognized by Speech in
         [transcript][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.transcript].
         Populated if and only if `message_type` = `TRANSCRIPT` and
         [InputAudioConfig.enable_word_info] is set.
         
        repeated .google.cloud.dialogflow.v2beta1.SpeechWordInfo speech_word_info = 7;
        Specified by:
        getSpeechWordInfoOrBuilder in interface StreamingRecognitionResultOrBuilder
      • getSpeechWordInfoOrBuilderList

        public List<? extends SpeechWordInfoOrBuilder> getSpeechWordInfoOrBuilderList()
         Word-specific information for the words recognized by Speech in
         [transcript][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.transcript].
         Populated if and only if `message_type` = `TRANSCRIPT` and
         [InputAudioConfig.enable_word_info] is set.
         
        repeated .google.cloud.dialogflow.v2beta1.SpeechWordInfo speech_word_info = 7;
        Specified by:
        getSpeechWordInfoOrBuilderList in interface StreamingRecognitionResultOrBuilder
      • addSpeechWordInfoBuilder

        public SpeechWordInfo.Builder addSpeechWordInfoBuilder()
         Word-specific information for the words recognized by Speech in
         [transcript][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.transcript].
         Populated if and only if `message_type` = `TRANSCRIPT` and
         [InputAudioConfig.enable_word_info] is set.
         
        repeated .google.cloud.dialogflow.v2beta1.SpeechWordInfo speech_word_info = 7;
      • addSpeechWordInfoBuilder

        public SpeechWordInfo.Builder addSpeechWordInfoBuilder​(int index)
         Word-specific information for the words recognized by Speech in
         [transcript][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.transcript].
         Populated if and only if `message_type` = `TRANSCRIPT` and
         [InputAudioConfig.enable_word_info] is set.
         
        repeated .google.cloud.dialogflow.v2beta1.SpeechWordInfo speech_word_info = 7;
      • getSpeechWordInfoBuilderList

        public List<SpeechWordInfo.Builder> getSpeechWordInfoBuilderList()
         Word-specific information for the words recognized by Speech in
         [transcript][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.transcript].
         Populated if and only if `message_type` = `TRANSCRIPT` and
         [InputAudioConfig.enable_word_info] is set.
         
        repeated .google.cloud.dialogflow.v2beta1.SpeechWordInfo speech_word_info = 7;
      • hasSpeechEndOffset

        public boolean hasSpeechEndOffset()
         Time offset of the end of this Speech recognition result relative to the
         beginning of the audio. Only populated for `message_type` = `TRANSCRIPT`.
         
        .google.protobuf.Duration speech_end_offset = 8;
        Specified by:
        hasSpeechEndOffset in interface StreamingRecognitionResultOrBuilder
        Returns:
        Whether the speechEndOffset field is set.
      • getSpeechEndOffset

        public com.google.protobuf.Duration getSpeechEndOffset()
         Time offset of the end of this Speech recognition result relative to the
         beginning of the audio. Only populated for `message_type` = `TRANSCRIPT`.
         
        .google.protobuf.Duration speech_end_offset = 8;
        Specified by:
        getSpeechEndOffset in interface StreamingRecognitionResultOrBuilder
        Returns:
        The speechEndOffset.
      • setSpeechEndOffset

        public StreamingRecognitionResult.Builder setSpeechEndOffset​(com.google.protobuf.Duration value)
         Time offset of the end of this Speech recognition result relative to the
         beginning of the audio. Only populated for `message_type` = `TRANSCRIPT`.
         
        .google.protobuf.Duration speech_end_offset = 8;
      • setSpeechEndOffset

        public StreamingRecognitionResult.Builder setSpeechEndOffset​(com.google.protobuf.Duration.Builder builderForValue)
         Time offset of the end of this Speech recognition result relative to the
         beginning of the audio. Only populated for `message_type` = `TRANSCRIPT`.
         
        .google.protobuf.Duration speech_end_offset = 8;
      • mergeSpeechEndOffset

        public StreamingRecognitionResult.Builder mergeSpeechEndOffset​(com.google.protobuf.Duration value)
         Time offset of the end of this Speech recognition result relative to the
         beginning of the audio. Only populated for `message_type` = `TRANSCRIPT`.
         
        .google.protobuf.Duration speech_end_offset = 8;
      • clearSpeechEndOffset

        public StreamingRecognitionResult.Builder clearSpeechEndOffset()
         Time offset of the end of this Speech recognition result relative to the
         beginning of the audio. Only populated for `message_type` = `TRANSCRIPT`.
         
        .google.protobuf.Duration speech_end_offset = 8;
      • getSpeechEndOffsetBuilder

        public com.google.protobuf.Duration.Builder getSpeechEndOffsetBuilder()
         Time offset of the end of this Speech recognition result relative to the
         beginning of the audio. Only populated for `message_type` = `TRANSCRIPT`.
         
        .google.protobuf.Duration speech_end_offset = 8;
      • getSpeechEndOffsetOrBuilder

        public com.google.protobuf.DurationOrBuilder getSpeechEndOffsetOrBuilder()
         Time offset of the end of this Speech recognition result relative to the
         beginning of the audio. Only populated for `message_type` = `TRANSCRIPT`.
         
        .google.protobuf.Duration speech_end_offset = 8;
        Specified by:
        getSpeechEndOffsetOrBuilder in interface StreamingRecognitionResultOrBuilder
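The offset is a `google.protobuf.Duration`, which stores whole seconds plus a nanosecond remainder. Converting it to a single millisecond count is a common first step; the `Duration` record below is a stand-in for the protobuf type, used so the sketch stays self-contained:

```java
public class OffsetMillis {
    // Minimal stand-in for google.protobuf.Duration, which exposes the
    // same two components via getSeconds() and getNanos().
    record Duration(long seconds, int nanos) {}

    // Milliseconds from the start of the audio to the end of speech.
    static long toMillis(Duration d) {
        return d.seconds() * 1_000L + d.nanos() / 1_000_000L;
    }

    public static void main(String[] args) {
        // e.g. speech ended 2.35 s into the audio
        System.out.println(toMillis(new Duration(2, 350_000_000))); // 2350
    }
}
```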
      • getLanguageCodeBytes

        public com.google.protobuf.ByteString getLanguageCodeBytes()
         Detected language code for the transcript.
         
        string language_code = 10;
        Specified by:
        getLanguageCodeBytes in interface StreamingRecognitionResultOrBuilder
        Returns:
        The bytes for languageCode.
      • setLanguageCode

        public StreamingRecognitionResult.Builder setLanguageCode​(String value)
         Detected language code for the transcript.
         
        string language_code = 10;
        Parameters:
        value - The languageCode to set.
        Returns:
        This builder for chaining.
      • clearLanguageCode

        public StreamingRecognitionResult.Builder clearLanguageCode()
         Detected language code for the transcript.
         
        string language_code = 10;
        Returns:
        This builder for chaining.
      • setLanguageCodeBytes

        public StreamingRecognitionResult.Builder setLanguageCodeBytes​(com.google.protobuf.ByteString value)
         Detected language code for the transcript.
         
        string language_code = 10;
        Parameters:
        value - The bytes for languageCode to set.
        Returns:
        This builder for chaining.
      • hasDtmfDigits

        public boolean hasDtmfDigits()
         DTMF digits. Populated if and only if `message_type` = `DTMF_DIGITS`.
         
        .google.cloud.dialogflow.v2beta1.TelephonyDtmfEvents dtmf_digits = 5;
        Specified by:
        hasDtmfDigits in interface StreamingRecognitionResultOrBuilder
        Returns:
        Whether the dtmfDigits field is set.
      • clearDtmfDigits

        public StreamingRecognitionResult.Builder clearDtmfDigits()
         DTMF digits. Populated if and only if `message_type` = `DTMF_DIGITS`.
         
        .google.cloud.dialogflow.v2beta1.TelephonyDtmfEvents dtmf_digits = 5;
      • getDtmfDigitsBuilder

        public TelephonyDtmfEvents.Builder getDtmfDigitsBuilder()
         DTMF digits. Populated if and only if `message_type` = `DTMF_DIGITS`.
         
        .google.cloud.dialogflow.v2beta1.TelephonyDtmfEvents dtmf_digits = 5;
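Since `transcript` is populated only for `TRANSCRIPT` results and `dtmf_digits` only for `DTMF_DIGITS` results, consumers should dispatch on `message_type` before reading either field. A sketch of that dispatch, using a hypothetical stand-in for the generated `MessageType` enum:

```java
public class ResultDispatch {
    // Stand-in for StreamingRecognitionResult.MessageType; the real enum
    // is generated from the proto and includes the same values.
    enum MessageType { TRANSCRIPT, END_OF_SINGLE_UTTERANCE, DTMF_DIGITS }

    // Each message type implies which fields are meaningful to read.
    static String describe(MessageType type) {
        return switch (type) {
            case TRANSCRIPT -> "read transcript / is_final";
            case END_OF_SINGLE_UTTERANCE -> "stop sending audio";
            case DTMF_DIGITS -> "read dtmf_digits";
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(MessageType.DTMF_DIGITS)); // read dtmf_digits
    }
}
```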
      • setUnknownFields

        public final StreamingRecognitionResult.Builder setUnknownFields​(com.google.protobuf.UnknownFieldSet unknownFields)
        Specified by:
        setUnknownFields in interface com.google.protobuf.Message.Builder
        Overrides:
        setUnknownFields in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionResult.Builder>
      • mergeUnknownFields

        public final StreamingRecognitionResult.Builder mergeUnknownFields​(com.google.protobuf.UnknownFieldSet unknownFields)
        Specified by:
        mergeUnknownFields in interface com.google.protobuf.Message.Builder
        Overrides:
        mergeUnknownFields in class com.google.protobuf.GeneratedMessageV3.Builder<StreamingRecognitionResult.Builder>