Struct google_api_proto::google::cloud::dialogflow::v2beta1::InputAudioConfig
pub struct InputAudioConfig {
pub audio_encoding: i32,
pub sample_rate_hertz: i32,
pub language_code: String,
pub enable_word_info: bool,
pub phrase_hints: Vec<String>,
pub speech_contexts: Vec<SpeechContext>,
pub model: String,
pub model_variant: i32,
pub single_utterance: bool,
pub disable_no_speech_recognized_event: bool,
pub barge_in_config: Option<BargeInConfig>,
pub enable_automatic_punctuation: bool,
pub default_no_speech_timeout: Option<Duration>,
pub opt_out_conformer_model_migration: bool,
}
Instructs the speech recognizer on how to process the audio content.
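For example, a config for a 16 kHz LINEAR16 stream in US English might be assembled as below. This is a minimal sketch: the encoding, sample rate, and language code are illustrative values, not defaults of this crate.

use google_api_proto::google::cloud::dialogflow::v2beta1::{
    AudioEncoding, InputAudioConfig,
};

// Enum-typed fields are stored as i32 in the generated struct, so the
// AudioEncoding variant is cast; remaining fields take prost defaults.
let config = InputAudioConfig {
    audio_encoding: AudioEncoding::Linear16 as i32,
    sample_rate_hertz: 16_000,
    language_code: "en-US".to_string(),
    single_utterance: true,
    ..Default::default()
};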
Fields
audio_encoding: i32
Required. Audio encoding of the audio content to process.
sample_rate_hertz: i32
Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.
language_code: String
Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
enable_word_info: bool
If true, Dialogflow returns [SpeechWordInfo][google.cloud.dialogflow.v2beta1.SpeechWordInfo] in [StreamingRecognitionResult][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult] with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn’t return any word-level information.
phrase_hints: Vec<String>
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.
See the Cloud Speech documentation for more details.
This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
speech_contexts: Vec<SpeechContext>
Context information to assist speech recognition.
See the Cloud Speech documentation for more details.
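A sketch of biasing recognition toward domain vocabulary, assuming SpeechContext from the same module with its phrases and boost fields; the phrase list and boost value are illustrative:

use google_api_proto::google::cloud::dialogflow::v2beta1::{InputAudioConfig, SpeechContext};

// Hint the recognizer toward likely domain terms; a positive boost
// raises the likelihood of the listed phrases being recognized.
let config = InputAudioConfig {
    speech_contexts: vec![SpeechContext {
        phrases: vec!["Dialogflow".to_string(), "fulfillment".to_string()],
        boost: 10.0,
    }],
    ..Default::default()
};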
model: String
Optional. Which Speech model to select for the given request. For more information, see Speech models.
model_variant: i32
Which variant of the [Speech model][google.cloud.dialogflow.v2beta1.InputAudioConfig.model] to use.
single_utterance: bool
If false (default), recognition does not cease until the client closes the stream.
If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio’s voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed.
Note: This setting is relevant only for streaming methods.
Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.
disable_no_speech_recognized_event: bool
Only used in [Participants.AnalyzeContent][google.cloud.dialogflow.v2beta1.Participants.AnalyzeContent] and [Participants.StreamingAnalyzeContent][google.cloud.dialogflow.v2beta1.Participants.StreamingAnalyzeContent]. If false and recognition doesn’t return any result, trigger NO_SPEECH_RECOGNIZED event to Dialogflow agent.
barge_in_config: Option<BargeInConfig>
Configuration of barge-in behavior during the streaming of input audio.
enable_automatic_punctuation: bool
Enable automatic punctuation option at the speech backend.
default_no_speech_timeout: Option<Duration>
If set, use this no-speech timeout when the agent does not provide a no-speech timeout itself.
opt_out_conformer_model_migration: bool
If true, the request will opt out for STT conformer model migration. This field will be deprecated once force migration takes place in June 2024. Please refer to Dialogflow ES Speech model migration.
Implementations
impl InputAudioConfig
pub fn audio_encoding(&self) -> AudioEncoding
Returns the enum value of audio_encoding, or the default if the field is set to an invalid enum value.
pub fn set_audio_encoding(&mut self, value: AudioEncoding)
Sets audio_encoding to the provided enum value.
pub fn model_variant(&self) -> SpeechModelVariant
Returns the enum value of model_variant, or the default if the field is set to an invalid enum value.
pub fn set_model_variant(&mut self, value: SpeechModelVariant)
Sets model_variant to the provided enum value.
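These accessors spare callers from handling the raw i32 representation directly. A short sketch; the enum variant names shown follow prost’s usual renaming of the proto values and are assumptions about this crate’s generated code:

use google_api_proto::google::cloud::dialogflow::v2beta1::{
    AudioEncoding, InputAudioConfig, SpeechModelVariant,
};

let mut config = InputAudioConfig::default();
config.set_audio_encoding(AudioEncoding::Flac);
config.set_model_variant(SpeechModelVariant::UseEnhanced);

// The getters decode the stored i32 back into the enum, falling back
// to the default variant if the raw value is out of range.
assert_eq!(config.audio_encoding(), AudioEncoding::Flac);
assert_eq!(config.model_variant(), SpeechModelVariant::UseEnhanced);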
Trait Implementations
impl Clone for InputAudioConfig
fn clone(&self) -> InputAudioConfig
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for InputAudioConfig
impl Default for InputAudioConfig
impl Message for InputAudioConfig
fn encoded_len(&self) -> usize
fn encode(&self, buf: &mut impl BufMut) -> Result<(), EncodeError> where Self: Sized
fn encode_to_vec(&self) -> Vec<u8> where Self: Sized
fn encode_length_delimited(&self, buf: &mut impl BufMut) -> Result<(), EncodeError> where Self: Sized
fn encode_length_delimited_to_vec(&self) -> Vec<u8> where Self: Sized
fn decode(buf: impl Buf) -> Result<Self, DecodeError> where Self: Default
fn decode_length_delimited(buf: impl Buf) -> Result<Self, DecodeError> where Self: Default
fn merge(&mut self, buf: impl Buf) -> Result<(), DecodeError> where Self: Sized
Decodes an instance of the message from a buffer, and merges it into self.
fn merge_length_delimited(&mut self, buf: impl Buf) -> Result<(), DecodeError> where Self: Sized
Decodes a length-delimited instance of the message from buffer, and merges it into self.
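Because InputAudioConfig implements prost’s Message trait, it can be round-tripped through the protobuf wire format; a minimal sketch with illustrative field values:

use google_api_proto::google::cloud::dialogflow::v2beta1::InputAudioConfig;
use prost::Message;

let config = InputAudioConfig {
    sample_rate_hertz: 16_000,
    language_code: "en-US".to_string(),
    ..Default::default()
};

// Serialize to wire format, then decode the bytes back; &[u8]
// implements Buf, so a slice can be passed to decode.
let bytes = config.encode_to_vec();
let decoded = InputAudioConfig::decode(bytes.as_slice()).expect("valid wire data");
assert_eq!(config, decoded);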
impl PartialEq for InputAudioConfig
fn eq(&self, other: &InputAudioConfig) -> bool
Tests for self and other values to be equal, and is used by ==.
impl StructuralPartialEq for InputAudioConfig
Auto Trait Implementations
impl Freeze for InputAudioConfig
impl RefUnwindSafe for InputAudioConfig
impl Send for InputAudioConfig
impl Sync for InputAudioConfig
impl Unpin for InputAudioConfig
impl UnwindSafe for InputAudioConfig
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wrap the input message T in a tonic::Request.
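This blanket impl lets any message be handed to a tonic-generated client either bare or pre-wrapped. A sketch of the mechanics; note that InputAudioConfig is normally nested inside a query input rather than sent as a top-level request, so the wrapping here is purely illustrative:

use google_api_proto::google::cloud::dialogflow::v2beta1::InputAudioConfig;
use tonic::IntoRequest;

// Wraps the message in a tonic::Request with empty metadata; generated
// client methods that accept `impl IntoRequest<T>` do this internally.
let request: tonic::Request<InputAudioConfig> = InputAudioConfig::default().into_request();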