Struct google_api_proto::google::cloud::mediatranslation::v1alpha1::StreamingTranslateSpeechConfig
pub struct StreamingTranslateSpeechConfig {
    pub audio_config: Option<TranslateSpeechConfig>,
    pub single_utterance: bool,
    pub stability: String,
    pub translation_mode: String,
    pub disable_interim_results: bool,
}
Config used for streaming translation.
Fields

audio_config: Option<TranslateSpeechConfig>

Required. The common config for all the following audio contents.
single_utterance: bool

Optional. If `false` or omitted, the system performs continuous translation (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. It may return multiple `StreamingTranslateSpeechResult`s with the `is_final` flag set to `true`.

If `true`, the speech translator detects a single spoken utterance. When it detects that the user has paused or stopped speaking, it returns an `END_OF_SINGLE_UTTERANCE` event and ceases translation. When the client receives the `END_OF_SINGLE_UTTERANCE` event, it should stop sending requests, but should keep receiving the remaining responses until the stream is terminated. To construct the complete sentence in a streaming way, override the previous text if its `is_final` was `false`, or append to it if its `is_final` was `true`.
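One reading of that override-or-append rule can be sketched as follows. `TextResult` and `SentenceBuilder` are simplified stand-ins for illustration, not types from this crate:

```rust
// Simplified stand-in for a streaming text result (illustrative only).
struct TextResult {
    translation: String,
    is_final: bool,
}

/// Accumulates streaming results: a final result is committed to the
/// sentence, while an interim result merely overrides the pending tail.
struct SentenceBuilder {
    committed: String,
    pending: String,
}

impl SentenceBuilder {
    fn new() -> Self {
        Self { committed: String::new(), pending: String::new() }
    }

    fn push(&mut self, result: &TextResult) {
        if result.is_final {
            // Final result: append it to the committed sentence.
            self.committed.push_str(&result.translation);
            self.pending.clear();
        } else {
            // Interim result: override the previous interim tail.
            self.pending = result.translation.clone();
        }
    }

    fn current(&self) -> String {
        format!("{}{}", self.committed, self.pending)
    }
}

fn main() {
    let mut b = SentenceBuilder::new();
    for (text, is_final) in [("Hello", false), ("Hello wor", false), ("Hello world.", true)] {
        b.push(&TextResult { translation: text.to_string(), is_final });
    }
    assert_eq!(b.current(), "Hello world.");
    println!("{}", b.current());
}
```

Interim results replace one another, so the displayed text can shrink or change until a final result arrives and is fixed in place.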
stability: String

Optional. Stability control for the media translation text. The value must be "LOW", "MEDIUM", or "HIGH". It applies to text/text_and_audio translation only; audio translation mode supports only "HIGH" stability, and low/medium stability raises an argument error. An empty string (the default) is treated as "HIGH" in audio translation mode and as "LOW" in the other modes. Note that stability and speed trade off against each other.

- "LOW": the translation service starts translating right after receiving a recognition response. This is the fastest mode.
- "MEDIUM": the translation service checks whether the recognition response is stable enough, and only translates recognition responses that are unlikely to change later.
- "HIGH": the translation service waits for more stable recognition responses before translating, and later recognition responses cannot modify earlier ones, which may impact quality in some situations. "HIGH" stability generates "final" responses more frequently.
translation_mode: String

Optional. Translation mode; the value must be "text", "audio", or "text_and_audio". An empty string (the default) is treated as "text".

- "text": the response is a text translation. Text translation has an "is_final" field; see TextTranslationResult for the detailed definition.
- "audio": the response is an audio translation. Audio translation has no "is_final" field, which means each audio translation response is stable and will not be changed by a later response. The "audio" mode can only be used with "HIGH" stability.
- "text_and_audio": the response has a text translation; when "is_final" is true, it also carries the corresponding audio translation. When "is_final" is false, the audio_translation field is empty.
disable_interim_results: bool

Optional. If true, only "final" responses are returned; otherwise all responses are returned. Defaults to false. It can only be set to true with "HIGH" stability mode.
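The constraints documented for these fields could be checked client-side before opening the stream. The sketch below is a hypothetical validator over a local mirror of the struct (not the generated type itself), encoding the defaulting and the two "HIGH"-only rules described above:

```rust
// Local mirror of the config fields, for illustration only.
#[derive(Default)]
struct StreamingConfig {
    stability: String,        // "", "LOW", "MEDIUM", "HIGH"
    translation_mode: String, // "", "text", "audio", "text_and_audio"
    disable_interim_results: bool,
}

// Hypothetical client-side validation of the documented constraints:
// "audio" mode supports only "HIGH" stability, and
// disable_interim_results=true requires "HIGH" stability.
fn validate(cfg: &StreamingConfig) -> Result<(), String> {
    // Empty stability defaults to "HIGH" in audio mode, "LOW" otherwise.
    let stability = if cfg.stability.is_empty() {
        if cfg.translation_mode == "audio" { "HIGH" } else { "LOW" }
    } else {
        cfg.stability.as_str()
    };
    if cfg.translation_mode == "audio" && stability != "HIGH" {
        return Err("audio mode supports only HIGH stability".to_string());
    }
    if cfg.disable_interim_results && stability != "HIGH" {
        return Err("disable_interim_results requires HIGH stability".to_string());
    }
    Ok(())
}

fn main() {
    // Audio mode with empty stability defaults to HIGH, so it passes.
    let audio = StreamingConfig {
        translation_mode: "audio".to_string(),
        ..Default::default()
    };
    assert!(validate(&audio).is_ok());

    // Text mode defaults to LOW, so disabling interim results is rejected.
    let bad = StreamingConfig {
        disable_interim_results: true,
        ..Default::default()
    };
    assert!(validate(&bad).is_err());
}
```

The server enforces these rules itself (returning an argument error); a local check like this only surfaces the mistake before any audio is sent.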
Trait Implementations

impl Clone for StreamingTranslateSpeechConfig

fn clone(&self) -> StreamingTranslateSpeechConfig
fn clone_from(&mut self, source: &Self)
    Performs copy-assignment from `source`.

impl Message for StreamingTranslateSpeechConfig

fn encoded_len(&self) -> usize
fn encode(&self, buf: &mut impl BufMut) -> Result<(), EncodeError> where Self: Sized
fn encode_to_vec(&self) -> Vec<u8> where Self: Sized
fn encode_length_delimited(&self, buf: &mut impl BufMut) -> Result<(), EncodeError> where Self: Sized
fn encode_length_delimited_to_vec(&self) -> Vec<u8> where Self: Sized
fn decode(buf: impl Buf) -> Result<Self, DecodeError> where Self: Default
fn decode_length_delimited(buf: impl Buf) -> Result<Self, DecodeError> where Self: Default
fn merge(&mut self, buf: impl Buf) -> Result<(), DecodeError> where Self: Sized
    Decodes an instance of the message from a buffer, and merges it into `self`.
fn merge_length_delimited(&mut self, buf: impl Buf) -> Result<(), DecodeError> where Self: Sized
    Decodes a length-delimited instance of the message from the buffer, and merges it into `self`.
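For intuition about the bytes `encode` produces, here is a hand-rolled encoding of a single protobuf bool field. Treating `single_utterance` as field number 2 is an assumption for illustration (check the generated `.proto` for the real field numbers); the tag and varint math is standard protobuf wire format:

```rust
/// Encodes one protobuf bool field as a tag byte plus a varint value,
/// mirroring the bytes a `Message::encode` call emits for a set bool.
/// Only handles field numbers below 16, which fit in a single tag byte.
fn encode_bool_field(field_number: u32, value: bool) -> Vec<u8> {
    assert!(field_number < 16, "single-byte tag only");
    // Protobuf tag: (field_number << 3) | wire_type; wire type 0 = varint.
    let tag = (field_number << 3) as u8;
    vec![tag, if value { 1 } else { 0 }]
}

fn main() {
    // Assumed field number 2 for `single_utterance` (illustrative only).
    let bytes = encode_bool_field(2, true);
    assert_eq!(bytes, vec![0x10, 0x01]);
    println!("{:02x?}", bytes);
}
```

Note that prost skips fields equal to their proto3 default, so a `false` bool would contribute no bytes to the encoded message at all.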
impl PartialEq for StreamingTranslateSpeechConfig

fn eq(&self, other: &StreamingTranslateSpeechConfig) -> bool
    Tests for `self` and `other` values to be equal, and is used by `==`.

impl StructuralPartialEq for StreamingTranslateSpeechConfig
Auto Trait Implementations
impl Freeze for StreamingTranslateSpeechConfig
impl RefUnwindSafe for StreamingTranslateSpeechConfig
impl Send for StreamingTranslateSpeechConfig
impl Sync for StreamingTranslateSpeechConfig
impl Unpin for StreamingTranslateSpeechConfig
impl UnwindSafe for StreamingTranslateSpeechConfig
Blanket Implementations

impl<T> BorrowMut<T> for T where T: ?Sized

fn borrow_mut(&mut self) -> &mut T

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>

impl<T> IntoRequest<T> for T

fn into_request(self) -> Request<T>
    Wraps the input message `T` in a `tonic::Request`.