Struct google_api_proto::google::cloud::dialogflow::v2::StreamingAnalyzeContentRequest
pub struct StreamingAnalyzeContentRequest {
pub participant: String,
pub reply_audio_config: Option<OutputAudioConfig>,
pub query_params: Option<QueryParameters>,
pub assist_query_params: Option<AssistQueryParameters>,
pub cx_parameters: Option<Struct>,
pub enable_extended_streaming: bool,
pub enable_partial_automated_agent_reply: bool,
pub enable_debugging_info: bool,
pub config: Option<Config>,
pub input: Option<Input>,
}
The top-level message sent by the client to the [Participants.StreamingAnalyzeContent][google.cloud.dialogflow.v2.Participants.StreamingAnalyzeContent] method.
Multiple request messages should be sent in order:
- The first message must contain [participant][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.participant], [config][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.config] and optionally [query_params][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.query_params]. If you want to receive an audio response, it should also contain [reply_audio_config][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.reply_audio_config]. The message must not contain [input][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.input].
- If [config][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.config] in the first message was set to [audio_config][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.audio_config], all subsequent messages must contain [input_audio][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.input_audio] to continue with Speech recognition. However, if you switch to text input after Speech recognition has already started, note that:
  - Dialogflow will bill you for the audio so far.
  - Dialogflow discards all Speech recognition results in favor of the text input.
- If [StreamingAnalyzeContentRequest.config][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.config] in the first message was set to [StreamingAnalyzeContentRequest.text_config][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.text_config], then the second message must contain only [input_text][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.input_text]. Moreover, you must not send more than two messages.
After you have sent all input, you must half-close or abort the request stream.
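As a rough sketch of that ordering: the snippet below builds a first message carrying participant and config (audio, in this case) and a follow-up message carrying only input_audio. It assumes the prost-generated oneof modules streaming_analyze_content_request::{Config, Input}; the participant name, language, and sample rate are placeholders.

use google_api_proto::google::cloud::dialogflow::v2::{
    streaming_analyze_content_request::{Config, Input},
    AudioEncoding, InputAudioConfig, StreamingAnalyzeContentRequest,
};

// First message: participant + config, and no input.
// (reply_audio_config could also be set here to receive audio responses.)
fn first_message(participant: String) -> StreamingAnalyzeContentRequest {
    StreamingAnalyzeContentRequest {
        participant,
        config: Some(Config::AudioConfig(InputAudioConfig {
            audio_encoding: AudioEncoding::Linear16 as i32,
            sample_rate_hertz: 16_000,
            language_code: "en-US".to_string(),
            ..Default::default()
        })),
        ..Default::default()
    }
}

// Subsequent messages: only the audio payload.
fn audio_message(chunk: Vec<u8>) -> StreamingAnalyzeContentRequest {
    StreamingAnalyzeContentRequest {
        input: Some(Input::InputAudio(chunk.into())),
        ..Default::default()
    }
}

These messages would then be sent as a request stream to the Participants streaming RPC (when the tonic clients are enabled), half-closing the stream once all input has been sent.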
Fields
participant: String
Required. The name of the participant this text comes from.
Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/participants/<Participant ID>.
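For illustration only, a hypothetical helper that assembles this resource name from its four IDs (all placeholders):

fn participant_name(project: &str, location: &str, conversation: &str, participant: &str) -> String {
    // Matches the documented format above.
    format!("projects/{project}/locations/{location}/conversations/{conversation}/participants/{participant}")
}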
reply_audio_config: Option<OutputAudioConfig>
Speech synthesis configuration. The speech synthesis settings for a virtual agent that may be configured for the associated conversation profile are not used when calling StreamingAnalyzeContent. If this configuration is not supplied, speech synthesis is disabled.
query_params: Option<QueryParameters>
Parameters for a Dialogflow virtual-agent query.
assist_query_params: Option<AssistQueryParameters>
Parameters for a human assist query.
cx_parameters: Option<Struct>
Additional parameters to be put into Dialogflow CX session parameters. To remove a parameter from the session, clients should explicitly set the parameter value to null.
Note: this field should only be used if you are connecting to a Dialogflow CX agent.
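A minimal sketch of building such a value, assuming Struct here is the re-exported prost_types::Struct; the parameter names and values are placeholders. The explicit null removes one parameter from the CX session while another is set:

use std::collections::BTreeMap;
use prost_types::{value::Kind, Struct, Value};

fn cx_parameters() -> Struct {
    let mut fields = BTreeMap::new();
    // Explicit null: remove the parameter from the CX session.
    fields.insert(
        "obsolete-param".to_string(),
        Value { kind: Some(Kind::NullValue(0)) },
    );
    // Regular value: set (or overwrite) a session parameter.
    fields.insert(
        "customer-tier".to_string(),
        Value { kind: Some(Kind::StringValue("gold".to_string())) },
    );
    Struct { fields }
}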
enable_extended_streaming: bool
Optional. Enable full bidirectional streaming. You can keep streaming the audio until timeout, and there’s no need to half-close the stream to get the response.
Restrictions:
- Timeout: 3 mins.
- Audio Encoding: only supports [AudioEncoding.AUDIO_ENCODING_LINEAR_16][google.cloud.dialogflow.v2.AudioEncoding.AUDIO_ENCODING_LINEAR_16] and [AudioEncoding.AUDIO_ENCODING_MULAW][google.cloud.dialogflow.v2.AudioEncoding.AUDIO_ENCODING_MULAW]
- Lifecycle: the conversation should be in the Assist Stage; see [Conversation.CreateConversation][] for more information.
An InvalidArgument error will be returned if any of the restriction checks fail.
You can find more details at https://cloud.google.com/agent-assist/docs/extended-streaming
enable_partial_automated_agent_reply: bool
Enable partial responses from the virtual agent. If this flag is not enabled, the response stream still contains only one final response, even if some Fulfillments in the Dialogflow virtual agent have been configured to return partial responses.
enable_debugging_info: bool
If true, StreamingAnalyzeContentResponse.debugging_info will get populated.
config: Option<Config>
The input config.
input: Option<Input>
The input.
Trait Implementations
impl Clone for StreamingAnalyzeContentRequest
fn clone(&self) -> StreamingAnalyzeContentRequest
fn clone_from(&mut self, source: &Self)
impl Message for StreamingAnalyzeContentRequest
fn encoded_len(&self) -> usize
fn encode(&self, buf: &mut impl BufMut) -> Result<(), EncodeError> where Self: Sized
fn encode_to_vec(&self) -> Vec<u8> where Self: Sized
fn encode_length_delimited(&self, buf: &mut impl BufMut) -> Result<(), EncodeError> where Self: Sized
fn encode_length_delimited_to_vec(&self) -> Vec<u8> where Self: Sized
fn decode(buf: impl Buf) -> Result<Self, DecodeError> where Self: Default
fn decode_length_delimited(buf: impl Buf) -> Result<Self, DecodeError> where Self: Default
fn merge(&mut self, buf: impl Buf) -> Result<(), DecodeError> where Self: Sized
fn merge_length_delimited(&mut self, buf: impl Buf) -> Result<(), DecodeError> where Self: Sized
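A quick sketch of a round trip through a few of these prost::Message methods, assuming the prost crate is available alongside the generated types:

use prost::Message;
use google_api_proto::google::cloud::dialogflow::v2::StreamingAnalyzeContentRequest;

fn round_trip(req: &StreamingAnalyzeContentRequest) -> Result<StreamingAnalyzeContentRequest, prost::DecodeError> {
    // Serialize to protobuf wire format; bytes.len() == req.encoded_len().
    let bytes = req.encode_to_vec();
    // Deserialize back into the same message type.
    StreamingAnalyzeContentRequest::decode(bytes.as_slice())
}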
impl PartialEq for StreamingAnalyzeContentRequest
fn eq(&self, other: &StreamingAnalyzeContentRequest) -> bool
impl StructuralPartialEq for StreamingAnalyzeContentRequest
Auto Trait Implementations
impl !Freeze for StreamingAnalyzeContentRequest
impl RefUnwindSafe for StreamingAnalyzeContentRequest
impl Send for StreamingAnalyzeContentRequest
impl Sync for StreamingAnalyzeContentRequest
impl Unpin for StreamingAnalyzeContentRequest
impl UnwindSafe for StreamingAnalyzeContentRequest
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wrap the input message T in a tonic::Request.
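As a sketch of this blanket impl: into_request wraps the message so per-call metadata can be attached before sending. The header name and value below are placeholders, and for the streaming RPC the client ultimately takes a stream of these messages rather than a single request.

use tonic::IntoRequest;
use google_api_proto::google::cloud::dialogflow::v2::StreamingAnalyzeContentRequest;

fn to_request(msg: StreamingAnalyzeContentRequest) -> tonic::Request<StreamingAnalyzeContentRequest> {
    let mut request = msg.into_request();
    // Attach example (placeholder) metadata to the call.
    request
        .metadata_mut()
        .insert("x-example-header", "placeholder".parse().unwrap());
    request
}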