Specifies how to process the AssistRequest messages.
The top-level message sent by the client. Clients must send at least two, and typically many, AssistRequest messages. The first message must contain a config message and must not contain audio_in data. All subsequent messages must contain audio_in data and must not contain a config message.
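Under these rules, a typical client builds the request stream with a generator: one config-only message followed by audio-only messages. The sketch below models the messages as plain dicts rather than the real protobuf types (only the config and audio_in field names come from the description above); it illustrates the required ordering, not the actual client library API.

```python
def assist_requests(config, audio_chunks):
    """Yield AssistRequest-shaped dicts in the required order.

    The first message carries only `config`; every later message
    carries only `audio_in`. Plain dicts stand in for the real
    protobuf messages.
    """
    yield {"config": config}
    for chunk in audio_chunks:
        yield {"audio_in": chunk}

requests = list(assist_requests({"sample_rate": 16000}, [b"\x00\x01", b"\x02\x03"]))
```

Passing the generator directly to a streaming RPC call preserves this ordering without buffering the whole utterance in memory.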
The top-level message received by the client. A series of one or more AssistResponse messages is streamed back to the client.
Specifies how to process the audio_in data that will be provided in subsequent requests. For recommended settings, see the Google Assistant SDK best practices.
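As a concrete illustration, a config of roughly this shape would accompany the first request. The field names (encoding, sample_rate_hertz) and values below are assumptions, not taken from the text above; a plain dict stands in for the protobuf message.

```python
# Illustrative audio_in configuration (plain dict as a stand-in for
# the protobuf config message; field names and values are assumptions).
audio_in_config = {
    "encoding": "LINEAR16",       # raw 16-bit signed little-endian PCM
    "sample_rate_hertz": 16000,   # a commonly recommended capture rate
}
```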
The audio containing the Assistant’s response to the query. Sequential chunks of audio data are received in sequential AssistResponse messages.
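Because the audio arrives as sequential chunks across responses, a client can simply append each audio_out payload to a buffer as it streams in. A minimal sketch, again with plain dicts standing in for the protobuf messages:

```python
def collect_audio(responses):
    """Concatenate the audio_out chunks from a response stream."""
    buf = bytearray()
    for resp in responses:
        chunk = resp.get("audio_out")
        if chunk:
            buf.extend(chunk)
    return bytes(buf)

audio = collect_audio([{"audio_out": b"ab"}, {}, {"audio_out": b"cd"}])
```

In a real client the chunks would be fed to an audio sink incrementally rather than buffered, but the ordering guarantee is the same.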
Specifies the desired format for the server to use when it returns audio_out messages.
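For illustration, such a format request might look like the dict below. The field names and encoding values are assumptions about the proto, not taken from the text above.

```python
# Illustrative audio_out format request (plain dict standing in for
# the protobuf config message; field names and values are assumptions).
audio_out_config = {
    "encoding": "MP3",            # requested response encoding
    "sample_rate_hertz": 16000,   # requested playback sample rate
    "volume_percentage": 80,      # playback volume, 0-100
}
```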
Debugging parameters for the current request.
Debug information for the developer. Only returned if the request set return_debug_info to true.
The response returned to the device if the user has triggered a Device Action. For example, a device that supports the query "Turn on the light" would receive a DeviceAction with a JSON payload containing the semantics of the request.
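A client typically decodes that JSON payload to dispatch the action locally. The payload shape below (an EXECUTE-style command list) is purely illustrative; the only fact taken from the text above is that the action arrives as a JSON string.

```python
import json

# Hypothetical payload shape, for illustration only.
device_request_json = json.dumps({
    "inputs": [{
        "intent": "action.devices.EXECUTE",
        "payload": {"commands": [
            {"execution": [{"command": "OnOff", "params": {"on": True}}]},
        ]},
    }],
})

def handle_device_action(request_json):
    """Decode the JSON payload and return the first command and its params."""
    request = json.loads(request_json)
    execution = request["inputs"][0]["payload"]["commands"][0]["execution"][0]
    return execution["command"], execution["params"]

command, params = handle_device_action(device_request_json)
```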
Required. Fields that identify the device to the Assistant.
There are three sources of locations. They are used with this precedence:
Provides information about the current dialog state.
The dialog state resulting from the user’s query. Multiple such messages may be received.
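A common pattern is to carry the dialog state from one turn into the next: save the opaque state returned with the response and send it back with the next request's dialog state. The field names below (conversation_state, language_code) are assumptions about the proto; plain dicts stand in for the protobuf messages.

```python
def next_dialog_state_in(dialog_state_out, language_code="en-US"):
    """Build the next turn's dialog-state input from the previous output.

    Field names are assumptions; the opaque state token is passed back
    unmodified so the server can continue the conversation.
    """
    return {
        "language_code": language_code,
        "conversation_state": dialog_state_out.get("conversation_state", b""),
    }

state_in = next_dialog_state_in({"conversation_state": b"opaque-token"})
```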
The Assistant’s visual output response to the query. Enabled by screen_out_config.
Specifies the desired format for the server to use when it returns a screen_out response.
The estimated transcription of a phrase the user has spoken. This could be
a single segment or the full guess of the user’s spoken query.
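Since a transcription may arrive as several segments that together form the current guess, one simple client strategy is to join the segments for display as they arrive. The transcript and stability field names below are assumptions; dicts stand in for the protobuf results.

```python
def current_guess(speech_results):
    """Join the transcript segments of the latest recognition results."""
    return " ".join(r["transcript"] for r in speech_results if r.get("transcript"))

guess = current_guess([
    {"transcript": "turn on", "stability": 0.9},   # early, stable segment
    {"transcript": "the light", "stability": 0.5}, # later, still-changing segment
])
```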