Module google_api_proto::google::cloud::speech::v2


Structs

  • The access metadata for a particular region. This can be applied if the org policy for the given project disallows a particular region.
  • Automatically detected decoding parameters, supported for a limited set of encodings.
  • Metadata about a single file in a batch for BatchRecognize.
  • Final results for a single file.
  • Operation metadata for [BatchRecognize][google.cloud.speech.v2.Speech.BatchRecognize].
  • Request message for the [BatchRecognize][google.cloud.speech.v2.Speech.BatchRecognize] method.
  • Response message for [BatchRecognize][google.cloud.speech.v2.Speech.BatchRecognize] that is packaged into a long-running [Operation][google.longrunning.Operation].
  • Output type for Cloud Storage of BatchRecognize transcripts. Although this proto is never returned directly by this API, the Cloud Storage transcripts are this proto serialized and should be parsed as such.
  • Metadata about transcription for a single file (for example, progress percent).
  • Final results written to Cloud Storage.
  • Message representing the config for the Speech-to-Text API. This includes an optional KMS key with which incoming data will be encrypted.
  • Request message for the [CreateCustomClass][google.cloud.speech.v2.Speech.CreateCustomClass] method.
  • Request message for the [CreatePhraseSet][google.cloud.speech.v2.Speech.CreatePhraseSet] method.
  • Request message for the [CreateRecognizer][google.cloud.speech.v2.Speech.CreateRecognizer] method.
  • CustomClass for biasing in speech recognition. Used to define a set of words or phrases that represents a common concept or theme likely to appear in your audio, for example a list of passenger ship names.
  • Request message for the [DeleteCustomClass][google.cloud.speech.v2.Speech.DeleteCustomClass] method.
  • Request message for the [DeletePhraseSet][google.cloud.speech.v2.Speech.DeletePhraseSet] method.
  • Request message for the [DeleteRecognizer][google.cloud.speech.v2.Speech.DeleteRecognizer] method.
  • Explicitly specified decoding parameters.
  • Output configurations for Cloud Storage.
  • Request message for the [GetConfig][google.cloud.speech.v2.Speech.GetConfig] method.
  • Request message for the [GetCustomClass][google.cloud.speech.v2.Speech.GetCustomClass] method.
  • Request message for the [GetPhraseSet][google.cloud.speech.v2.Speech.GetPhraseSet] method.
  • Request message for the [GetRecognizer][google.cloud.speech.v2.Speech.GetRecognizer] method.
  • Output configurations for inline response.
  • Final results returned inline in the recognition response.
  • The metadata about locales available in a given region. Currently this consists only of the models available for each locale.
  • Request message for the [ListCustomClasses][google.cloud.speech.v2.Speech.ListCustomClasses] method.
  • Response message for the [ListCustomClasses][google.cloud.speech.v2.Speech.ListCustomClasses] method.
  • Request message for the [ListPhraseSets][google.cloud.speech.v2.Speech.ListPhraseSets] method.
  • Response message for the [ListPhraseSets][google.cloud.speech.v2.Speech.ListPhraseSets] method.
  • Request message for the [ListRecognizers][google.cloud.speech.v2.Speech.ListRecognizers] method.
  • Response message for the [ListRecognizers][google.cloud.speech.v2.Speech.ListRecognizers] method.
  • Main metadata for the Locations API for STT V2. Currently this is just the metadata about locales, models, and features.
  • Represents a singular feature of a model. If the feature is recognizer, the release_state of the feature represents the release_state of the model.
  • Represents the collection of features belonging to a model.
  • The metadata about the models in a given region for a specific locale. Currently this is just the features of the model.
  • Output configurations for serialized BatchRecognizeResults protos.
  • Represents the metadata of a long-running operation.
  • Configuration for the format of the results stored to output.
  • PhraseSet for biasing in speech recognition. A PhraseSet is used to provide “hints” to the speech recognizer to favor specific words and phrases in the results.
  • Provides information to the Recognizer that specifies how to process the recognition request.
  • Available recognition features.
  • Configuration options for the output(s) of recognition.
  • Metadata about the recognition request and response.
  • Request message for the [Recognize][google.cloud.speech.v2.Speech.Recognize] method. Either content or uri must be supplied. Supplying both or neither returns [INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]. See content limits.
  • Response message for the [Recognize][google.cloud.speech.v2.Speech.Recognize] method.
  • A Recognizer message. Stores recognition configuration and metadata.
  • Configuration to enable speaker diarization.
  • Provides “hints” to the speech recognizer to favor specific words and phrases in the results. PhraseSets can be specified as an inline resource, or a reference to an existing PhraseSet resource.
  • Alternative hypotheses (a.k.a. n-best list).
  • A speech recognition result corresponding to a portion of the audio.
  • Output configurations for SubRip Text formatted subtitle files.
  • Provides configuration information for the StreamingRecognize request.
  • Available recognition features specific to streaming recognition requests.
  • A streaming speech recognition result corresponding to a portion of the audio that is currently being processed.
  • Request message for the [StreamingRecognize][google.cloud.speech.v2.Speech.StreamingRecognize] method. Multiple [StreamingRecognizeRequest][google.cloud.speech.v2.StreamingRecognizeRequest] messages are sent in one call.
  • StreamingRecognizeResponse is the only message returned to the client by StreamingRecognize. A series of zero or more StreamingRecognizeResponse messages are streamed back to the client. If there is no recognizable audio then no messages are streamed back to the client.
  • Transcription normalization configuration. Use transcription normalization to automatically replace parts of the transcript with phrases of your choosing. For StreamingRecognize, this normalization only applies to stable partial transcripts (stability > 0.8) and final transcripts.
  • Translation configuration. Use to translate the given audio into text for the desired language.
  • Request message for the [UndeleteCustomClass][google.cloud.speech.v2.Speech.UndeleteCustomClass] method.
  • Request message for the [UndeletePhraseSet][google.cloud.speech.v2.Speech.UndeletePhraseSet] method.
  • Request message for the [UndeleteRecognizer][google.cloud.speech.v2.Speech.UndeleteRecognizer] method.
  • Request message for the [UpdateConfig][google.cloud.speech.v2.Speech.UpdateConfig] method.
  • Request message for the [UpdateCustomClass][google.cloud.speech.v2.Speech.UpdateCustomClass] method.
  • Request message for the [UpdatePhraseSet][google.cloud.speech.v2.Speech.UpdatePhraseSet] method.
  • Request message for the [UpdateRecognizer][google.cloud.speech.v2.Speech.UpdateRecognizer] method.
  • Output configurations for WebVTT formatted subtitle files.
  • Word-specific information for recognized words.