Module google_api_proto::google::cloud::vision::v1p2beta1

Structs§

  • AnnotateFileResponse: Response to a single file annotation request. A file may contain one or more images, which individually have their own responses.
  • AnnotateImageRequest: Request for performing Google Cloud Vision API tasks over a user-provided image, with user-requested features.
  • AnnotateImageResponse: Response to an image annotation request.
  • AsyncAnnotateFileRequest: An offline file annotation request.
  • AsyncAnnotateFileResponse: The response for a single offline file annotation request.
  • AsyncBatchAnnotateFilesRequest: Multiple async file annotation requests are batched into a single service call.
  • AsyncBatchAnnotateFilesResponse: Response to an async batch file annotation request.
  • BatchAnnotateImagesRequest: Multiple image annotation requests are batched into a single service call.
  • BatchAnnotateImagesResponse: Response to a batch image annotation request.
  • Block: Logical element on the page.
  • BoundingPoly: A bounding polygon for the detected image annotation.
  • ColorInfo: Color information consisting of RGB channels, a score, and the fraction of the image that the color occupies.
  • CropHint: A single crop hint that is used to generate a new crop when serving an image.
  • CropHintsAnnotation: Set of crop hints that are used to generate new crops when serving images.
  • CropHintsParams: Parameters for the crop hints annotation request.
  • DominantColorsAnnotation: Set of dominant colors and their corresponding scores.
  • EntityAnnotation: Set of detected entity features.
  • FaceAnnotation: A face annotation object contains the results of face detection.
  • Feature: The type of Google Cloud Vision API detection to perform, and the maximum number of results to return for that type. Multiple Feature objects can be specified in the features list.
  • GcsDestination: The Google Cloud Storage location where the output will be written.
  • GcsSource: The Google Cloud Storage location from which the input will be read.
  • Image: Client image to perform Google Cloud Vision API tasks over.
  • ImageAnnotationContext: If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image.
  • ImageContext: Image context and/or feature-specific parameters.
  • ImageProperties: Stores image properties, such as dominant colors.
  • ImageSource: External image source (Google Cloud Storage or web URL image location).
  • InputConfig: The desired input location and metadata.
  • LatLongRect: Rectangle determined by min and max LatLng pairs.
  • LocationInfo: Detected entity location information.
  • NormalizedVertex: A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
  • OperationMetadata: Contains metadata for the BatchAnnotateImages operation.
  • OutputConfig: The desired output location and metadata.
  • Page: Detected page from OCR.
  • Paragraph: Structural unit of text representing a number of words in a certain order.
  • Position: A 3D position in the image, used primarily for face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.
  • Property: A Property consists of a user-supplied name/value pair.
  • SafeSearchAnnotation: Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
  • Symbol: A single symbol representation.
  • TextAnnotation: Contains a structured representation of OCR-extracted text. The hierarchy of an OCR-extracted text structure is: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may have its own properties. Properties describe detected languages, breaks, etc. Refer to the [TextAnnotation.TextProperty][google.cloud.vision.v1p2beta1.TextAnnotation.TextProperty] message definition for more detail.
  • TextDetectionParams: Parameters for text detection. This is used to control TEXT_DETECTION and DOCUMENT_TEXT_DETECTION features.
  • Vertex: A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
  • WebDetection: Relevant information for the image from the Internet.
  • WebDetectionParams: Parameters for the web detection request.
  • Word: A word representation.
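To show how the request messages above fit together (an Image plus requested Features form an AnnotateImageRequest, and several of those are batched into one BatchAnnotateImagesRequest), here is a minimal sketch using simplified local mirrors of the message shapes. These are illustrative stand-ins, not the crate's generated types, and the `gs://` path is hypothetical:

```rust
// Simplified mirrors of the v1p2beta1 messages; the real structs are
// generated by prost and live in the `google-api-proto` crate.

#[derive(Debug, Clone)]
struct Feature {
    r#type: i32,      // detection type, e.g. DOCUMENT_TEXT_DETECTION (assumed value 11)
    max_results: i32, // maximum number of results to return for this type
}

#[derive(Debug, Clone)]
struct ImageSource {
    image_uri: String, // Google Cloud Storage or web URL image location
}

#[derive(Debug, Clone)]
struct Image {
    content: Vec<u8>,            // inline image bytes, or...
    source: Option<ImageSource>, // ...an external image source
}

#[derive(Debug, Clone)]
struct AnnotateImageRequest {
    image: Option<Image>,
    features: Vec<Feature>, // user-requested features for this image
}

#[derive(Debug, Clone)]
struct BatchAnnotateImagesRequest {
    requests: Vec<AnnotateImageRequest>, // multiple requests, one service call
}

// Build a one-image batch request for the given URI.
fn build_batch(uri: &str) -> BatchAnnotateImagesRequest {
    let request = AnnotateImageRequest {
        image: Some(Image {
            content: Vec::new(),
            source: Some(ImageSource { image_uri: uri.to_string() }),
        }),
        features: vec![Feature { r#type: 11, max_results: 10 }],
    };
    BatchAnnotateImagesRequest { requests: vec![request] }
}

fn main() {
    // Hypothetical bucket path, for illustration only.
    let batch = build_batch("gs://my-bucket/photo.jpg");
    println!("batched {} request(s)", batch.requests.len());
}
```

The same shape applies to the async file variants: AsyncAnnotateFileRequest values are collected into an AsyncBatchAnnotateFilesRequest, with an OutputConfig naming a GcsDestination for the offline results.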

Enums§

  • Likelihood: A bucketized representation of likelihood, which is intended to give clients highly stable results across model upgrades.
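The point of bucketizing is that clients can threshold on a stable ordinal value (for example in SafeSearchAnnotation fields) rather than on a raw score that may shift across model upgrades. A sketch with a local enum whose values mirror the proto definition (UNKNOWN = 0 through VERY_LIKELY = 5; the real type is prost-generated, and `should_flag` is a hypothetical helper):

```rust
// Local mirror of the Likelihood enum; discriminants follow the proto.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Likelihood {
    Unknown = 0,
    VeryUnlikely = 1,
    Unlikely = 2,
    Possible = 3,
    Likely = 4,
    VeryLikely = 5,
}

// Hypothetical client-side policy: flag anything at LIKELY or above.
// Comparing buckets stays meaningful even if the underlying model
// (and its raw scores) changes between releases.
fn should_flag(likelihood: Likelihood) -> bool {
    likelihood >= Likelihood::Likely
}

fn main() {
    assert!(should_flag(Likelihood::VeryLikely));
    assert!(!should_flag(Likelihood::Possible));
    println!("ok");
}
```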