Structs

  • Request for performing Google Cloud Vision API tasks over a user-provided image, with user-requested features.
  • Response to an image annotation request.
  • Multiple image annotation requests are batched into a single service call.
  • Response to a batch image annotation request.
  • Logical element on the page.
  • A bounding polygon for the detected image annotation.
  • Color information consists of RGB channels, score, and the fraction of the image that the color occupies.
  • Single crop hint that is used to generate a new crop when serving an image.
  • Set of crop hints that are used to generate new crops when serving images.
  • Parameters for crop hints annotation request.
  • Set of dominant colors and their corresponding scores.
  • Set of detected entity features.
  • A face annotation object contains the results of face detection.
  • Users describe the type of Google Cloud Vision API tasks to perform over images by using Features. Each Feature indicates a type of image detection task to perform. Features encode the Cloud Vision API vertical to operate on and the number of top-scoring results to return. (A request-assembly sketch follows this struct list.)
  • Client image to perform Google Cloud Vision API tasks over.
  • Image context and/or feature-specific parameters.
  • Stores image properties, such as dominant colors.
  • External image source (Google Cloud Storage image location).
  • Rectangle determined by min and max LatLng pairs.
  • Detected entity location information.
  • Detected page from OCR.
  • Structural unit of text representing a number of words in a certain order.
  • A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.
  • A Property consists of a user-supplied name/value pair.
  • Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
  • A single symbol representation.
  • TextAnnotation contains a structured representation of OCR-extracted text. The hierarchy of an OCR-extracted text structure is: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may further have its own properties. Properties describe detected languages, breaks, etc. Please refer to the [TextAnnotation.TextProperty][google.cloud.vision.v1p1beta1.TextAnnotation.TextProperty] message definition below for more detail. (A response-walking sketch at the end of this page traverses this hierarchy.)
  • Parameters for text detections. This is used to control TEXT_DETECTION and DOCUMENT_TEXT_DETECTION features.
  • A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
  • Relevant information for the image from the Internet.
  • Parameters for web detection request.
  • A word representation.
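
The request structs above compose into a batched call. Below is a minimal sketch of that assembly; the module path `google_cloud_vision_v1p1beta1`, the prost-style generated fields (`r#type: i32`, `..Default::default()`), the numeric enum value, and the example URI are assumptions based on the public google.cloud.vision.v1p1beta1 protobuf definitions, not this crate's verified API.

```rust
// Hedged sketch: type and field names follow the v1p1beta1 protobuf messages
// summarized above; the crate/module path is assumed.
use google_cloud_vision_v1p1beta1::{
    AnnotateImageRequest, BatchAnnotateImagesRequest, Feature, Image, ImageSource,
};

fn build_batch_request() -> BatchAnnotateImagesRequest {
    // Point the Image at an external source in Google Cloud Storage.
    let image = Image {
        source: Some(ImageSource {
            gcs_image_uri: "gs://my-bucket/photo.jpg".to_string(), // hypothetical URI
            ..Default::default()
        }),
        ..Default::default()
    };

    // One Feature per detection task; max_results limits the top-scoring results.
    let features = vec![Feature {
        r#type: 4, // LABEL_DETECTION in the v1p1beta1 Feature.Type enum (assumed value)
        max_results: 10,
        ..Default::default()
    }];

    // Multiple AnnotateImageRequests can be batched into a single service call.
    BatchAnnotateImagesRequest {
        requests: vec![AnnotateImageRequest {
            image: Some(image),
            features,
            ..Default::default()
        }],
        ..Default::default()
    }
}
```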

Enums

  • A bucketized representation of likelihood, which is intended to give clients highly stable results across model upgrades.
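
On the response side, the OCR hierarchy and the bucketized Likelihood values are typically consumed together from an AnnotateImageResponse. The sketch below walks TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol and reads the adult safe-search vertical; as above, the module path and the prost-style enum accessor (`adult()`) are assumptions based on the protobuf definitions rather than this crate's confirmed API.

```rust
// Hedged sketch of consuming a response; module path and accessor are assumed.
use google_cloud_vision_v1p1beta1::{AnnotateImageResponse, Likelihood, TextAnnotation};

// Rebuild plain text by walking TextAnnotation -> Page -> Block -> Paragraph
// -> Word -> Symbol and concatenating each Symbol's text.
fn collect_text(annotation: &TextAnnotation) -> String {
    let mut out = String::new();
    for page in &annotation.pages {
        for block in &page.blocks {
            for paragraph in &block.paragraphs {
                for word in &paragraph.words {
                    for symbol in &word.symbols {
                        out.push_str(&symbol.text);
                    }
                    out.push(' ');
                }
            }
        }
    }
    out
}

// Treat the LIKELY and VERY_LIKELY buckets of the "adult" vertical as a flag;
// because Likelihood is bucketized, this check stays stable across model upgrades.
fn flag_adult_content(response: &AnnotateImageResponse) -> bool {
    response
        .safe_search_annotation
        .as_ref()
        .map(|safe| matches!(safe.adult(), Likelihood::Likely | Likelihood::VeryLikely))
        .unwrap_or(false)
}
```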