Struct googapis::google::cloud::videointelligence::v1p3beta1::VideoAnnotationResults
pub struct VideoAnnotationResults {
pub input_uri: String,
pub segment: Option<VideoSegment>,
pub segment_label_annotations: Vec<LabelAnnotation>,
pub segment_presence_label_annotations: Vec<LabelAnnotation>,
pub shot_label_annotations: Vec<LabelAnnotation>,
pub shot_presence_label_annotations: Vec<LabelAnnotation>,
pub frame_label_annotations: Vec<LabelAnnotation>,
pub face_detection_annotations: Vec<FaceDetectionAnnotation>,
pub shot_annotations: Vec<VideoSegment>,
pub explicit_annotation: Option<ExplicitContentAnnotation>,
pub speech_transcriptions: Vec<SpeechTranscription>,
pub text_annotations: Vec<TextAnnotation>,
pub object_annotations: Vec<ObjectTrackingAnnotation>,
pub logo_recognition_annotations: Vec<LogoRecognitionAnnotation>,
pub person_detection_annotations: Vec<PersonDetectionAnnotation>,
pub celebrity_recognition_annotations: Option<CelebrityRecognitionAnnotation>,
pub error: Option<Status>,
}
Annotation results for a single video.
Fields
input_uri: String
Video file location in Cloud Storage.
segment: Option<VideoSegment>
Video segment on which the annotation is run.
segment_label_annotations: Vec<LabelAnnotation>
Topical label annotations on video level or user-specified segment level. There is exactly one element for each unique label.
segment_presence_label_annotations: Vec<LabelAnnotation>
Presence label annotations on video level or user-specified segment level. There is exactly one element for each unique label. Compared to the existing topical segment_label_annotations, this field presents more fine-grained, segment-level labels detected in video content and is made available only when the client sets LabelDetectionConfig.model to “builtin/latest” in the request.
shot_label_annotations: Vec<LabelAnnotation>
Topical label annotations on shot level. There is exactly one element for each unique label.
shot_presence_label_annotations: Vec<LabelAnnotation>
Presence label annotations on shot level. There is exactly one element for each unique label. Compared to the existing topical shot_label_annotations, this field presents more fine-grained, shot-level labels detected in video content and is made available only when the client sets LabelDetectionConfig.model to “builtin/latest” in the request.
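As a rough illustration of the “builtin/latest” requirement above, here is a minimal sketch using a simplified stand-in for the request-side config (the real LabelDetectionConfig lives in the same generated module and has more fields):

```rust
// Simplified stand-in for the generated LabelDetectionConfig;
// only the field relevant to presence labels is shown.
#[derive(Debug, Default)]
struct LabelDetectionConfig {
    model: String,
}

// Presence label annotations are only populated when the model
// is set to "builtin/latest" in the request.
fn requests_presence_labels(config: &LabelDetectionConfig) -> bool {
    config.model == "builtin/latest"
}

fn main() {
    let mut config = LabelDetectionConfig::default();
    assert!(!requests_presence_labels(&config));

    config.model = "builtin/latest".to_string();
    assert!(requests_presence_labels(&config));
}
```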
frame_label_annotations: Vec<LabelAnnotation>
Label annotations on frame level. There is exactly one element for each unique label.
face_detection_annotations: Vec<FaceDetectionAnnotation>
Face detection annotations.
shot_annotations: Vec<VideoSegment>
Shot annotations. Each shot is represented as a video segment.
explicit_annotation: Option<ExplicitContentAnnotation>
Explicit content annotation.
speech_transcriptions: Vec<SpeechTranscription>
Speech transcription.
text_annotations: Vec<TextAnnotation>
OCR text detection and tracking. Annotations for the list of detected text snippets. Each snippet has a list of frame information associated with it.
object_annotations: Vec<ObjectTrackingAnnotation>
Annotations for the list of objects detected and tracked in the video.
logo_recognition_annotations: Vec<LogoRecognitionAnnotation>
Annotations for the list of logos detected, tracked, and recognized in the video.
person_detection_annotations: Vec<PersonDetectionAnnotation>
Person detection annotations.
celebrity_recognition_annotations: Option<CelebrityRecognitionAnnotation>
Celebrity recognition annotations.
error: Option<Status>
If set, indicates an error. Note that for a single AnnotateVideoRequest
some videos may succeed and some may fail.
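Because a single request can return a mix of succeeded and failed videos, callers typically split results on this field. A minimal sketch of that pattern, using simplified stand-ins for the generated types (the real definitions live in googapis::google::cloud::videointelligence::v1p3beta1, and Status here is reduced to a code and message):

```rust
// Simplified stand-ins for the generated prost types;
// the many annotation fields are elided for brevity.
#[derive(Debug, Clone)]
struct Status {
    code: i32,
    message: String,
}

#[derive(Debug, Clone, Default)]
struct VideoAnnotationResults {
    input_uri: String,
    error: Option<Status>,
}

// Partition per-video results into successes and failures:
// a single AnnotateVideoRequest can yield both.
fn partition_results(
    results: Vec<VideoAnnotationResults>,
) -> (Vec<VideoAnnotationResults>, Vec<VideoAnnotationResults>) {
    results.into_iter().partition(|r| r.error.is_none())
}

fn main() {
    let results = vec![
        VideoAnnotationResults {
            input_uri: "gs://bucket/ok.mp4".into(),
            error: None,
        },
        VideoAnnotationResults {
            input_uri: "gs://bucket/bad.mp4".into(),
            error: Some(Status { code: 3, message: "invalid input".into() }),
        },
    ];
    let (ok, failed) = partition_results(results);
    assert_eq!(ok.len(), 1);
    assert_eq!(failed.len(), 1);
    assert_eq!(failed[0].input_uri, "gs://bucket/bad.mp4");
}
```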
Trait Implementations
fn merge_field<B>(
&mut self,
tag: u32,
wire_type: WireType,
buf: &mut B,
ctx: DecodeContext
) -> Result<(), DecodeError> where
B: Buf,
Returns the encoded length of the message without a length delimiter.
Encodes the message to a buffer.
Encodes the message to a newly allocated buffer.
Encodes the message with a length-delimiter to a buffer.
Encodes the message with a length-delimiter to a newly allocated buffer.
Decodes an instance of the message from a buffer.
fn decode_length_delimited<B>(buf: B) -> Result<Self, DecodeError> where
Self: Default,
B: Buf,
Decodes a length-delimited instance of the message from the buffer.
Decodes an instance of the message from a buffer, and merges it into self.
Decodes a length-delimited instance of the message from the buffer, and merges it into self.
This method tests for self and other values to be equal, and is used by ==.
This method tests for !=.
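The length-delimited variants above follow standard protobuf framing: each encoded message is prefixed with its byte length as a varint. A stdlib-only sketch of that framing on raw bytes (real code would call prost's Message::encode_length_delimited and decode_length_delimited on the generated type rather than hand-rolling this):

```rust
// Prefix a payload with its length as a LEB128 varint, as the
// *_length_delimited methods do for protobuf messages.
fn encode_length_delimited(payload: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    let mut len = payload.len() as u64;
    // 7 bits per byte; the high bit marks continuation.
    loop {
        let byte = (len & 0x7f) as u8;
        len >>= 7;
        if len == 0 {
            out.push(byte);
            break;
        }
        out.push(byte | 0x80);
    }
    out.extend_from_slice(payload);
    out
}

// Split one length-delimited frame off the front of a buffer,
// returning (payload, remaining bytes), or None if truncated.
fn decode_length_delimited(buf: &[u8]) -> Option<(&[u8], &[u8])> {
    let mut len: u64 = 0;
    let mut shift = 0;
    let mut i = 0;
    loop {
        let byte = *buf.get(i)?;
        len |= u64::from(byte & 0x7f) << shift;
        i += 1;
        if byte & 0x80 == 0 {
            break;
        }
        shift += 7;
    }
    let end = i.checked_add(len as usize)?;
    if end > buf.len() {
        return None;
    }
    Some((&buf[i..end], &buf[end..]))
}

fn main() {
    let framed = encode_length_delimited(b"hello");
    assert_eq!(framed[0], 5); // single-byte varint length

    let (payload, rest) = decode_length_delimited(&framed).unwrap();
    assert_eq!(payload, &b"hello"[..]);
    assert!(rest.is_empty());

    // Lengths over 127 take a multi-byte varint: 300 = 0xAC 0x02.
    let big = vec![7u8; 300];
    let framed = encode_length_delimited(&big);
    assert_eq!(&framed[..2], &[0xAC, 0x02]);
}
```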
Auto Trait Implementations
impl RefUnwindSafe for VideoAnnotationResults
impl Send for VideoAnnotationResults
impl Sync for VideoAnnotationResults
impl Unpin for VideoAnnotationResults
impl UnwindSafe for VideoAnnotationResults
Blanket Implementations
Mutably borrows from an owned value.
Wraps the input message T in a tonic::Request.
pub fn vzip(self) -> V
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.