Struct google_api_proto::google::cloud::datalabeling::v1beta1::EvaluationJob
pub struct EvaluationJob {
pub name: String,
pub description: String,
pub state: i32,
pub schedule: String,
pub model_version: String,
pub evaluation_job_config: Option<EvaluationJobConfig>,
pub annotation_spec_set: String,
pub label_missing_ground_truth: bool,
pub attempts: Vec<Attempt>,
pub create_time: Option<Timestamp>,
}
Defines an evaluation job that runs periodically to generate [Evaluations][google.cloud.datalabeling.v1beta1.Evaluation]. Creating an evaluation job is the starting point for using continuous evaluation.
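As a rough sketch of filling in this message before submitting it to the service: the generated crate is not assumed to be on hand here, so a local mirror of the relevant fields stands in for the real `google_api_proto` type, and all resource names are placeholders.

```rust
// Illustrative stand-in for the generated EvaluationJob message.
// Field names and types follow the struct definition above.
#[derive(Clone, Debug, Default, PartialEq)]
struct EvaluationJob {
    name: String, // output only, assigned by the service
    description: String,
    state: i32,
    schedule: String,
    model_version: String,
    annotation_spec_set: String,
    label_missing_ground_truth: bool,
}

fn sample_job() -> EvaluationJob {
    EvaluationJob {
        description: "Continuous evaluation for my model".to_string(),
        schedule: "0 10 * * *".to_string(), // only the interval is honored
        model_version: "projects/my-project/models/my_model/versions/v1".to_string(),
        annotation_spec_set: "projects/my-project/annotationSpecSets/my_specs".to_string(),
        label_missing_ground_truth: false, // ground truth supplied via BigQuery
        ..Default::default()
    }
}

fn main() {
    let job = sample_job();
    // name, state, attempts, and create_time are output-only:
    // the service populates them after creation.
    assert!(job.name.is_empty());
    println!("{job:?}");
}
```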
Fields
name: String
Output only. After you create a job, Data Labeling Service assigns a name to the job with the following format:
“projects/{project_id}/evaluationJobs/{evaluation_job_id}”
description: String
Required. Description of the job. The description can be up to 25,000 characters long.
state: i32
Output only. Describes the current state of the job.
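prost represents proto enums on the wire as `i32`, so `state` is read back through an enum conversion. This std-only sketch mirrors that pattern with an illustrative `State` enum; the actual variant names and values live in the generated `evaluation_job::State` type and may differ.

```rust
// Illustrative enum standing in for the generated evaluation_job::State.
#[derive(Clone, Copy, Debug, PartialEq)]
enum State {
    Unspecified = 0,
    Scheduled = 1,
    Running = 2,
    Paused = 3,
    Stopped = 4,
}

impl TryFrom<i32> for State {
    type Error = i32;
    fn try_from(v: i32) -> Result<Self, i32> {
        match v {
            0 => Ok(State::Unspecified),
            1 => Ok(State::Scheduled),
            2 => Ok(State::Running),
            3 => Ok(State::Paused),
            4 => Ok(State::Stopped),
            other => Err(other), // unknown values stay available as raw i32
        }
    }
}

fn main() {
    let raw_state = 2; // as stored in EvaluationJob::state
    assert_eq!(State::try_from(raw_state), Ok(State::Running));
    assert!(State::try_from(99).is_err());
}
```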
schedule: String
Required. Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days.
You can provide the schedule in crontab format or in an English-like format.
Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
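The rounding rule above can be sketched as a small helper; this is an illustration of the documented behavior (intervals rounded to the nearest whole day), not the service's actual implementation.

```rust
// Round an interval in hours to the nearest whole number of days,
// with a minimum of one day, as the schedule documentation describes.
fn rounded_interval_days(hours: u32) -> u32 {
    ((hours as f64) / 24.0).round().max(1.0) as u32
}

fn main() {
    // The example from the docs: a 50-hour interval runs every 2 days.
    assert_eq!(rounded_interval_days(50), 2);
    assert_eq!(rounded_interval_days(24), 1);
    println!("50h -> every {} days", rounded_interval_days(50));
}
```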
model_version: String
Required. The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format:
“projects/{project_id}/models/{model_name}/versions/{version_name}”
There can only be one evaluation job per model version.
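The resource-name format above can be assembled with a small helper; the function name and the IDs are illustrative, not part of the generated API.

```rust
// Build the documented model-version resource name from its parts.
fn model_version_name(project: &str, model: &str, version: &str) -> String {
    format!("projects/{project}/models/{model}/versions/{version}")
}

fn main() {
    let name = model_version_name("my-project", "my_model", "v1");
    assert_eq!(name, "projects/my-project/models/my_model/versions/v1");
}
```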
evaluation_job_config: Option<EvaluationJobConfig>
Required. Configuration details for the evaluation job.
annotation_spec_set: String
Required. Name of the [AnnotationSpecSet][google.cloud.datalabeling.v1beta1.AnnotationSpecSet] describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format:
“projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}”
label_missing_ground_truth: bool
Required. Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job’s BigQuery table, set this to false.
attempts: Vec<Attempt>
Output only. Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
create_time: Option<Timestamp>
Output only. Timestamp of when this evaluation job was created.
Trait Implementations
impl Clone for EvaluationJob
fn clone(&self) -> EvaluationJob
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.

impl Debug for EvaluationJob
impl Default for EvaluationJob
impl Message for EvaluationJob
fn encoded_len(&self) -> usize
fn encode(&self, buf: &mut impl BufMut) -> Result<(), EncodeError> where Self: Sized
fn encode_to_vec(&self) -> Vec<u8> where Self: Sized
fn encode_length_delimited(&self, buf: &mut impl BufMut) -> Result<(), EncodeError> where Self: Sized
fn encode_length_delimited_to_vec(&self) -> Vec<u8> where Self: Sized
fn decode(buf: impl Buf) -> Result<Self, DecodeError> where Self: Default
fn decode_length_delimited(buf: impl Buf) -> Result<Self, DecodeError> where Self: Default
fn merge(&mut self, buf: impl Buf) -> Result<(), DecodeError> where Self: Sized
fn merge_length_delimited(&mut self, buf: impl Buf) -> Result<(), DecodeError> where Self: Sized
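The `*_length_delimited` pair above prefixes the encoded message with its byte length as a protobuf varint, which lets several messages be written to one stream. This std-only toy illustrates that framing with an arbitrary byte payload standing in for a real encoded EvaluationJob; it is a sketch of the wire format, not prost's implementation.

```rust
// Append v to out in protobuf varint encoding (7 bits per byte,
// high bit set on all bytes except the last).
fn encode_varint(mut v: u64, out: &mut Vec<u8>) {
    loop {
        let byte = (v & 0x7f) as u8;
        v >>= 7;
        if v == 0 {
            out.push(byte);
            break;
        }
        out.push(byte | 0x80);
    }
}

// Decode a varint from the front of buf; returns (value, bytes consumed).
fn decode_varint(buf: &[u8]) -> (u64, usize) {
    let (mut v, mut shift, mut i) = (0u64, 0, 0);
    loop {
        let b = buf[i];
        v |= ((b & 0x7f) as u64) << shift;
        i += 1;
        if b & 0x80 == 0 {
            break;
        }
        shift += 7;
    }
    (v, i)
}

// Frame a message: varint length prefix, then the message bytes.
fn encode_length_delimited(msg: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    encode_varint(msg.len() as u64, &mut out);
    out.extend_from_slice(msg);
    out
}

// Strip the length prefix and return the message bytes.
fn decode_length_delimited(buf: &[u8]) -> &[u8] {
    let (len, consumed) = decode_varint(buf);
    &buf[consumed..consumed + len as usize]
}

fn main() {
    let payload: &[u8] = b"evaluation-job-bytes";
    let framed = encode_length_delimited(payload);
    assert_eq!(decode_length_delimited(&framed), payload);
    println!("framed {} payload bytes as {} bytes", payload.len(), framed.len());
}
```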
impl PartialEq for EvaluationJob
fn eq(&self, other: &EvaluationJob) -> bool
Tests self and other values for equality; used by ==.

impl StructuralPartialEq for EvaluationJob
Auto Trait Implementations
impl Freeze for EvaluationJob
impl RefUnwindSafe for EvaluationJob
impl Send for EvaluationJob
impl Sync for EvaluationJob
impl Unpin for EvaluationJob
impl UnwindSafe for EvaluationJob
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wraps a T in a tonic::Request.