Struct google_api_proto::google::cloud::datalabeling::v1beta1::EvaluationJobConfig
pub struct EvaluationJobConfig {
pub input_config: Option<InputConfig>,
pub evaluation_config: Option<EvaluationConfig>,
pub human_annotation_config: Option<HumanAnnotationConfig>,
pub bigquery_import_keys: BTreeMap<String, String>,
pub example_count: i32,
pub example_sample_percentage: f64,
pub evaluation_job_alert_config: Option<EvaluationJobAlertConfig>,
pub human_annotation_request_config: Option<HumanAnnotationRequestConfig>,
}
Configures specific details of how a continuous evaluation job works. Provide this configuration when you create an EvaluationJob.
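A minimal sketch of how the required and optional fields fit together. The struct below is a hypothetical local stand-in with the same field shape as the generated type (complex message fields are reduced to `Option<String>` so the example is self-contained); it is not the crate's real `EvaluationJobConfig`.

```rust
use std::collections::BTreeMap;

// Hypothetical stand-in mirroring the generated struct's field shape.
#[derive(Debug, Default)]
struct EvaluationJobConfigSketch {
    input_config: Option<String>,
    evaluation_config: Option<String>,
    human_annotation_config: Option<String>,
    bigquery_import_keys: BTreeMap<String, String>,
    example_count: i32,
    example_sample_percentage: f64,
}

// Builds a config with the required fields filled in and the rest defaulted.
fn minimal_config() -> EvaluationJobConfigSketch {
    EvaluationJobConfigSketch {
        input_config: Some("bigquery_source".into()),
        // Empty object unless the model performs image object detection.
        evaluation_config: Some("{}".into()),
        bigquery_import_keys: BTreeMap::from([
            ("data_json_key".to_string(), "input".to_string()),
            ("label_json_key".to_string(), "predicted_label".to_string()),
            ("label_score_json_key".to_string(), "predicted_score".to_string()),
        ]),
        example_count: 1_000,
        example_sample_percentage: 0.1,
        ..Default::default()
    }
}

// Checks that every field documented as "Required" has been set.
fn required_fields_set(c: &EvaluationJobConfigSketch) -> bool {
    c.input_config.is_some()
        && c.evaluation_config.is_some()
        && !c.bigquery_import_keys.is_empty()
        && c.example_count > 0
        && c.example_sample_percentage > 0.0
}

fn main() {
    assert!(required_fields_set(&minimal_config()));
}
```

The field values (`"input"`, `"predicted_label"`, and so on) are illustrative placeholders, not column names from any real table.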
Fields§
§input_config: Option<InputConfig>
Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields:

- dataType must be one of IMAGE, TEXT, or GENERAL_DATA.
- annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection).
- If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel.
- You must specify bigquerySource (not gcsSource).
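The constraints above can be expressed as a small std-only check. The field values are treated as plain strings here purely for illustration; the real crate uses typed enums for these fields.

```rust
// Illustrative check of the documented `input_config` constraints on
// dataType and annotationType (values as plain strings, not crate enums).
fn input_config_fields_valid(data_type: &str, annotation_type: &str) -> bool {
    let data_ok = matches!(data_type, "IMAGE" | "TEXT" | "GENERAL_DATA");
    let annotation_ok = matches!(
        annotation_type,
        "IMAGE_CLASSIFICATION_ANNOTATION"
            | "TEXT_CLASSIFICATION_ANNOTATION"
            | "GENERAL_CLASSIFICATION_ANNOTATION"
            | "IMAGE_BOUNDING_BOX_ANNOTATION"
    );
    data_ok && annotation_ok
}

fn main() {
    // An image object detection job is a valid combination.
    assert!(input_config_fields_valid("IMAGE", "IMAGE_BOUNDING_BOX_ANNOTATION"));
    // VIDEO is not an accepted dataType for continuous evaluation.
    assert!(!input_config_fields_valid("VIDEO", "IMAGE_CLASSIFICATION_ANNOTATION"));
}
```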
§evaluation_config: Option<EvaluationConfig>
Required. Details for calculating evaluation metrics and creating [Evaluations][google.cloud.datalabeling.v1beta1.Evaluation]. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
§human_annotation_config: Option<HumanAnnotationConfig>
Optional. Details for human annotation of your data. If you set [labelMissingGroundTruth][google.cloud.datalabeling.v1beta1.EvaluationJob.label_missing_ground_truth] to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field.
Note that you must create an [Instruction][google.cloud.datalabeling.v1beta1.Instruction] resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
§bigquery_import_keys: BTreeMap<String, String>
Required. Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON.
You can provide the following entries in this field:

- data_json_key: the data key for prediction input. You must provide either this key or reference_json_key.
- reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key.
- label_json_key: the label key for prediction output. Required.
- label_score_json_key: the score key for prediction output. Required.
- bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection.
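A sketch of populating this map for a classification model, using std's `BTreeMap` (the field's actual type). The JSON key names come from the list above; the column names on the right are hypothetical.

```rust
use std::collections::BTreeMap;

// Hypothetical import keys for a text classification model version.
fn classification_import_keys() -> BTreeMap<String, String> {
    BTreeMap::from([
        // Either data_json_key or reference_json_key must be present.
        ("data_json_key".to_string(), "text".to_string()),
        // Both label keys are always required.
        ("label_json_key".to_string(), "predicted_label".to_string()),
        ("label_score_json_key".to_string(), "predicted_score".to_string()),
    ])
}

// Checks the documented "required" rules for a non-object-detection model.
fn keys_satisfy_rules(keys: &BTreeMap<String, String>) -> bool {
    (keys.contains_key("data_json_key") || keys.contains_key("reference_json_key"))
        && keys.contains_key("label_json_key")
        && keys.contains_key("label_score_json_key")
}

fn main() {
    assert!(keys_satisfy_rules(&classification_import_keys()));
}
```

For image object detection you would additionally supply a bounding_box_json_key entry.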
§example_count: i32
Required. The maximum number of predictions to sample and save to BigQuery during each [evaluation interval][google.cloud.datalabeling.v1beta1.EvaluationJob.schedule]. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
§example_sample_percentage: f64
Required. Fraction of predictions to sample and save to BigQuery during each [evaluation interval][google.cloud.datalabeling.v1beta1.EvaluationJob.schedule]. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
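The interaction between the percentage and the example_count cap can be sketched as simple arithmetic; this is an illustration of the documented behavior, not the service's actual sampling code.

```rust
// Sketch of how `example_count` caps `example_sample_percentage`: the
// service samples the given fraction of served predictions, but stops
// once the per-interval cap is reached.
fn sampled_per_interval(served: u64, example_count: i32, sample_percentage: f64) -> u64 {
    let by_percentage = (served as f64 * sample_percentage).floor() as u64;
    by_percentage.min(example_count as u64)
}

fn main() {
    // 10% of 50_000 predictions would be 5_000, but the cap of 2_000 wins.
    assert_eq!(sampled_per_interval(50_000, 2_000, 0.1), 2_000);
    // With fewer predictions, the percentage governs.
    assert_eq!(sampled_per_interval(5_000, 2_000, 0.1), 500);
}
```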
§evaluation_job_alert_config: Option<EvaluationJobAlertConfig>
Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
§human_annotation_request_config: Option<HumanAnnotationRequestConfig>
Required. Details for how you want human reviewers to provide ground truth labels.
Trait Implementations§
impl Clone for EvaluationJobConfig
fn clone(&self) -> EvaluationJobConfig
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for EvaluationJobConfig
impl Default for EvaluationJobConfig
impl Message for EvaluationJobConfig
fn encoded_len(&self) -> usize
fn encode(&self, buf: &mut impl BufMut) -> Result<(), EncodeError> where Self: Sized
fn encode_to_vec(&self) -> Vec<u8> where Self: Sized
fn encode_length_delimited(&self, buf: &mut impl BufMut) -> Result<(), EncodeError> where Self: Sized
fn encode_length_delimited_to_vec(&self) -> Vec<u8> where Self: Sized
fn decode(buf: impl Buf) -> Result<Self, DecodeError> where Self: Default
fn decode_length_delimited(buf: impl Buf) -> Result<Self, DecodeError> where Self: Default
fn merge(&mut self, buf: impl Buf) -> Result<(), DecodeError> where Self: Sized
Decodes an instance of the message from a buffer, and merges it into self.
fn merge_length_delimited(&mut self, buf: impl Buf) -> Result<(), DecodeError> where Self: Sized
Decodes a length-delimited instance of the message from a buffer, and merges it into self.
impl PartialEq for EvaluationJobConfig
fn eq(&self, other: &EvaluationJobConfig) -> bool
Tests for self and other values to be equal, and is used by ==.
impl StructuralPartialEq for EvaluationJobConfig
Auto Trait Implementations§
impl Freeze for EvaluationJobConfig
impl RefUnwindSafe for EvaluationJobConfig
impl Send for EvaluationJobConfig
impl Sync for EvaluationJobConfig
impl Unpin for EvaluationJobConfig
impl UnwindSafe for EvaluationJobConfig
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wrap the input message T in a tonic::Request