Struct google_api_proto::google::cloud::automl::v1beta1::ModelEvaluation
pub struct ModelEvaluation {
pub name: String,
pub annotation_spec_id: String,
pub display_name: String,
pub create_time: Option<Timestamp>,
pub evaluated_example_count: i32,
pub metrics: Option<Metrics>,
}
Evaluation results of a model.
Fields

name: String
Output only. Resource name of the model evaluation. Format:
projects/{project_id}/locations/{location_id}/models/{model_id}/modelEvaluations/{model_evaluation_id}
annotation_spec_id: String
Output only. The ID of the annotation spec that the model evaluation applies to. The ID is empty for the overall model evaluation. For Tables, annotation specs do not exist in the dataset and this ID is never set; for Tables CLASSIFICATION
[prediction_type-s][google.cloud.automl.v1beta1.TablesModelMetadata.prediction_type] the [display_name][google.cloud.automl.v1beta1.ModelEvaluation.display_name] field is used instead.
display_name: String
Output only. The value of [display_name][google.cloud.automl.v1beta1.AnnotationSpec.display_name] at the moment when the model was trained. Because this field returns a value at model training time, different models trained from the same dataset may return different values, since display names could have been changed between the two models’ trainings. For Tables CLASSIFICATION
[prediction_type-s][google.cloud.automl.v1beta1.TablesModelMetadata.prediction_type], the distinct values of the target column at the moment of the model evaluation are populated here. The display_name is empty for the overall model evaluation.
create_time: Option<Timestamp>
Output only. Timestamp when this model evaluation was created.
evaluated_example_count: i32
Output only. The number of examples used for model evaluation, i.e. examples for which the ground truth from the time of model creation is compared against the predicted annotations created by the model. For the overall ModelEvaluation (i.e. with annotation_spec_id not set), this is the total number of all examples used for evaluation. Otherwise, this is the count of examples that, according to the ground truth, were annotated by the
[annotation_spec_id][google.cloud.automl.v1beta1.ModelEvaluation.annotation_spec_id].
metrics: Option<Metrics>
Output only. Problem type specific evaluation metrics.
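
The struct is plain data with public fields and a Default implementation, so an evaluation can be built or inspected directly. The sketch below is illustrative only: the resource name and counts are made-up values, and it assumes the google-api-proto crate is compiled with the feature flag that enables this module.

use google_api_proto::google::cloud::automl::v1beta1::ModelEvaluation;

fn describe(evaluation: &ModelEvaluation) {
    // An empty annotation_spec_id marks the overall model evaluation rather
    // than a per-annotation-spec one.
    if evaluation.annotation_spec_id.is_empty() {
        println!(
            "overall evaluation over {} examples",
            evaluation.evaluated_example_count
        );
    } else {
        println!(
            "evaluation for {} ({})",
            evaluation.display_name, evaluation.annotation_spec_id
        );
    }
}

fn main() {
    // Default yields empty strings, zero counts, and None for the options.
    let evaluation = ModelEvaluation {
        name: "projects/p/locations/l/models/m/modelEvaluations/e".to_string(),
        evaluated_example_count: 100,
        ..Default::default()
    };
    describe(&evaluation);
}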
Trait Implementations

impl Clone for ModelEvaluation

fn clone(&self) -> ModelEvaluation

fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.

impl Debug for ModelEvaluation

impl Default for ModelEvaluation
impl Message for ModelEvaluation

fn encoded_len(&self) -> usize

fn encode(&self, buf: &mut impl BufMut) -> Result<(), EncodeError>
where
    Self: Sized,

fn encode_to_vec(&self) -> Vec<u8>
where
    Self: Sized,

fn encode_length_delimited(&self, buf: &mut impl BufMut) -> Result<(), EncodeError>
where
    Self: Sized,

fn encode_length_delimited_to_vec(&self) -> Vec<u8>
where
    Self: Sized,

fn decode(buf: impl Buf) -> Result<Self, DecodeError>
where
    Self: Default,

fn decode_length_delimited(buf: impl Buf) -> Result<Self, DecodeError>
where
    Self: Default,

fn merge(&mut self, buf: impl Buf) -> Result<(), DecodeError>
where
    Self: Sized,
Decodes an instance of the message from a buffer, and merges it into self.

fn merge_length_delimited(&mut self, buf: impl Buf) -> Result<(), DecodeError>
where
    Self: Sized,
Decodes a length-delimited instance of the message from buffer, and merges it into self.
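
A minimal round-trip sketch of the Message implementation, assuming the prost crate is available alongside google-api-proto; the field values are arbitrary.

use google_api_proto::google::cloud::automl::v1beta1::ModelEvaluation;
use prost::Message;

fn roundtrip() -> Result<(), prost::DecodeError> {
    let original = ModelEvaluation {
        evaluated_example_count: 42,
        ..Default::default()
    };

    // encode_to_vec serializes the message into a protobuf byte buffer.
    let bytes = original.encode_to_vec();
    assert_eq!(bytes.len(), original.encoded_len());

    // decode rebuilds the message; PartialEq verifies the round trip.
    let decoded = ModelEvaluation::decode(bytes.as_slice())?;
    assert_eq!(original, decoded);
    Ok(())
}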
impl PartialEq for ModelEvaluation

fn eq(&self, other: &ModelEvaluation) -> bool
Tests for self and other values to be equal, and is used by ==.

impl StructuralPartialEq for ModelEvaluation
Auto Trait Implementations
impl Freeze for ModelEvaluation
impl RefUnwindSafe for ModelEvaluation
impl Send for ModelEvaluation
impl Sync for ModelEvaluation
impl Unpin for ModelEvaluation
impl UnwindSafe for ModelEvaluation
Blanket Implementations

impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T

fn into_request(self) -> Request<T>
Wraps the input message T in a tonic::Request.
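
A short sketch of the IntoRequest blanket implementation, assuming tonic is in use: any ModelEvaluation value can be wrapped in a tonic::Request before being handed to a generated gRPC client.

use google_api_proto::google::cloud::automl::v1beta1::ModelEvaluation;
use tonic::IntoRequest;

fn wrap(evaluation: ModelEvaluation) -> tonic::Request<ModelEvaluation> {
    // into_request wraps the message; request metadata (for example auth
    // headers) can then be attached to the resulting Request.
    evaluation.into_request()
}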