Struct google_api_proto::google::cloud::aiplatform::v1::ModelDeploymentMonitoringJob
pub struct ModelDeploymentMonitoringJob {
pub name: String,
pub display_name: String,
pub endpoint: String,
pub state: i32,
pub schedule_state: i32,
pub latest_monitoring_pipeline_metadata: Option<LatestMonitoringPipelineMetadata>,
pub model_deployment_monitoring_objective_configs: Vec<ModelDeploymentMonitoringObjectiveConfig>,
pub model_deployment_monitoring_schedule_config: Option<ModelDeploymentMonitoringScheduleConfig>,
pub logging_sampling_strategy: Option<SamplingStrategy>,
pub model_monitoring_alert_config: Option<ModelMonitoringAlertConfig>,
pub predict_instance_schema_uri: String,
pub sample_predict_instance: Option<Value>,
pub analysis_instance_schema_uri: String,
pub bigquery_tables: Vec<ModelDeploymentMonitoringBigQueryTable>,
pub log_ttl: Option<Duration>,
pub labels: BTreeMap<String, String>,
pub create_time: Option<Timestamp>,
pub update_time: Option<Timestamp>,
pub next_schedule_time: Option<Timestamp>,
pub stats_anomalies_base_directory: Option<GcsDestination>,
pub encryption_spec: Option<EncryptionSpec>,
pub enable_monitoring_pipeline_logs: bool,
pub error: Option<Status>,
pub satisfies_pzs: bool,
pub satisfies_pzi: bool,
}
Represents a job that runs periodically to monitor the deployed models in an endpoint. It will analyze the logged training & prediction data to detect any abnormal behaviors.
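A minimal construction sketch, assuming the sibling types ModelDeploymentMonitoringScheduleConfig and SamplingStrategy are re-exported from this same v1 module; the display name, project, and endpoint ID below are placeholders, not required values.

use google_api_proto::google::cloud::aiplatform::v1::{
    ModelDeploymentMonitoringJob, ModelDeploymentMonitoringScheduleConfig, SamplingStrategy,
};

// Only the fields relevant to the sketch are set; the prost-generated Default
// covers everything else.
let job = ModelDeploymentMonitoringJob {
    display_name: "my-monitoring-job".to_string(),
    endpoint: "projects/my-project/locations/us-central1/endpoints/1234567890".to_string(),
    model_deployment_monitoring_schedule_config: Some(ModelDeploymentMonitoringScheduleConfig::default()),
    logging_sampling_strategy: Some(SamplingStrategy::default()),
    ..Default::default()
};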
Fields
name: String
Output only. Resource name of a ModelDeploymentMonitoringJob.
display_name: String
Required. The user-defined display name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
endpoint: String
Required. Endpoint resource name.
Format: projects/{project}/locations/{location}/endpoints/{endpoint}
state: i32
Output only. The detailed state of the monitoring job. While the job is being created, the state is ‘PENDING’; once it is successfully created, the state becomes ‘RUNNING’. Pausing the job sets the state to ‘PAUSED’; resuming it returns the state to ‘RUNNING’.
schedule_state: i32
Output only. Schedule state when the monitoring job is in Running state.
latest_monitoring_pipeline_metadata: Option<LatestMonitoringPipelineMetadata>
Output only. Latest triggered monitoring pipeline metadata.
model_deployment_monitoring_objective_configs: Vec<ModelDeploymentMonitoringObjectiveConfig>
Required. The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
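A sketch of the per-DeployedModel layout described above; the deployed model IDs are placeholders, and the deployed_model_id field on ModelDeploymentMonitoringObjectiveConfig is assumed from the v1 proto.

use google_api_proto::google::cloud::aiplatform::v1::ModelDeploymentMonitoringObjectiveConfig;

// One entry per DeployedModel on the endpoint; each DeployedModel is configured separately.
let objective_configs = vec![
    ModelDeploymentMonitoringObjectiveConfig {
        deployed_model_id: "1234567890".to_string(),
        ..Default::default()
    },
    ModelDeploymentMonitoringObjectiveConfig {
        deployed_model_id: "2345678901".to_string(),
        ..Default::default()
    },
];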
model_deployment_monitoring_schedule_config: Option<ModelDeploymentMonitoringScheduleConfig>
Required. Schedule config for running the monitoring job.
logging_sampling_strategy: Option<SamplingStrategy>
Required. Sampling strategy for logging.
model_monitoring_alert_config: Option<ModelMonitoringAlertConfig>
Alert config for model monitoring.
predict_instance_schema_uri: String
YAML schema file URI describing the format of a single instance that is submitted to this Endpoint for prediction (and explanation). If not set, the predict schema is generated from collected predict requests.
sample_predict_instance: Option<Value>
Sample predict instance in the same format as [PredictRequest.instances][google.cloud.aiplatform.v1.PredictRequest.instances]. This can be set as a replacement for [ModelDeploymentMonitoringJob.predict_instance_schema_uri][google.cloud.aiplatform.v1.ModelDeploymentMonitoringJob.predict_instance_schema_uri]. If not set, the predict schema is generated from collected predict requests.
analysis_instance_schema_uri: String
YAML schema file URI describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze.
If this field is empty, all the feature data types are inferred from [predict_instance_schema_uri][google.cloud.aiplatform.v1.ModelDeploymentMonitoringJob.predict_instance_schema_uri], meaning that TFDV analyzes the data in exactly the format (data types) of the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, with all fields of the predict instance formatted as strings.
bigquery_tables: Vec<ModelDeploymentMonitoringBigQueryTable>
Output only. The BigQuery tables created for the job in the customer’s project. Customers can run their own queries and analysis on these tables. There can be at most four log tables:
- Training data logging predict request/response
- Serving data logging predict request/response
log_ttl: Option<Duration>
The TTL of the BigQuery tables in the user’s project that store the logs. A day is the basic unit of the TTL: the value is rounded up to whole days as ceil(TTL / 86400 seconds). For example, { seconds: 3600 } indicates a TTL of 1 day.
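A small sketch of the rounding rule above, assuming the Duration here is the prost well-known type (prost_types::Duration):

use prost_types::Duration;

// ceil(3600 / 86400) = 1, so an hour-long Duration still yields a 1-day TTL.
let log_ttl = Some(Duration { seconds: 3600, nanos: 0 });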
labels: BTreeMap<String, String>
The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
create_time: Option<Timestamp>
Output only. Timestamp when this ModelDeploymentMonitoringJob was created.
update_time: Option<Timestamp>
Output only. Timestamp when this ModelDeploymentMonitoringJob was updated most recently.
next_schedule_time: Option<Timestamp>
Output only. Timestamp when this monitoring pipeline will be scheduled to run for the next round.
stats_anomalies_base_directory: Option<GcsDestination>
Stats anomalies base folder path.
encryption_spec: Option<EncryptionSpec>
Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
enable_monitoring_pipeline_logs: bool
If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and detected anomalies. Note that these logs incur a cost, which is subject to Cloud Logging pricing.
error: Option<Status>
Output only. Only populated when the job’s state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
satisfies_pzs: bool
Output only. Reserved for future use.
satisfies_pzi: bool
Output only. Reserved for future use.
Implementations
impl ModelDeploymentMonitoringJob
pub fn state(&self) -> JobState
Returns the enum value of state, or the default if the field is set to an invalid enum value.
pub fn schedule_state(&self) -> MonitoringScheduleState
Returns the enum value of schedule_state, or the default if the field is set to an invalid enum value.
pub fn set_schedule_state(&mut self, value: MonitoringScheduleState)
Sets schedule_state to the provided enum value.
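A brief usage sketch of the accessors above; the module paths for JobState and the nested MonitoringScheduleState follow prost's usual layout and are assumptions here.

use google_api_proto::google::cloud::aiplatform::v1::{
    model_deployment_monitoring_job::MonitoringScheduleState, JobState,
    ModelDeploymentMonitoringJob,
};

let mut job = ModelDeploymentMonitoringJob::default();

// The raw i32 fields decode to their enum variants, falling back to the default
// variant for unrecognized values.
assert_eq!(job.state(), JobState::Unspecified);

// set_schedule_state stores the enum back into the raw i32 field.
job.set_schedule_state(MonitoringScheduleState::Pending);
assert_eq!(job.schedule_state(), MonitoringScheduleState::Pending);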
Trait Implementations
impl Clone for ModelDeploymentMonitoringJob
fn clone(&self) -> ModelDeploymentMonitoringJob
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for ModelDeploymentMonitoringJob
impl Message for ModelDeploymentMonitoringJob
fn encoded_len(&self) -> usize
fn encode(&self, buf: &mut impl BufMut) -> Result<(), EncodeError> where Self: Sized
fn encode_to_vec(&self) -> Vec<u8> where Self: Sized
fn encode_length_delimited(&self, buf: &mut impl BufMut) -> Result<(), EncodeError> where Self: Sized
fn encode_length_delimited_to_vec(&self) -> Vec<u8> where Self: Sized
fn decode(buf: impl Buf) -> Result<Self, DecodeError> where Self: Default
fn decode_length_delimited(buf: impl Buf) -> Result<Self, DecodeError> where Self: Default
fn merge(&mut self, buf: impl Buf) -> Result<(), DecodeError> where Self: Sized
Decodes an instance of the message from a buffer, and merges it into self.
fn merge_length_delimited(&mut self, buf: impl Buf) -> Result<(), DecodeError> where Self: Sized
Decodes a length-delimited instance of the message from buffer, and merges it into self.
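A round-trip sketch of the prost::Message methods listed above, assuming the prost crate is available alongside this generated type:

use google_api_proto::google::cloud::aiplatform::v1::ModelDeploymentMonitoringJob;
use prost::Message;

let job = ModelDeploymentMonitoringJob {
    display_name: "example".to_string(),
    ..Default::default()
};

// encode_to_vec serializes the message into a newly allocated buffer.
let bytes = job.encode_to_vec();

// decode parses the same wire format back into the struct.
let decoded = ModelDeploymentMonitoringJob::decode(bytes.as_slice()).expect("valid wire format");
assert_eq!(job, decoded);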
impl PartialEq for ModelDeploymentMonitoringJob
fn eq(&self, other: &ModelDeploymentMonitoringJob) -> bool
This method tests for self and other values to be equal, and is used by ==.
impl StructuralPartialEq for ModelDeploymentMonitoringJob
Auto Trait Implementations
impl Freeze for ModelDeploymentMonitoringJob
impl RefUnwindSafe for ModelDeploymentMonitoringJob
impl Send for ModelDeploymentMonitoringJob
impl Sync for ModelDeploymentMonitoringJob
impl Unpin for ModelDeploymentMonitoringJob
impl UnwindSafe for ModelDeploymentMonitoringJob
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wrap the input message T in a tonic::Request.
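A minimal sketch of the blanket IntoRequest conversion above; the gRPC client that would consume the request is omitted.

use google_api_proto::google::cloud::aiplatform::v1::ModelDeploymentMonitoringJob;
use tonic::IntoRequest;

let job = ModelDeploymentMonitoringJob::default();

// The blanket impl wraps the message in a tonic::Request with default metadata.
let request: tonic::Request<ModelDeploymentMonitoringJob> = job.into_request();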