Struct googapis::google::cloud::aiplatform::v1::ModelDeploymentMonitoringJob
pub struct ModelDeploymentMonitoringJob {
pub name: String,
pub display_name: String,
pub endpoint: String,
pub state: i32,
pub schedule_state: i32,
pub model_deployment_monitoring_objective_configs: Vec<ModelDeploymentMonitoringObjectiveConfig>,
pub model_deployment_monitoring_schedule_config: Option<ModelDeploymentMonitoringScheduleConfig>,
pub logging_sampling_strategy: Option<SamplingStrategy>,
pub model_monitoring_alert_config: Option<ModelMonitoringAlertConfig>,
pub predict_instance_schema_uri: String,
pub sample_predict_instance: Option<Value>,
pub analysis_instance_schema_uri: String,
pub bigquery_tables: Vec<ModelDeploymentMonitoringBigQueryTable>,
pub log_ttl: Option<Duration>,
pub labels: HashMap<String, String>,
pub create_time: Option<Timestamp>,
pub update_time: Option<Timestamp>,
pub next_schedule_time: Option<Timestamp>,
pub stats_anomalies_base_directory: Option<GcsDestination>,
pub encryption_spec: Option<EncryptionSpec>,
pub enable_monitoring_pipeline_logs: bool,
pub error: Option<Status>,
}
Description
Represents a job that runs periodically to monitor the models deployed in an endpoint. It analyzes the logged training and prediction data to detect abnormal behavior.
Fields
name: String
Output only. Resource name of a ModelDeploymentMonitoringJob.
display_name: String
Required. The user-defined display name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
endpoint: String
Required. Endpoint resource name.
Format: projects/{project}/locations/{location}/endpoints/{endpoint}
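The resource-name format above can be assembled with a small helper. This is an illustrative sketch, not part of the crate; the function name is hypothetical, and the format string is taken verbatim from the field documentation.

```rust
// Hypothetical helper: build the `endpoint` resource name from its parts,
// following the documented format:
//   projects/{project}/locations/{location}/endpoints/{endpoint}
fn endpoint_resource_name(project: &str, location: &str, endpoint: &str) -> String {
    format!("projects/{project}/locations/{location}/endpoints/{endpoint}")
}

fn main() {
    let name = endpoint_resource_name("my-project", "us-central1", "1234567890");
    println!("{name}");
}
```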
state: i32
Output only. The detailed state of the monitoring job. While the job is being created, the state is ‘PENDING’. Once the job is successfully created, the state becomes ‘RUNNING’. Pausing the job sets the state to ‘PAUSED’; resuming it returns the state to ‘RUNNING’.
schedule_state: i32
Output only. The schedule state while the monitoring job is in the Running state.
model_deployment_monitoring_objective_configs: Vec<ModelDeploymentMonitoringObjectiveConfig>
Required. The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
model_deployment_monitoring_schedule_config: Option<ModelDeploymentMonitoringScheduleConfig>
Required. Schedule config for running the monitoring job.
logging_sampling_strategy: Option<SamplingStrategy>
Required. Sample Strategy for logging.
model_monitoring_alert_config: Option<ModelMonitoringAlertConfig>
Alert config for model monitoring.
predict_instance_schema_uri: String
URI of a YAML schema file describing the format of a single instance, as given in this Endpoint’s prediction (and explanation) requests. If not set, the predict schema is generated from the collected predict requests.
sample_predict_instance: Option<Value>
Sample Predict instance, in the same format as [PredictRequest.instances][google.cloud.aiplatform.v1.PredictRequest.instances]; this can be set in place of [ModelDeploymentMonitoringJob.predict_instance_schema_uri][google.cloud.aiplatform.v1.ModelDeploymentMonitoringJob.predict_instance_schema_uri]. If not set, the predict schema is generated from the collected predict requests.
analysis_instance_schema_uri: String
URI of a YAML schema file describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze.
If this field is empty, all feature data types are inferred from [predict_instance_schema_uri][google.cloud.aiplatform.v1.ModelDeploymentMonitoringJob.predict_instance_schema_uri], meaning that TFDV uses the data in the exact format (data type) of the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, with all fields of the predict instance formatted as strings.
bigquery_tables: Vec<ModelDeploymentMonitoringBigQueryTable>
Output only. The BigQuery tables created for the job under the customer project. Customers can run their own queries and analysis on them. There can be at most four log tables:
- Training data logging predict request/response
- Serving data logging predict request/response
log_ttl: Option<Duration>
The TTL of the BigQuery tables in user projects that store the logs. A day is the basic unit of the TTL: the ceiling of TTL/86400 (seconds per day) is taken. For example, { seconds: 3600 } indicates a TTL of 1 day.
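The rounding rule above is plain ceiling division. A minimal std-only sketch (function and constant names are illustrative, not from the crate):

```rust
// Sketch of the documented TTL rounding: the TTL is expressed in whole days,
// taking the ceiling of seconds / 86_400.
const SECONDS_PER_DAY: i64 = 86_400;

fn ttl_in_days(ttl_seconds: i64) -> i64 {
    // Ceiling division: any partial day counts as a full day.
    (ttl_seconds + SECONDS_PER_DAY - 1) / SECONDS_PER_DAY
}

fn main() {
    // Per the docs, { seconds: 3600 } (one hour) rounds up to a 1-day TTL.
    println!("{}", ttl_in_days(3_600));  // 1
    println!("{}", ttl_in_days(86_401)); // 2
}
```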
labels: HashMap<String, String>
The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
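The label constraints can be checked client-side before submitting a job. A hedged std-only sketch (the function name is hypothetical, and the treatment of caseless international characters is an assumption based on the "International characters are allowed" note):

```rust
// Hypothetical validator for the documented label constraints: at most
// 64 Unicode code points; only lowercase letters, numeric characters,
// underscores and dashes. Caseless international letters are accepted
// here on the assumption that "lowercase" excludes only uppercase forms.
fn is_valid_label_part(s: &str) -> bool {
    s.chars().count() <= 64
        && s.chars().all(|c| {
            (c.is_alphabetic() && !c.is_uppercase()) || c.is_numeric() || c == '_' || c == '-'
        })
}

fn main() {
    println!("{}", is_valid_label_part("env-prod_1")); // true
}
```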
create_time: Option<Timestamp>
Output only. Timestamp when this ModelDeploymentMonitoringJob was created.
update_time: Option<Timestamp>
Output only. Timestamp when this ModelDeploymentMonitoringJob was most recently updated.
next_schedule_time: Option<Timestamp>
Output only. Timestamp when this monitoring pipeline is next scheduled to run.
stats_anomalies_base_directory: Option<GcsDestination>
Stats anomalies base folder path.
encryption_spec: Option<EncryptionSpec>
Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
enable_monitoring_pipeline_logs: bool
If true, the scheduled monitoring pipeline status logs are sent to Google Cloud Logging. Note that these logs incur costs, which are subject to Cloud Logging pricing.
error: Option<Status>
Output only. Only populated when the job’s state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
Implementations
Returns the enum value of state, or the default if the field is set to an invalid enum value.
Returns the enum value of schedule_state, or the default if the field is set to an invalid enum value.
Sets schedule_state to the provided enum value.
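These accessors exist because prost stores protobuf enum fields as raw i32 values. A minimal std-only stand-in for the generated pattern (the JobState variants and tag numbers below are illustrative, not the authoritative Vertex AI proto values):

```rust
// Minimal stand-in for the prost-generated accessor pattern used by
// `state()` / `schedule_state()`: the raw field is an i32, and the
// accessor falls back to the default variant on an unrecognized value.
#[derive(Clone, Copy, Debug, PartialEq, Default)]
enum JobState {
    #[default]
    Unspecified = 0,
    Pending = 1,
    Running = 2,
    Paused = 8, // illustrative tag values, not the authoritative proto numbers
}

impl JobState {
    fn from_i32(value: i32) -> Option<JobState> {
        match value {
            0 => Some(JobState::Unspecified),
            1 => Some(JobState::Pending),
            2 => Some(JobState::Running),
            8 => Some(JobState::Paused),
            _ => None,
        }
    }
}

struct Job {
    state: i32,
}

impl Job {
    // Mirrors the documented behavior: default on invalid enum value.
    fn state(&self) -> JobState {
        JobState::from_i32(self.state).unwrap_or_default()
    }
}

fn main() {
    let job = Job { state: 2 };
    println!("{:?}", job.state()); // Running
}
```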
Trait Implementations
fn merge_field<B>(
&mut self,
tag: u32,
wire_type: WireType,
buf: &mut B,
ctx: DecodeContext
) -> Result<(), DecodeError> where
B: Buf,
Returns the encoded length of the message without a length delimiter.
Encodes the message to a buffer. Read more
Encodes the message to a newly allocated buffer.
Encodes the message with a length-delimiter to a buffer. Read more
Encodes the message with a length-delimiter to a newly allocated buffer.
Decodes an instance of the message from a buffer. Read more
fn decode_length_delimited<B>(buf: B) -> Result<Self, DecodeError> where
Self: Default,
B: Buf,
Decodes a length-delimited instance of the message from the buffer.
Decodes an instance of the message from a buffer, and merges it into self. Read more
Decodes a length-delimited instance of the message from a buffer, and merges it into self. Read more
This method tests for self and other values to be equal, and is used by ==. Read more
This method tests for !=.
Auto Trait Implementations
impl Send for ModelDeploymentMonitoringJob
impl Sync for ModelDeploymentMonitoringJob
impl Unpin for ModelDeploymentMonitoringJob
impl UnwindSafe for ModelDeploymentMonitoringJob
Blanket Implementations
Mutably borrows from an owned value. Read more
Wraps the input message T in a tonic::Request
pub fn vzip(self) -> V
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more