pub struct ModelContainerSpec {
    pub image_uri: String,
    pub command: Vec<String>,
    pub args: Vec<String>,
    pub env: Vec<EnvVar>,
    pub ports: Vec<Port>,
    pub predict_route: String,
    pub health_route: String,
    pub grpc_ports: Vec<Port>,
    pub deployment_timeout: Option<Duration>,
    pub shared_memory_size_mb: i64,
    pub startup_probe: Option<Probe>,
    pub health_probe: Option<Probe>,
}

Specification of a container for serving predictions. Some fields in this message correspond to fields in the Kubernetes Container v1 core specification.

Fields

image_uri: String

Required. Immutable. URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry or Container Registry. Learn more about the container publishing requirements, including permissions requirements for the Vertex AI Service Agent.

The container image is ingested upon [ModelService.UploadModel][google.cloud.aiplatform.v1beta1.ModelService.UploadModel], stored internally, and this original path is afterwards not used.

To learn about the requirements for the Docker image itself, see Custom container requirements.

You can also use the URI of one of Vertex AI’s pre-built container images for prediction in this field.

command: Vec<String>

Immutable. Specifies the command that runs when the container starts. This overrides the container’s ENTRYPOINT. Specify this field as an array of executable and arguments, similar to a Docker ENTRYPOINT’s “exec” form, not its “shell” form.

If you do not specify this field, then the container’s ENTRYPOINT runs, in conjunction with the [args][google.cloud.aiplatform.v1beta1.ModelContainerSpec.args] field or the container’s CMD, if either exists. If this field is not specified and the container does not have an ENTRYPOINT, then refer to the Docker documentation about how CMD and ENTRYPOINT interact.

If you specify this field, then you can also specify the args field to provide additional arguments for this command. However, if you specify this field, then the container’s CMD is ignored. See the Kubernetes documentation about how the command and args fields interact with a container’s ENTRYPOINT and CMD.

In this field, you can reference environment variables set by Vertex AI and environment variables set in the [env][google.cloud.aiplatform.v1beta1.ModelContainerSpec.env] field. You cannot reference environment variables set in the Docker image. For an environment variable to be expanded, reference it using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, escape this syntax with $$; for example: $$(VARIABLE_NAME).

This field corresponds to the command field of the Kubernetes Containers v1 core API.

args: Vec<String>

Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container’s CMD. Specify this field as an array of executable and arguments, similar to a Docker CMD’s “default parameters” form.

If you don’t specify this field but do specify the [command][google.cloud.aiplatform.v1beta1.ModelContainerSpec.command] field, then the command from the command field runs without any additional arguments. See the Kubernetes documentation about how the command and args fields interact with a container’s ENTRYPOINT and CMD.

If you don’t specify this field and don’t specify the command field, then the container’s ENTRYPOINT and CMD determine what runs based on their default behavior. See the Docker documentation about how CMD and ENTRYPOINT interact.

In this field, you can reference environment variables set by Vertex AI and environment variables set in the [env][google.cloud.aiplatform.v1beta1.ModelContainerSpec.env] field. You cannot reference environment variables set in the Docker image. For an environment variable to be expanded, reference it using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, escape this syntax with $$; for example: $$(VARIABLE_NAME).

This field corresponds to the args field of the Kubernetes Containers v1 core API.
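For instance, the exec-form convention maps directly onto these Vec<String> fields. The following is a minimal sketch; the executable name and flag are illustrative, and $(VAR_1) assumes a variable of that name is defined in the env field below:

let command = vec!["python3".to_string(), "serve.py".to_string()];
let args = vec![
    // Expanded by Vertex AI at startup, assuming VAR_1 is set in `env`.
    "--prefix=$(VAR_1)".to_string(),
    // $$ escapes expansion: the container receives the literal string "$(HOME)".
    "--home=$$(HOME)".to_string(),
];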

env: Vec<EnvVar>

Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables.

Additionally, the [command][google.cloud.aiplatform.v1beta1.ModelContainerSpec.command] and [args][google.cloud.aiplatform.v1beta1.ModelContainerSpec.args] fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following sets the variable VAR_2 to the value foo bar:

[
   {
     "name": "VAR_1",
     "value": "foo"
   },
   {
     "name": "VAR_2",
     "value": "$(VAR_1) bar"
   }
]

If you switch the order of the variables in the example, then the expansion does not occur.

This field corresponds to the env field of the Kubernetes Containers v1 core API.
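As a sketch, the JSON example above corresponds to the following Rust construction, assuming EnvVar exposes name and value String fields as the JSON form suggests:

let env = vec![
    EnvVar { name: "VAR_1".to_string(), value: "foo".to_string() },
    // References the earlier entry, so VAR_2 expands to "foo bar".
    EnvVar { name: "VAR_2".to_string(), value: "$(VAR_1) bar".to_string() },
];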

ports: Vec<Port>

Immutable. List of ports to expose from the container. Vertex AI sends any prediction requests that it receives to the first port on this list. Vertex AI also sends liveness and health checks to this port.

If you do not specify this field, it defaults to the following value:

[
   {
     "containerPort": 8080
   }
]

Vertex AI does not use ports other than the first one listed. This field corresponds to the ports field of the Kubernetes Containers v1 core API.
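The default above translates to the following sketch in Rust, assuming Port carries a container_port: i32 field (the snake_case rendering of containerPort):

let ports = vec![Port { container_port: 8080 }];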

predict_route: String

Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using [projects.locations.endpoints.predict][google.cloud.aiplatform.v1beta1.PredictionService.Predict] to this path on the container’s IP address and port. Vertex AI then returns the container’s response in the API response.

For example, if you set this field to /foo, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the /foo path on the port of your container specified by the first value of this ModelContainerSpec’s [ports][google.cloud.aiplatform.v1beta1.ModelContainerSpec.ports] field.

If you don’t specify this field, it defaults to the following value when you [deploy this Model to an Endpoint][google.cloud.aiplatform.v1beta1.EndpointService.DeployModel]:

/v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict

The placeholders in this value are replaced as follows:

  • ENDPOINT: The last segment (following endpoints/) of the [Endpoint.name][google.cloud.aiplatform.v1beta1.Endpoint.name] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the AIP_ENDPOINT_ID environment variable.)

  • DEPLOYED_MODEL: [DeployedModel.id][google.cloud.aiplatform.v1beta1.DeployedModel.id] of the DeployedModel. (Vertex AI makes this value available to your container code as the AIP_DEPLOYED_MODEL_ID environment variable.)
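Since both placeholders are also surfaced as environment variables, code running inside the serving container can read them at startup. A minimal sketch:

use std::env;

// AIP_ENDPOINT_ID and AIP_DEPLOYED_MODEL_ID are set by Vertex AI, as noted above.
fn deployment_identity() -> (Option<String>, Option<String>) {
    (env::var("AIP_ENDPOINT_ID").ok(), env::var("AIP_DEPLOYED_MODEL_ID").ok())
}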

health_route: String

Immutable. HTTP path on the container to send health checks to. Vertex AI intermittently sends GET requests to this path on the container’s IP address and port to check that the container is healthy. Read more about health checks.

For example, if you set this field to /bar, then Vertex AI intermittently sends a GET request to the /bar path on the port of your container specified by the first value of this ModelContainerSpec’s [ports][google.cloud.aiplatform.v1beta1.ModelContainerSpec.ports] field.

If you don’t specify this field, it defaults to the following value when you [deploy this Model to an Endpoint][google.cloud.aiplatform.v1beta1.EndpointService.DeployModel]:

/v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict

The placeholders in this value are replaced as follows:

  • ENDPOINT: The last segment (following endpoints/) of the [Endpoint.name][google.cloud.aiplatform.v1beta1.Endpoint.name] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the AIP_ENDPOINT_ID environment variable.)

  • DEPLOYED_MODEL: [DeployedModel.id][google.cloud.aiplatform.v1beta1.DeployedModel.id] of the DeployedModel. (Vertex AI makes this value available to your container code as the AIP_DEPLOYED_MODEL_ID environment variable.)

grpc_ports: Vec<Port>

Immutable. List of ports to expose from the container. Vertex AI sends gRPC prediction requests that it receives to the first port on this list. Vertex AI also sends liveness and health checks to this port.

If you do not specify this field, gRPC requests to the container will be disabled.

Vertex AI does not use ports other than the first one listed. This field corresponds to the ports field of the Kubernetes Containers v1 core API.

deployment_timeout: Option<Duration>

Immutable. Deployment timeout. The limit for the deployment timeout is 2 hours.
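For example, a 30-minute timeout could be expressed as follows, assuming Duration here is the well-known protobuf Duration type (seconds plus nanos) re-exported by the generated code:

// 30 minutes, well under the documented 2-hour limit.
let deployment_timeout = Some(Duration { seconds: 30 * 60, nanos: 0 });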

shared_memory_size_mb: i64

Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes.

startup_probe: Option<Probe>

Immutable. Specification for Kubernetes startup probe.

health_probe: Option<Probe>

Immutable. Specification for Kubernetes readiness probe.
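Putting the fields together, the following is a hedged sketch of a complete specification. The image URI, routes, and values are placeholders, the EnvVar and Port field names are assumed as in the earlier examples, and the remaining fields fall back to Default (implemented below):

let spec = ModelContainerSpec {
    image_uri: "us-docker.pkg.dev/PROJECT/REPO/my-serving-image:latest".to_string(),
    command: vec!["python3".to_string(), "serve.py".to_string()],
    args: vec!["--model-dir=$(MODEL_DIR)".to_string()],
    env: vec![EnvVar { name: "MODEL_DIR".to_string(), value: "/models/demo".to_string() }],
    ports: vec![Port { container_port: 8080 }],
    predict_route: "/predict".to_string(),
    health_route: "/health".to_string(),
    shared_memory_size_mb: 256,
    // grpc_ports, deployment_timeout, and the probes keep their default values.
    ..Default::default()
};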

Trait Implementations

impl Clone for ModelContainerSpec

fn clone(&self) -> ModelContainerSpec

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for ModelContainerSpec

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl Default for ModelContainerSpec

fn default() -> Self

Returns the “default value” for a type.

impl Message for ModelContainerSpec

fn encoded_len(&self) -> usize

Returns the encoded length of the message without a length delimiter.

fn clear(&mut self)

Clears the message, resetting all fields to their default.

fn encode<B>(&self, buf: &mut B) -> Result<(), EncodeError>
where B: BufMut, Self: Sized

Encodes the message to a buffer.

fn encode_to_vec(&self) -> Vec<u8>
where Self: Sized

Encodes the message to a newly allocated buffer.

fn encode_length_delimited<B>(&self, buf: &mut B) -> Result<(), EncodeError>
where B: BufMut, Self: Sized

Encodes the message with a length-delimiter to a buffer.

fn encode_length_delimited_to_vec(&self) -> Vec<u8>
where Self: Sized

Encodes the message with a length-delimiter to a newly allocated buffer.

fn decode<B>(buf: B) -> Result<Self, DecodeError>
where B: Buf, Self: Default

Decodes an instance of the message from a buffer.

fn decode_length_delimited<B>(buf: B) -> Result<Self, DecodeError>
where B: Buf, Self: Default

Decodes a length-delimited instance of the message from the buffer.

fn merge<B>(&mut self, buf: B) -> Result<(), DecodeError>
where B: Buf, Self: Sized

Decodes an instance of the message from a buffer, and merges it into self.

fn merge_length_delimited<B>(&mut self, buf: B) -> Result<(), DecodeError>
where B: Buf, Self: Sized

Decodes a length-delimited instance of the message from the buffer, and merges it into self.
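As a usage sketch of the prost Message methods listed above (assuming prost is the codegen backend, as the Buf/BufMut bounds suggest), a spec can be round-tripped through its wire encoding; the image URI is a placeholder:

use prost::Message;

let spec = ModelContainerSpec {
    image_uri: "gcr.io/PROJECT/my-image:latest".to_string(),
    ..Default::default()
};
let bytes = spec.encode_to_vec();
let decoded = ModelContainerSpec::decode(bytes.as_slice()).expect("valid wire format");
assert_eq!(spec, decoded); // PartialEq, implemented below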
impl PartialEq for ModelContainerSpec

fn eq(&self, other: &ModelContainerSpec) -> bool

This method tests for self and other values to be equal, and is used by ==.

fn ne(&self, other: &Rhs) -> bool

This method tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.

impl StructuralPartialEq for ModelContainerSpec
Auto Trait Implementations

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> FromRef<T> for T
where T: Clone

fn from_ref(input: &T) -> T

Converts to this type from a reference to the input type.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T
where U: From<T>

fn into(self) -> U

Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> IntoRequest<T> for T

fn into_request(self) -> Request<T>

Wraps the input message T in a tonic::Request.

impl<T> ToOwned for T
where T: Clone

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<V, T> VZip<V> for T
where V: MultiLane<T>

fn vzip(self) -> V

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.