Struct google_api_proto::google::cloud::aiplatform::v1::ModelContainerSpec
pub struct ModelContainerSpec {
    pub image_uri: String,
    pub command: Vec<String>,
    pub args: Vec<String>,
    pub env: Vec<EnvVar>,
    pub ports: Vec<Port>,
    pub predict_route: String,
    pub health_route: String,
    pub grpc_ports: Vec<Port>,
    pub deployment_timeout: Option<Duration>,
    pub shared_memory_size_mb: i64,
    pub startup_probe: Option<Probe>,
    pub health_probe: Option<Probe>,
}
Specification of a container for serving predictions. Some fields in this message correspond to fields in the Kubernetes Container v1 core specification.
Fields
image_uri: String
Required. Immutable. URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry or Container Registry. Learn more about the container publishing requirements, including permissions requirements for the Vertex AI Service Agent.
The container image is ingested upon [ModelService.UploadModel][google.cloud.aiplatform.v1.ModelService.UploadModel], stored internally, and this original path is afterwards not used.
To learn about the requirements for the Docker image itself, see Custom container requirements.
You can use the URI to one of Vertex AI’s pre-built container images for prediction in this field.
command: Vec<String>
Immutable. Specifies the command that runs when the container starts. This overrides the container’s ENTRYPOINT. Specify this field as an array of executable and arguments, similar to a Docker ENTRYPOINT’s “exec” form, not its “shell” form.
If you do not specify this field, then the container’s ENTRYPOINT runs, in conjunction with the [args][google.cloud.aiplatform.v1.ModelContainerSpec.args] field or the container’s CMD, if either exists. If this field is not specified and the container does not have an ENTRYPOINT, then refer to the Docker documentation about how CMD and ENTRYPOINT interact.
If you specify this field, then you can also specify the args field to provide additional arguments for this command. However, if you specify this field, then the container’s CMD is ignored. See the Kubernetes documentation about how the command and args fields interact with a container’s ENTRYPOINT and CMD.
In this field, you can reference environment variables set by Vertex AI and environment variables set in the [env][google.cloud.aiplatform.v1.ModelContainerSpec.env] field. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME).
This field corresponds to the command field of the Kubernetes Containers v1 core API.
args: Vec<String>
Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container’s CMD. Specify this field as an array of executable and arguments, similar to a Docker CMD’s “default parameters” form.
If you don’t specify this field but do specify the [command][google.cloud.aiplatform.v1.ModelContainerSpec.command] field, then the command from the command field runs without any additional arguments. See the Kubernetes documentation about how the command and args fields interact with a container’s ENTRYPOINT and CMD.
If you don’t specify this field and don’t specify the command field, then the container’s ENTRYPOINT and CMD determine what runs based on their default behavior. See the Docker documentation about how CMD and ENTRYPOINT interact.
In this field, you can reference environment variables set by Vertex AI and environment variables set in the [env][google.cloud.aiplatform.v1.ModelContainerSpec.env] field. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME).
This field corresponds to the args field of the Kubernetes Containers v1 core API.
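The precedence described above (command replaces ENTRYPOINT, args replaces CMD, and args is appended after command when both are set) can be sketched with a hypothetical stand-in type; the real struct is the `ModelContainerSpec` documented on this page, and the field values below are illustrative only:

```rust
// Hypothetical stand-in for the `command`/`args` fields of the generated
// struct, just to show how the two exec-form lists compose into the argv
// that ultimately runs in the container.
struct ContainerInvocation {
    command: Vec<String>, // overrides the image's ENTRYPOINT (exec form)
    args: Vec<String>,    // overrides the image's CMD (default parameters)
}

impl ContainerInvocation {
    /// When `command` is set, the effective argv is `command` followed by
    /// `args`; the image's own CMD is ignored in that case.
    fn effective_argv(&self) -> Vec<String> {
        self.command.iter().chain(self.args.iter()).cloned().collect()
    }
}

fn main() {
    let spec = ContainerInvocation {
        command: vec!["python3".into(), "server.py".into()],
        args: vec!["--port".into(), "8080".into()],
    };
    // Prints ["python3", "server.py", "--port", "8080"]
    println!("{:?}", spec.effective_argv());
}
```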
env: Vec<EnvVar>
Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables.
Additionally, the [command][google.cloud.aiplatform.v1.ModelContainerSpec.command] and [args][google.cloud.aiplatform.v1.ModelContainerSpec.args] fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following example sets the variable VAR_2 to have the value foo bar:
[
  {
    "name": "VAR_1",
    "value": "foo"
  },
  {
    "name": "VAR_2",
    "value": "$(VAR_1) bar"
  }
]
If you switch the order of the variables in the example, then the expansion does not occur.
This field corresponds to the env field of the Kubernetes Containers v1 core API.
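The expansion rules above (later entries may reference earlier ones via $(VARIABLE_NAME), $$ escapes the syntax, and unresolvable references pass through unchanged) can be sketched in plain Rust. This is an illustration of the documented behavior, not the service's actual implementation:

```rust
use std::collections::HashMap;

// Expand `$(NAME)` references in `value` against already-resolved variables.
// `$$(NAME)` is emitted literally as `$(NAME)`; an unknown `$(NAME)` is kept
// unchanged, matching the behavior documented for the env field.
fn expand(value: &str, resolved: &HashMap<String, String>) -> String {
    let mut out = String::new();
    let mut rest = value;
    while let Some(i) = rest.find('$') {
        out.push_str(&rest[..i]);
        rest = &rest[i..];
        if let Some(stripped) = rest.strip_prefix("$$(") {
            // Escaped form: `$$(` becomes a literal `$(`.
            out.push_str("$(");
            rest = stripped;
        } else if let Some(stripped) = rest.strip_prefix("$(") {
            if let Some(end) = stripped.find(')') {
                let name = &stripped[..end];
                match resolved.get(name) {
                    Some(v) => out.push_str(v),
                    // Unresolvable reference: keep it unchanged.
                    None => {
                        out.push_str("$(");
                        out.push_str(name);
                        out.push(')');
                    }
                }
                rest = &stripped[end + 1..];
            } else {
                out.push('$');
                rest = &rest[1..];
            }
        } else {
            out.push('$');
            rest = &rest[1..];
        }
    }
    out.push_str(rest);
    out
}

fn main() {
    let mut resolved = HashMap::new();
    // Entries are processed in order, so VAR_2 can reference VAR_1.
    for (name, value) in [("VAR_1", "foo"), ("VAR_2", "$(VAR_1) bar")] {
        let expanded = expand(value, &resolved);
        resolved.insert(name.to_string(), expanded);
    }
    println!("{}", resolved["VAR_2"]); // foo bar
}
```

Reversing the two entries leaves VAR_2 as the literal "$(VAR_1) bar", since VAR_1 is not yet resolved when VAR_2 is processed.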
ports: Vec<Port>
Immutable. List of ports to expose from the container. Vertex AI sends any prediction requests that it receives to the first port on this list. Vertex AI also sends liveness and health checks to this port.
If you do not specify this field, it defaults to the following value:
[
  {
    "containerPort": 8080
  }
]
Vertex AI does not use ports other than the first one listed. This field corresponds to the ports field of the Kubernetes Containers v1 core API.
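The defaulting rule above (only the first listed port is used, and an empty list behaves as if port 8080 were specified) can be sketched with a hypothetical stand-in for the generated Port message:

```rust
// Hypothetical stand-in for the generated `Port` message.
#[derive(Debug, Clone, PartialEq)]
struct Port {
    container_port: i32,
}

// Vertex AI only uses the first listed port; the documented default when no
// ports are specified is 8080.
fn serving_port(ports: &[Port]) -> i32 {
    ports.first().map(|p| p.container_port).unwrap_or(8080)
}

fn main() {
    println!("{}", serving_port(&[])); // 8080
    let ports = vec![Port { container_port: 7080 }, Port { container_port: 9000 }];
    println!("{}", serving_port(&ports)); // 7080; the second port is ignored
}
```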
predict_route: String
Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using [projects.locations.endpoints.predict][google.cloud.aiplatform.v1.PredictionService.Predict] to this path on the container’s IP address and port. Vertex AI then returns the container’s response in the API response.
For example, if you set this field to /foo, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the /foo path on the port of your container specified by the first value of this ModelContainerSpec’s [ports][google.cloud.aiplatform.v1.ModelContainerSpec.ports] field.
If you don’t specify this field, it defaults to the following value when you [deploy this Model to an Endpoint][google.cloud.aiplatform.v1.EndpointService.DeployModel]:
/v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict
The placeholders in this value are replaced as follows:
- ENDPOINT: The last segment (following endpoints/) of the [Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the AIP_ENDPOINT_ID environment variable.)
- DEPLOYED_MODEL: [DeployedModel.id][google.cloud.aiplatform.v1.DeployedModel.id] of the DeployedModel. (Vertex AI makes this value available to your container code as the AIP_DEPLOYED_MODEL_ID environment variable.)
health_route: String
Immutable. HTTP path on the container to send health checks to. Vertex AI intermittently sends GET requests to this path on the container’s IP address and port to check that the container is healthy. Read more about health checks.
For example, if you set this field to /bar, then Vertex AI intermittently sends a GET request to the /bar path on the port of your container specified by the first value of this ModelContainerSpec’s [ports][google.cloud.aiplatform.v1.ModelContainerSpec.ports] field.
If you don’t specify this field, it defaults to the following value when you [deploy this Model to an Endpoint][google.cloud.aiplatform.v1.EndpointService.DeployModel]:
/v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict
The placeholders in this value are replaced as follows:
- ENDPOINT: The last segment (following endpoints/) of the [Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the AIP_ENDPOINT_ID environment variable.)
- DEPLOYED_MODEL: [DeployedModel.id][google.cloud.aiplatform.v1.DeployedModel.id] of the DeployedModel. (Vertex AI makes this value available to your container code as the AIP_DEPLOYED_MODEL_ID environment variable.)
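How the documented default route is assembled from its two placeholders can be sketched in a few lines; the IDs themselves reach your container code as the AIP_ENDPOINT_ID and AIP_DEPLOYED_MODEL_ID environment variables, and the values used below are illustrative:

```rust
// Build the documented default route from the two placeholder values.
fn default_route(endpoint_id: &str, deployed_model_id: &str) -> String {
    format!("/v1/endpoints/{endpoint_id}/deployedModels/{deployed_model_id}:predict")
}

fn main() {
    // Prints /v1/endpoints/123/deployedModels/456:predict
    println!("{}", default_route("123", "456"));
}
```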
grpc_ports: Vec<Port>
Immutable. List of ports to expose from the container. Vertex AI sends gRPC prediction requests that it receives to the first port on this list. Vertex AI also sends liveness and health checks to this port.
If you do not specify this field, gRPC requests to the container will be disabled.
Vertex AI does not use ports other than the first one listed. This field corresponds to the ports field of the Kubernetes Containers v1 core API.
deployment_timeout: Option<Duration>
Immutable. Deployment timeout. Limit for deployment timeout is 2 hours.
shared_memory_size_mb: i64
Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes.
startup_probe: Option<Probe>
Immutable. Specification for Kubernetes startup probe.
health_probe: Option<Probe>
Immutable. Specification for Kubernetes readiness probe.
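Since the struct implements Default (see the trait list below), a spec is typically built by setting only the fields you need and filling the rest with struct-update syntax. The sketch below uses a hypothetical stand-in carrying a subset of the fields, so the pattern can be shown self-contained; with the real crate you would import `google_api_proto::google::cloud::aiplatform::v1::ModelContainerSpec` instead, and the image URI shown is an illustrative placeholder:

```rust
// Hypothetical stand-in with a subset of the documented fields, to show the
// `..Default::default()` construction pattern the real generated struct
// also supports.
#[derive(Debug, Default, PartialEq)]
struct ModelContainerSpec {
    image_uri: String,
    command: Vec<String>,
    args: Vec<String>,
    predict_route: String,
    health_route: String,
    shared_memory_size_mb: i64,
}

fn main() {
    let spec = ModelContainerSpec {
        image_uri: "us-docker.pkg.dev/my-project/my-repo/my-image:latest".into(),
        predict_route: "/predict".into(),
        health_route: "/health".into(),
        // All remaining fields take their proto defaults (empty / zero).
        ..Default::default()
    };
    println!("{:?}", spec);
}
```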
Trait Implementations
impl Clone for ModelContainerSpec
    fn clone(&self) -> ModelContainerSpec
    fn clone_from(&mut self, source: &Self)
impl Debug for ModelContainerSpec
impl Default for ModelContainerSpec
impl Message for ModelContainerSpec
    fn encoded_len(&self) -> usize
    fn encode(&self, buf: &mut impl BufMut) -> Result<(), EncodeError> where Self: Sized
    fn encode_to_vec(&self) -> Vec<u8> where Self: Sized
    fn encode_length_delimited(&self, buf: &mut impl BufMut) -> Result<(), EncodeError> where Self: Sized
    fn encode_length_delimited_to_vec(&self) -> Vec<u8> where Self: Sized
    fn decode(buf: impl Buf) -> Result<Self, DecodeError> where Self: Default
    fn decode_length_delimited(buf: impl Buf) -> Result<Self, DecodeError> where Self: Default
    fn merge(&mut self, buf: impl Buf) -> Result<(), DecodeError> where Self: Sized
    fn merge_length_delimited(&mut self, buf: impl Buf) -> Result<(), DecodeError> where Self: Sized
impl PartialEq for ModelContainerSpec
    fn eq(&self, other: &ModelContainerSpec) -> bool
        Tests for self and other values to be equal, and is used by ==.
impl StructuralPartialEq for ModelContainerSpec
Auto Trait Implementations
impl Freeze for ModelContainerSpec
impl RefUnwindSafe for ModelContainerSpec
impl Send for ModelContainerSpec
impl Sync for ModelContainerSpec
impl Unpin for ModelContainerSpec
impl UnwindSafe for ModelContainerSpec
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
    fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T
    fn instrument(self, span: Span) -> Instrumented<Self>
    fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
    fn into_request(self) -> Request<T>
        Wraps the input message T in a tonic::Request.