Struct googapis::google::cloud::aiplatform::v1::schema::trainingjob::definition::AutoMlImageClassificationInputs
pub struct AutoMlImageClassificationInputs {
pub model_type: i32,
pub base_model_id: String,
pub budget_milli_node_hours: i64,
pub disable_early_stopping: bool,
pub multi_label: bool,
}
Fields
model_type: i32
base_model_id: String
The ID of the base model. If it is specified, the new model will be trained based on the base model. Otherwise, the new model will be trained from scratch. The base model must be in the same Project and Location as the new Model to train, and have the same modelType.
budget_milli_node_hours: i64
The training budget for creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The actual metadata.costMilliNodeHours will be equal to or less than this value. If further model training ceases to provide any improvements, training will stop without using the full budget, and metadata.successfulStopReason will be model-converged.
Note: node_hour = actual_hour * number_of_nodes_involved.
For modelType cloud (default), the budget must be between 8,000 and 800,000 milli node hours, inclusive. The default value is 192,000, which represents one day in wall time, assuming 8 nodes are used.
For model types mobile-tf-low-latency-1, mobile-tf-versatile-1, and mobile-tf-high-accuracy-1, the training budget must be between 1,000 and 100,000 milli node hours, inclusive. The default value is 24,000, which represents one day in wall time on the single node that is used.
disable_early_stopping: bool
Use the entire training budget. This disables the early stopping feature. When false, the early stopping feature is enabled, which means that AutoML Image Classification might stop training before the entire training budget has been used.
multi_label: bool
If false, a single-label (multi-class) Model will be trained (i.e. assuming that for each image just up to one annotation may be applicable). If true, a multi-label Model will be trained (i.e. assuming that for each image multiple annotations may be applicable).
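To make the field semantics concrete, here is a minimal sketch of populating these inputs with the documented cloud defaults. The struct is redeclared locally so the snippet compiles on its own; in real code you would import it from googapis instead, and the `model_type: 1` wire value is an assumption for illustration, not confirmed by this page.

```rust
// Local redeclaration of the struct shown above, so this sketch is
// self-contained (no googapis dependency).
#[derive(Debug, Default)]
pub struct AutoMlImageClassificationInputs {
    pub model_type: i32,
    pub base_model_id: String,
    pub budget_milli_node_hours: i64,
    pub disable_early_stopping: bool,
    pub multi_label: bool,
}

fn main() {
    let inputs = AutoMlImageClassificationInputs {
        model_type: 1,                // assumed wire value for the cloud model type
        base_model_id: String::new(), // empty: train from scratch
        // Documented default for modelType cloud: one day of wall time
        // across 8 nodes (24 h * 8 nodes * 1,000 = 192,000).
        budget_milli_node_hours: 192_000,
        disable_early_stopping: false, // early stopping stays enabled
        multi_label: false,            // single-label (multi-class) model
    };

    // The documented inclusive budget range for the cloud model type.
    assert!((8_000..=800_000).contains(&inputs.budget_milli_node_hours));
    println!("budget: {} milli node hours", inputs.budget_milli_node_hours);
}
```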
Implementations
Returns the enum value of model_type, or the default if the field is set to an invalid enum value.
Sets model_type to the provided enum value.
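The accessor pair above follows the usual prost pattern for enum-typed fields stored as `i32`. The sketch below mimics that pattern with a locally defined enum; the `ModelType` variant names and tag numbers here are assumptions for demonstration, not the crate's actual definition.

```rust
// Simplified stand-in for the generated ModelType enum (variants assumed).
#[derive(Clone, Copy, Debug, PartialEq)]
#[repr(i32)]
enum ModelType {
    Unspecified = 0,
    Cloud = 1,
}

impl ModelType {
    // Mirrors the `from_i32` helper prost generates for enums.
    fn from_i32(value: i32) -> Option<ModelType> {
        match value {
            0 => Some(ModelType::Unspecified),
            1 => Some(ModelType::Cloud),
            _ => None,
        }
    }
}

struct Inputs {
    model_type: i32,
}

impl Inputs {
    // Returns the enum value of `model_type`, or the default if the
    // field holds an invalid enum value.
    fn model_type(&self) -> ModelType {
        ModelType::from_i32(self.model_type).unwrap_or(ModelType::Unspecified)
    }

    // Sets `model_type` to the provided enum value.
    fn set_model_type(&mut self, value: ModelType) {
        self.model_type = value as i32;
    }
}

fn main() {
    let mut inputs = Inputs { model_type: 99 }; // invalid wire value
    assert_eq!(inputs.model_type(), ModelType::Unspecified); // falls back to default
    inputs.set_model_type(ModelType::Cloud);
    assert_eq!(inputs.model_type, 1);
    println!("ok");
}
```

Storing the raw `i32` and converting on access is what lets prost round-trip unknown enum values received from newer servers without data loss.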
Trait Implementations
fn merge_field<B>(
&mut self,
tag: u32,
wire_type: WireType,
buf: &mut B,
ctx: DecodeContext
) -> Result<(), DecodeError> where
B: Buf,
Returns the encoded length of the message without a length delimiter.
Encodes the message to a buffer. Read more
Encodes the message to a newly allocated buffer.
Encodes the message with a length-delimiter to a buffer. Read more
Encodes the message with a length-delimiter to a newly allocated buffer.
Decodes an instance of the message from a buffer. Read more
fn decode_length_delimited<B>(buf: B) -> Result<Self, DecodeError> where
Self: Default,
B: Buf,
Decodes a length-delimited instance of the message from the buffer.
Decodes an instance of the message from a buffer, and merges it into self. Read more
Decodes a length-delimited instance of the message from buffer, and merges it into self. Read more
This method tests for self and other values to be equal, and is used by ==. Read more
This method tests for !=.
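The *_length_delimited methods frame the encoded message with a protobuf varint length prefix so that several messages can share one stream. A minimal sketch of that framing in plain Rust (an illustration of the wire format, not prost's implementation):

```rust
// Protobuf varints encode 7 bits per byte, low bits first; a set high bit
// means more bytes follow.
fn encode_varint(mut value: u64, buf: &mut Vec<u8>) {
    loop {
        let byte = (value & 0x7F) as u8;
        value >>= 7;
        if value == 0 {
            buf.push(byte);
            break;
        }
        buf.push(byte | 0x80); // continuation bit: more bytes follow
    }
}

// Length-delimited framing: varint length first, then the encoded message.
fn encode_length_delimited(message: &[u8], buf: &mut Vec<u8>) {
    encode_varint(message.len() as u64, buf);
    buf.extend_from_slice(message);
}

fn main() {
    let mut buf = Vec::new();
    // A 300-byte payload needs a two-byte varint length prefix:
    // 300 = 0b10_0101100 -> bytes [0xAC, 0x02].
    encode_length_delimited(&[0u8; 300], &mut buf);
    assert_eq!(&buf[..2], &[0xAC, 0x02]);
    assert_eq!(buf.len(), 302);
    println!("ok");
}
```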
Auto Trait Implementations
impl Send for AutoMlImageClassificationInputs
impl Sync for AutoMlImageClassificationInputs
impl Unpin for AutoMlImageClassificationInputs
Blanket Implementations
Mutably borrows from an owned value. Read more
Wrap the input message T in a tonic::Request
pub fn vzip(self) -> V
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more