Struct google_api_proto::google::cloud::aiplatform::v1::explanation_metadata::InputMetadata
pub struct InputMetadata {
    pub input_baselines: Vec<Value>,
    pub input_tensor_name: String,
    pub encoding: i32,
    pub modality: String,
    pub feature_value_domain: Option<FeatureValueDomain>,
    pub indices_tensor_name: String,
    pub dense_shape_tensor_name: String,
    pub index_feature_mapping: Vec<String>,
    pub encoded_tensor_name: String,
    pub encoded_baselines: Vec<Value>,
    pub visualization: Option<Visualization>,
    pub group_name: String,
}
Metadata of the input of a feature.
Fields other than [InputMetadata.input_baselines][google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata.input_baselines] are applicable only for Models that are using Vertex AI-provided images for Tensorflow.
Fields
input_baselines: Vec<Value>
Baseline inputs for this feature.
If no baseline is specified, Vertex AI chooses the baseline for this feature. If multiple baselines are specified, Vertex AI returns the average attributions across them in [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions].
For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape of each baseline must match the shape of the input tensor. If a scalar is provided, we broadcast to the same shape as the input tensor.
For custom images, each element of the baselines must be in the same format as the feature’s input in the [instance][google.cloud.aiplatform.v1.ExplainRequest.instances][]. The schema of any single instance may be specified via Endpoint’s DeployedModels’ [Model’s][google.cloud.aiplatform.v1.DeployedModel.model] [PredictSchemata’s][google.cloud.aiplatform.v1.Model.predict_schemata] [instance_schema_uri][google.cloud.aiplatform.v1.PredictSchemata.instance_schema_uri].
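For illustration, a scalar numeric baseline can be supplied as a google.protobuf.Value; a minimal sketch, assuming Value here is the re-exported prost_types::Value and using a hypothetical tensor name:

use google_api_proto::google::cloud::aiplatform::v1::explanation_metadata::InputMetadata;
use prost_types::{value::Kind, Value};

// A scalar baseline of 0.0; per the docs above, Vertex AI broadcasts it
// to the shape of the input tensor.
let baseline = Value { kind: Some(Kind::NumberValue(0.0)) };

let metadata = InputMetadata {
    input_tensor_name: "input_1:0".to_string(), // hypothetical tensor name
    input_baselines: vec![baseline],
    ..Default::default()
};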
input_tensor_name: String
Name of the input tensor for this feature. Required, and applicable only to Models using Vertex AI-provided images for Tensorflow.
encoding: i32
Defines how the feature is encoded into the input tensor. Defaults to IDENTITY.
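Because prost stores enum fields as raw i32 wire values, the encoding is set by casting a variant; a sketch, assuming the generated Encoding enum lives in the sibling input_metadata module:

use google_api_proto::google::cloud::aiplatform::v1::explanation_metadata::{
    input_metadata::Encoding, InputMetadata,
};

let mut metadata = InputMetadata::default();
// Store the variant's wire value in the raw i32 field.
metadata.encoding = Encoding::Identity as i32;
assert_eq!(metadata.encoding, Encoding::Identity as i32);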
modality: String
Modality of the feature. Valid values are: numeric, image. Defaults to numeric.
feature_value_domain: Option<FeatureValueDomain>
The domain details of the input feature value, such as its min/max, or its original mean and standard deviation if the feature was normalized.
indices_tensor_name: String
Specifies the index of the values of the input tensor. Required when the input tensor is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
dense_shape_tensor_name: String
Specifies the shape of the values of the input if the input is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.
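Together with input_tensor_name (the values) and indices_tensor_name, this field names the three tensors of a tf.sparse.SparseTensor; a sketch with hypothetical tensor names:

use google_api_proto::google::cloud::aiplatform::v1::explanation_metadata::InputMetadata;

let metadata = InputMetadata {
    // The three tensors of a sparse representation (values, indices,
    // dense_shape); all tensor names here are hypothetical.
    input_tensor_name: "values:0".to_string(),
    indices_tensor_name: "indices:0".to_string(),
    dense_shape_tensor_name: "dense_shape:0".to_string(),
    ..Default::default()
};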
index_feature_mapping: Vec<String>
A list of feature names for each index in the input tensor. Required when the input [InputMetadata.encoding][google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata.encoding] is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, or INDICATOR.
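For instance, a bag-of-features tensor that packs three features; a sketch with hypothetical feature names, assuming the same Encoding enum location as above:

use google_api_proto::google::cloud::aiplatform::v1::explanation_metadata::{
    input_metadata::Encoding, InputMetadata,
};

let metadata = InputMetadata {
    encoding: Encoding::BagOfFeatures as i32,
    // One feature name per index of the input tensor (hypothetical names).
    index_feature_mapping: vec![
        "age".to_string(),
        "income".to_string(),
        "tenure".to_string(),
    ],
    ..Default::default()
};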
encoded_tensor_name: String
Encoded tensor is a transformation of the input tensor. Must be provided if choosing [Integrated Gradients attribution][google.cloud.aiplatform.v1.ExplanationParameters.integrated_gradients_attribution] or [XRAI attribution][google.cloud.aiplatform.v1.ExplanationParameters.xrai_attribution] and the input tensor is not differentiable.
An encoded tensor is generated if the input tensor is encoded by a lookup table.
encoded_baselines: Vec<Value>
A list of baselines for the encoded tensor.
The shape of each baseline should match the shape of the encoded tensor. If a scalar is provided, Vertex AI broadcasts to the same shape as the encoded tensor.
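For example, a non-differentiable token-id input paired with its embedding lookup as the encoded tensor, plus a scalar encoded baseline; tensor names are hypothetical:

use google_api_proto::google::cloud::aiplatform::v1::explanation_metadata::InputMetadata;
use prost_types::{value::Kind, Value};

let metadata = InputMetadata {
    input_tensor_name: "token_ids:0".to_string(),          // not differentiable
    encoded_tensor_name: "token_embeddings:0".to_string(), // differentiable encoding
    // Scalar baseline, broadcast to the encoded tensor's shape.
    encoded_baselines: vec![Value { kind: Some(Kind::NumberValue(0.0)) }],
    ..Default::default()
};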
visualization: Option<Visualization>
Visualization configurations for image explanation.
group_name: String
Name of the group that the input belongs to. Features with the same group name will be treated as one feature when computing attributions. Features grouped together can have different shapes in value. If provided, there will be one single attribution generated in [Attribution.feature_attributions][google.cloud.aiplatform.v1.Attribution.feature_attributions], keyed by the group name.
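A sketch of two inputs sharing one group, assuming ExplanationMetadata.inputs is generated as a BTreeMap keyed by input name (as prost produces when built with its btree_map option); all names are hypothetical:

use std::collections::BTreeMap;

use google_api_proto::google::cloud::aiplatform::v1::{
    explanation_metadata::InputMetadata, ExplanationMetadata,
};

let mut inputs = BTreeMap::new();
for name in ["latitude", "longitude"] {
    inputs.insert(
        name.to_string(),
        InputMetadata {
            group_name: "location".to_string(), // one attribution keyed by "location"
            ..Default::default()
        },
    );
}

let explanation = ExplanationMetadata { inputs, ..Default::default() };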
Implementations
impl InputMetadata
Trait Implementations
impl Clone for InputMetadata
fn clone(&self) -> InputMetadata
fn clone_from(&mut self, source: &Self)
impl Debug for InputMetadata
impl Default for InputMetadata
impl Message for InputMetadata
fn encoded_len(&self) -> usize
fn encode(&self, buf: &mut impl BufMut) -> Result<(), EncodeError> where Self: Sized
fn encode_to_vec(&self) -> Vec<u8> where Self: Sized
fn encode_length_delimited(&self, buf: &mut impl BufMut) -> Result<(), EncodeError> where Self: Sized
fn encode_length_delimited_to_vec(&self) -> Vec<u8> where Self: Sized
fn decode(buf: impl Buf) -> Result<Self, DecodeError> where Self: Default
fn decode_length_delimited(buf: impl Buf) -> Result<Self, DecodeError> where Self: Default
fn merge(&mut self, buf: impl Buf) -> Result<(), DecodeError> where Self: Sized
fn merge_length_delimited(&mut self, buf: impl Buf) -> Result<(), DecodeError> where Self: Sized
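Since InputMetadata implements Message, it round-trips through the protobuf wire format; a minimal sketch using prost directly:

use google_api_proto::google::cloud::aiplatform::v1::explanation_metadata::InputMetadata;
use prost::Message;

let metadata = InputMetadata {
    modality: "numeric".to_string(),
    ..Default::default()
};

// Encode to bytes, then decode back; &[u8] implements prost's Buf.
let bytes = metadata.encode_to_vec();
let decoded = InputMetadata::decode(bytes.as_slice()).expect("valid wire format");
assert_eq!(metadata, decoded);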
impl PartialEq for InputMetadata
fn eq(&self, other: &InputMetadata) -> bool
Tests for self and other values to be equal, and is used by ==.
impl StructuralPartialEq for InputMetadata
Auto Trait Implementations
impl Freeze for InputMetadata
impl RefUnwindSafe for InputMetadata
impl Send for InputMetadata
impl Sync for InputMetadata
impl Unpin for InputMetadata
impl UnwindSafe for InputMetadata
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wraps the input message T in a tonic::Request.
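This blanket impl lets a plain message be handed to generated tonic clients; a minimal sketch:

use google_api_proto::google::cloud::aiplatform::v1::explanation_metadata::InputMetadata;
use tonic::IntoRequest;

let request = InputMetadata::default().into_request();
// The wrapped message is accessible through the request.
assert_eq!(request.get_ref(), &InputMetadata::default());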