Struct google_api_proto::google::cloud::asset::v1::BigQueryDestination
pub struct BigQueryDestination {
pub dataset: String,
pub table: String,
pub force: bool,
pub partition_spec: Option<PartitionSpec>,
pub separate_tables_per_asset_type: bool,
}
A BigQuery destination for exporting assets to.
Fields§
dataset: String
Required. The BigQuery dataset in format “projects/projectId/datasets/datasetId”, to which the snapshot result should be exported. If this dataset does not exist, the export call returns an INVALID_ARGUMENT error. Setting the contentType for exportAssets determines the schema of the BigQuery table. Setting separateTablesPerAssetType to TRUE also influences the schema.
table: String
Required. The BigQuery table to which the snapshot result should be written. If this table does not exist, a new table with the given name will be created.
force: bool
If the destination table already exists and this flag is TRUE, the table will be overwritten by the contents of the assets snapshot. If the flag is FALSE or unset and the destination table already exists, the export call returns an INVALID_ARGUMENT error.
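A minimal sketch of constructing this message with the required dataset format and an overwrite-on-conflict policy; the project, dataset, and table IDs below are placeholders:

use google_api_proto::google::cloud::asset::v1::BigQueryDestination;

// Hypothetical IDs, shown only to illustrate the expected formats.
let destination = BigQueryDestination {
    dataset: "projects/my-project/datasets/asset_snapshots".to_string(),
    table: "mytable".to_string(),
    // Overwrite the destination table if it already exists.
    force: true,
    ..Default::default()
};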
partition_spec: Option<PartitionSpec>
[partition_spec] determines whether to export to partitioned table(s) and how to partition the data.

If [partition_spec] is unset, or [partition_spec.partition_key] is unset or PARTITION_KEY_UNSPECIFIED, the snapshot results will be exported to non-partitioned table(s). [force] will decide whether to overwrite existing table(s).

If [partition_spec] is specified, the snapshot results will first be written to partitioned table(s) with two additional timestamp columns, readTime and requestTime, one of which will be the partition key. Secondly, if any destination table already exists, the export will first try to update the existing table’s schema as necessary by appending additional columns. Then, if [force] is TRUE, the corresponding partition will be overwritten by the snapshot results (data in different partitions will remain intact); if [force] is unset or FALSE, the data will be appended. An error will be returned if the schema update or data append fails.
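A sketch of requesting partitioning on the readTime column. The PartitionSpec path and the partition_spec::PartitionKey enum layout assumed below follow the usual prost codegen conventions for this proto and should be checked against the generated module:

use google_api_proto::google::cloud::asset::v1::{
    partition_spec::PartitionKey, BigQueryDestination, PartitionSpec,
};

let destination = BigQueryDestination {
    dataset: "projects/my-project/datasets/asset_snapshots".to_string(),
    table: "mytable".to_string(),
    // Partition on readTime; with force = true the matching partition
    // is overwritten while other partitions remain intact.
    partition_spec: Some(PartitionSpec {
        partition_key: PartitionKey::ReadTime as i32,
    }),
    force: true,
    ..Default::default()
};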
separate_tables_per_asset_type: bool
If this flag is TRUE, the snapshot results will be written to one or multiple tables, each of which contains results of one asset type. The [force] and [partition_spec] fields will apply to each of them.

Field [table] will be concatenated with “_” and the asset type names (see https://cloud.google.com/asset-inventory/docs/supported-asset-types for supported asset types) to construct per-asset-type table names, in which all non-alphanumeric characters like “.” and “/” will be substituted by “_”. Example: if field [table] is “mytable” and snapshot results contain “storage.googleapis.com/Bucket” assets, the corresponding table name will be “mytable_storage_googleapis_com_Bucket”. If any of these tables does not exist, a new table with the concatenated name will be created.

When [content_type] in the ExportAssetsRequest is RESOURCE, the schema of each table will include RECORD-type columns mapped to the nested fields in the Asset.resource.data field of that asset type (up to the 15 nested levels BigQuery supports, see https://cloud.google.com/bigquery/docs/nested-repeated#limitations). The fields in >15 nested levels will be stored in JSON format string as a child column of its parent RECORD column.

If an error occurs when exporting to any table, the whole export call will return an error but the export results that already succeeded will persist. Example: if exporting to table_type_A succeeds while exporting to table_type_B fails during one export call, the results in table_type_A will persist and there will not be partial results persisting in a table.
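As a rough illustration of the naming rule above (not the service’s implementation), the per-asset-type table name can be derived like this; the helper name is hypothetical:

/// Joins `table`, "_", and the asset type, replacing every
/// non-alphanumeric character in the asset type with "_".
fn per_asset_type_table(table: &str, asset_type: &str) -> String {
    let sanitized: String = asset_type
        .chars()
        .map(|c| if c.is_ascii_alphanumeric() { c } else { '_' })
        .collect();
    format!("{table}_{sanitized}")
}

assert_eq!(
    per_asset_type_table("mytable", "storage.googleapis.com/Bucket"),
    "mytable_storage_googleapis_com_Bucket"
);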
Trait Implementations§
impl Clone for BigQueryDestination
fn clone(&self) -> BigQueryDestination
fn clone_from(&mut self, source: &Self)
impl Debug for BigQueryDestination
impl Default for BigQueryDestination
impl Message for BigQueryDestination
fn encoded_len(&self) -> usize
fn encode(&self, buf: &mut impl BufMut) -> Result<(), EncodeError> where Self: Sized
fn encode_to_vec(&self) -> Vec<u8> where Self: Sized
fn encode_length_delimited(&self, buf: &mut impl BufMut) -> Result<(), EncodeError> where Self: Sized
fn encode_length_delimited_to_vec(&self) -> Vec<u8> where Self: Sized
fn decode(buf: impl Buf) -> Result<Self, DecodeError> where Self: Default
fn decode_length_delimited(buf: impl Buf) -> Result<Self, DecodeError> where Self: Default
fn merge(&mut self, buf: impl Buf) -> Result<(), DecodeError> where Self: Sized
fn merge_length_delimited(&mut self, buf: impl Buf) -> Result<(), DecodeError> where Self: Sized
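Because the struct implements prost’s Message trait, a wire-format round trip can be sketched as follows (assuming prost is available as a direct dependency so the trait is in scope):

use google_api_proto::google::cloud::asset::v1::BigQueryDestination;
use prost::Message;

let destination = BigQueryDestination {
    dataset: "projects/my-project/datasets/asset_snapshots".to_string(),
    table: "mytable".to_string(),
    ..Default::default()
};

// Serialize to protobuf bytes and parse them back.
let bytes = destination.encode_to_vec();
let decoded = BigQueryDestination::decode(bytes.as_slice()).expect("valid wire format");
assert_eq!(destination, decoded);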
impl PartialEq for BigQueryDestination
fn eq(&self, other: &BigQueryDestination) -> bool
impl StructuralPartialEq for BigQueryDestination
Auto Trait Implementations§
impl Freeze for BigQueryDestination
impl RefUnwindSafe for BigQueryDestination
impl Send for BigQueryDestination
impl Sync for BigQueryDestination
impl Unpin for BigQueryDestination
impl UnwindSafe for BigQueryDestination
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wrap the input message T in a tonic::Request.