Struct google_api_proto::google::cloud::discoveryengine::v1alpha::QualityMetrics
pub struct QualityMetrics {
pub doc_recall: Option<TopkMetrics>,
pub doc_precision: Option<TopkMetrics>,
pub doc_ndcg: Option<TopkMetrics>,
pub page_recall: Option<TopkMetrics>,
pub page_ndcg: Option<TopkMetrics>,
}
Describes the metrics produced by the evaluation.
Fields§
doc_recall: Option<TopkMetrics>
Recall per document, at various top-k cutoff levels.
Recall is the fraction of relevant documents retrieved out of all relevant documents.
Example (top-5):
- For a single [SampleQuery][google.cloud.discoveryengine.v1alpha.SampleQuery], if 3 out of 5 relevant documents are retrieved in the top-5, recall@5 = 3/5 = 0.6
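A minimal standalone sketch of this computation (plain Rust with a hypothetical helper; not part of this crate):

use std::collections::HashSet;

// Hypothetical helper: recall@k = relevant documents found in the top-k / all relevant documents.
fn recall_at_k(ranked: &[&str], relevant: &HashSet<&str>, k: usize) -> f64 {
    let hits = ranked.iter().take(k).filter(|id| relevant.contains(*id)).count();
    hits as f64 / relevant.len() as f64
}

fn main() {
    // 5 relevant documents in total; the top-5 retrieval finds 3 of them.
    let relevant: HashSet<&str> = ["d1", "d2", "d3", "d4", "d5"].into_iter().collect();
    let ranked = ["d1", "x1", "d2", "x2", "d3", "x3"];
    assert!((recall_at_k(&ranked, &relevant, 5) - 0.6).abs() < 1e-9);
}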
doc_precision: Option<TopkMetrics>
Precision per document, at various top-k cutoff levels.
Precision is the fraction of retrieved documents that are relevant.
Example (top-5):
- For a single [SampleQuery][google.cloud.discoveryengine.v1alpha.SampleQuery], if 4 out of 5 retrieved documents in the top-5 are relevant, precision@5 = 4/5 = 0.8
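A matching sketch for precision@k (again a hypothetical helper; the denominator is the cutoff k rather than the number of relevant documents):

use std::collections::HashSet;

// Hypothetical helper: precision@k = relevant documents in the top-k / k.
fn precision_at_k(ranked: &[&str], relevant: &HashSet<&str>, k: usize) -> f64 {
    let hits = ranked.iter().take(k).filter(|id| relevant.contains(*id)).count();
    hits as f64 / k as f64
}

fn main() {
    // 4 of the 5 documents retrieved in the top-5 are relevant.
    let relevant: HashSet<&str> = ["d1", "d2", "d3", "d4"].into_iter().collect();
    let ranked = ["d1", "d2", "x1", "d3", "d4"];
    assert!((precision_at_k(&ranked, &relevant, 5) - 0.8).abs() < 1e-9);
}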
doc_ndcg: Option<TopkMetrics>
Normalized discounted cumulative gain (NDCG) per document, at various top-k cutoff levels.
NDCG measures the ranking quality, giving higher relevance to top results.
Example (top-3): Suppose a [SampleQuery][google.cloud.discoveryengine.v1alpha.SampleQuery] with three retrieved documents (D1, D2, D3) and binary relevance judgements (1 for relevant, 0 for not relevant):
Retrieved: [D3 (0), D1 (1), D2 (1)]
Ideal: [D1 (1), D2 (1), D3 (0)]
Calculate NDCG@3 for each [SampleQuery][google.cloud.discoveryengine.v1alpha.SampleQuery]:
- DCG@3: 0/log2(1+1) + 1/log2(2+1) + 1/log2(3+1) = 1.13
- Ideal DCG@3: 1/log2(1+1) + 1/log2(2+1) + 0/log2(3+1) = 1.63
- NDCG@3: 1.13/1.63 = 0.693
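The arithmetic above can be reproduced with a short standalone sketch (binary gains assumed; not part of this crate):

// Hypothetical helper: DCG@k with the usual log2(rank + 1) position discount.
fn dcg_at_k(gains: &[f64], k: usize) -> f64 {
    gains
        .iter()
        .take(k)
        .enumerate()
        .map(|(i, g)| g / (i as f64 + 2.0).log2())
        .sum()
}

fn main() {
    let retrieved = [0.0, 1.0, 1.0]; // [D3 (0), D1 (1), D2 (1)]
    let ideal = [1.0, 1.0, 0.0];     // [D1 (1), D2 (1), D3 (0)]
    let dcg = dcg_at_k(&retrieved, 3); // ~1.13
    let idcg = dcg_at_k(&ideal, 3);    // ~1.63
    let ndcg = dcg / idcg;             // ~0.693
    println!("DCG@3 = {dcg:.2}, ideal DCG@3 = {idcg:.2}, NDCG@3 = {ndcg:.3}");
}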
page_recall: Option<TopkMetrics>
Recall per page, at various top-k cutoff levels.
Recall is the fraction of relevant pages retrieved out of all relevant pages.
Example (top-5):
- For a single [SampleQuery][google.cloud.discoveryengine.v1alpha.SampleQuery], if 3 out of 5 relevant pages are retrieved in the top-5, recall@5 = 3/5 = 0.6
page_ndcg: Option<TopkMetrics>
Normalized discounted cumulative gain (NDCG) per page, at various top-k cutoff levels.
NDCG measures the ranking quality, giving higher relevance to top results.
Example (top-3): Suppose a [SampleQuery][google.cloud.discoveryengine.v1alpha.SampleQuery] with three retrieved pages (P1, P2, P3) and binary relevance judgements (1 for relevant, 0 for not relevant):
Retrieved: [P3 (0), P1 (1), P2 (1)]
Ideal: [P1 (1), P2 (1), P3 (0)]
Calculate NDCG@3 for the [SampleQuery][google.cloud.discoveryengine.v1alpha.SampleQuery]:
- DCG@3: 0/log2(1+1) + 1/log2(2+1) + 1/log2(3+1) = 1.13
- Ideal DCG@3: 1/log2(1+1) + 1/log2(2+1) + 0/log2(3+1) = 1.63
- NDCG@3: 1.13/1.63 = 0.693
Trait Implementations§
impl Clone for QualityMetrics
fn clone(&self) -> QualityMetrics
fn clone_from(&mut self, source: &Self)
impl Debug for QualityMetrics
impl Default for QualityMetrics
impl Message for QualityMetrics
fn encoded_len(&self) -> usize
fn encode(&self, buf: &mut impl BufMut) -> Result<(), EncodeError> where Self: Sized
fn encode_to_vec(&self) -> Vec<u8> where Self: Sized
fn encode_length_delimited(&self, buf: &mut impl BufMut) -> Result<(), EncodeError> where Self: Sized
fn encode_length_delimited_to_vec(&self) -> Vec<u8> where Self: Sized
fn decode(buf: impl Buf) -> Result<Self, DecodeError> where Self: Default
fn decode_length_delimited(buf: impl Buf) -> Result<Self, DecodeError> where Self: Default
fn merge(&mut self, buf: impl Buf) -> Result<(), DecodeError> where Self: Sized
fn merge_length_delimited(&mut self, buf: impl Buf) -> Result<(), DecodeError> where Self: Sized
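Since QualityMetrics implements prost's Message trait, a value can be round-tripped through its protobuf wire encoding. A minimal sketch, assuming prost and this crate are both dependencies:

use prost::Message;
use google_api_proto::google::cloud::discoveryengine::v1alpha::QualityMetrics;

fn main() -> Result<(), prost::DecodeError> {
    // An empty (all-None) message; real values are returned by the evaluation API.
    let metrics = QualityMetrics::default();

    // Serialize to the protobuf wire format and decode it back.
    let bytes = metrics.encode_to_vec();
    let decoded = QualityMetrics::decode(bytes.as_slice())?;

    // PartialEq is derived, so the round trip can be checked directly.
    assert_eq!(metrics, decoded);
    Ok(())
}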
impl PartialEq for QualityMetrics
fn eq(&self, other: &QualityMetrics) -> bool
This method tests for self and other values to be equal, and is used by ==.
impl Copy for QualityMetrics
impl StructuralPartialEq for QualityMetrics
Auto Trait Implementations§
impl Freeze for QualityMetrics
impl RefUnwindSafe for QualityMetrics
impl Send for QualityMetrics
impl Sync for QualityMetrics
impl Unpin for QualityMetrics
impl UnwindSafe for QualityMetrics
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wrap the input message T in a tonic::Request