Struct google_api_proto::google::ai::generativelanguage::v1beta::generative_service_client::GenerativeServiceClient
pub struct GenerativeServiceClient<T> { /* private fields */ }
API for using Large Models that generate multimodal content and have additional capabilities beyond text generation.
Implementations
impl<T> GenerativeServiceClient<T>
where
    T: GrpcService<BoxBody>,
    T::Error: Into<StdError>,
    T::ResponseBody: Body<Data = Bytes> + Send + 'static,
    <T::ResponseBody as Body>::Error: Into<StdError> + Send,
pub fn new(inner: T) -> Self
pub fn with_origin(inner: T, origin: Uri) -> Self
pub fn with_interceptor<F>(
    inner: T,
    interceptor: F,
) -> GenerativeServiceClient<InterceptedService<T, F>>
where
    F: Interceptor,
    T::ResponseBody: Default,
    T: Service<Request<BoxBody>, Response = Response<<T as GrpcService<BoxBody>>::ResponseBody>>,
    <T as Service<Request<BoxBody>>>::Error: Into<StdError> + Send + Sync,
pub fn send_compressed(self, encoding: CompressionEncoding) -> Self
Compress requests with the given encoding.
This requires the server to support it; otherwise it might respond with an error.
pub fn accept_compressed(self, encoding: CompressionEncoding) -> Self
Enable decompressing responses.
pub fn max_decoding_message_size(self, limit: usize) -> Self
Limits the maximum size of a decoded message.
Default: 4MB
pub fn max_encoding_message_size(self, limit: usize) -> Self
Limits the maximum size of an encoded message.
Default: usize::MAX
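The configuration methods above are builder-style and can be chained when constructing a client. A minimal sketch, assuming the `tonic` and `google-api-proto` crates inside an async function; the endpoint URL and size limit are illustrative, and real use of this API additionally requires authentication:

```rust
use tonic::codec::CompressionEncoding;
use tonic::transport::Channel;

// Illustrative endpoint; credential/auth setup is omitted.
let channel = Channel::from_static("https://generativelanguage.googleapis.com")
    .connect()
    .await?;

let client = GenerativeServiceClient::new(channel)
    // Gzip outgoing requests (the server must support this)...
    .send_compressed(CompressionEncoding::Gzip)
    // ...and advertise that gzipped responses are accepted.
    .accept_compressed(CompressionEncoding::Gzip)
    // Raise the decode limit from the 4MB default to 16MB.
    .max_decoding_message_size(16 * 1024 * 1024);
```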
pub async fn generate_content(
    &mut self,
    request: impl IntoRequest<GenerateContentRequest>,
) -> Result<Response<GenerateContentResponse>, Status>
Generates a model response given an input GenerateContentRequest.
Refer to the text generation guide for detailed usage information. Input capabilities differ between models, including tuned models. Refer to the model guide and tuning guide for details.
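As a hedged usage sketch (the model name is illustrative, the request is left mostly at its defaults, and an already-connected `client` in an async context is assumed):

```rust
use google_api_proto::google::ai::generativelanguage::v1beta::GenerateContentRequest;

let request = GenerateContentRequest {
    // Resource name of the model to query; "models/gemini-pro" is illustrative.
    model: "models/gemini-pro".to_string(),
    // `contents`, `generation_config`, etc. would be filled in per the proto.
    ..Default::default()
};

let response = client.generate_content(request).await?;
println!("{:?}", response.into_inner());
```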
pub async fn generate_answer(
    &mut self,
    request: impl IntoRequest<GenerateAnswerRequest>,
) -> Result<Response<GenerateAnswerResponse>, Status>
Generates a grounded answer from the model given an input GenerateAnswerRequest.
pub async fn stream_generate_content(
    &mut self,
    request: impl IntoRequest<GenerateContentRequest>,
) -> Result<Response<Streaming<GenerateContentResponse>>, Status>
Generates a streamed response from the model given an input GenerateContentRequest.
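Unlike generate_content, this method returns a tonic Streaming body. A sketch of draining it, assuming the same `client` and `request` shapes as in the non-streaming case:

```rust
let mut stream = client
    .stream_generate_content(request)
    .await?
    .into_inner();

// `Streaming::message` yields `Ok(Some(chunk))` for each response message
// and `Ok(None)` once the server closes the stream.
while let Some(chunk) = stream.message().await? {
    println!("partial response: {:?}", chunk);
}
```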
pub async fn embed_content(
    &mut self,
    request: impl IntoRequest<EmbedContentRequest>,
) -> Result<Response<EmbedContentResponse>, Status>
Generates a text embedding vector from the input Content using the specified Gemini Embedding model.
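A sketch of requesting an embedding (the model name is illustrative, and the `content` field would carry the input to embed, per the generated request type):

```rust
use google_api_proto::google::ai::generativelanguage::v1beta::EmbedContentRequest;

let request = EmbedContentRequest {
    // Embedding model resource name; "models/embedding-001" is illustrative.
    model: "models/embedding-001".to_string(),
    // `content` holds the input to embed; left at default here.
    ..Default::default()
};

let response = client.embed_content(request).await?.into_inner();
// The embedding vector lives in the response's embedding field,
// per the generated `EmbedContentResponse` type.
```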
pub async fn batch_embed_contents(
    &mut self,
    request: impl IntoRequest<BatchEmbedContentsRequest>,
) -> Result<Response<BatchEmbedContentsResponse>, Status>
Generates multiple embedding vectors from the input Content, which consists of a batch of strings represented as EmbedContentRequest objects.
pub async fn count_tokens(
    &mut self,
    request: impl IntoRequest<CountTokensRequest>,
) -> Result<Response<CountTokensResponse>, Status>
Runs a model’s tokenizer on input Content and returns the token count.
Refer to the tokens guide to learn more about tokens.
Trait Implementations
impl<T: Clone> Clone for GenerativeServiceClient<T>
fn clone(&self) -> GenerativeServiceClient<T>
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
Auto Trait Implementations
impl<T> !Freeze for GenerativeServiceClient<T>
impl<T> RefUnwindSafe for GenerativeServiceClient<T> where T: RefUnwindSafe
impl<T> Send for GenerativeServiceClient<T> where T: Send
impl<T> Sync for GenerativeServiceClient<T> where T: Sync
impl<T> Unpin for GenerativeServiceClient<T> where T: Unpin
impl<T> UnwindSafe for GenerativeServiceClient<T> where T: UnwindSafe
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wraps the input message T in a tonic::Request.