Struct googapis::google::cloud::automl::v1beta1::prediction_service_client::PredictionServiceClient
pub struct PredictionServiceClient<T> { /* fields omitted */ }
AutoML Prediction API.
On any input that is documented to expect a string parameter in snake_case or kebab-case, either of those cases is accepted.
Implementations
impl<T> PredictionServiceClient<T> where
T: GrpcService<BoxBody>,
T::ResponseBody: Body + Send + 'static,
T::Error: Into<StdError>,
<T::ResponseBody as Body>::Error: Into<StdError> + Send,
pub fn with_interceptor<F>(
inner: T,
interceptor: F
) -> PredictionServiceClient<InterceptedService<T, F>> where
F: Interceptor,
T: Service<Request<BoxBody>, Response = Response<<T as GrpcService<BoxBody>>::ResponseBody>>,
<T as Service<Request<BoxBody>>>::Error: Into<StdError> + Send + Sync,
Compress requests with gzip. This requires the server to support it; otherwise it might respond with an error.
Enable decompressing responses with gzip.
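As a sketch of how the constructors and compression toggles above fit together, the following assumes the tonic-generated API that googapis re-exports (endpoint, TLS setup, and the exact availability of `send_gzip`/`accept_gzip` depend on the tonic version the crate was built against):

```rust
// Hedged sketch: build a channel to the AutoML endpoint and enable
// gzip in both directions. Authentication (e.g. an interceptor that
// attaches an OAuth token) is omitted for brevity.
use googapis::google::cloud::automl::v1beta1::prediction_service_client::PredictionServiceClient;
use tonic::transport::{Channel, ClientTlsConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let channel = Channel::from_static("https://automl.googleapis.com")
        .tls_config(ClientTlsConfig::new().domain_name("automl.googleapis.com"))?
        .connect()
        .await?;

    // send_gzip()/accept_gzip() correspond to the compression options
    // documented above; send_gzip() requires server-side gzip support.
    let client = PredictionServiceClient::new(channel)
        .send_gzip()
        .accept_gzip();
    let _ = client;
    Ok(())
}
```

Credentials would typically be attached via `with_interceptor`, passing an `Interceptor` that adds an `authorization` metadata entry to each request.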
pub async fn predict(
&mut self,
request: impl IntoRequest<PredictRequest>
) -> Result<Response<PredictResponse>, Status>
Perform an online prediction. The prediction result is returned directly in the response. Available for the following ML problems, with their expected request payloads:
- Image Classification - Image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
- Image Object Detection - Image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
- Text Classification - TextSnippet, content up to 60,000 characters, UTF-8 encoded.
- Text Extraction - TextSnippet, content up to 30,000 characters, UTF-8 NFC encoded.
- Translation - TextSnippet, content up to 25,000 characters, UTF-8 encoded.
- Tables - Row, with column values matching the columns of the model, up to 5MB. Not available for FORECASTING [prediction_type][google.cloud.automl.v1beta1.TablesModelMetadata.prediction_type].
- Text Sentiment - TextSnippet, content up to 500 characters, UTF-8 encoded.
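A minimal sketch of an online `predict` call with a `TextSnippet` payload (e.g. against a text classification model). The module paths and field names follow the v1beta1 protos as generated by googapis; the project, location, and model IDs are placeholders:

```rust
// Hedged sketch: build a PredictRequest carrying a TextSnippet and send
// it with the client's predict() method. prost-generated messages derive
// Default, so unused fields can be filled with ..Default::default().
use googapis::google::cloud::automl::v1beta1::{
    example_payload::Payload, prediction_service_client::PredictionServiceClient,
    ExamplePayload, PredictRequest, TextSnippet,
};
use tonic::transport::Channel;

async fn classify(
    client: &mut PredictionServiceClient<Channel>,
) -> Result<(), tonic::Status> {
    let request = PredictRequest {
        // Format: projects/{project}/locations/{location}/models/{model}
        name: "projects/my-project/locations/us-central1/models/TCN123".into(),
        payload: Some(ExamplePayload {
            payload: Some(Payload::TextSnippet(TextSnippet {
                // Text Classification accepts up to 60,000 characters, UTF-8.
                content: "This product works great.".into(),
                mime_type: "text/plain".into(),
                ..Default::default()
            })),
        }),
        ..Default::default()
    };
    let response = client.predict(request).await?.into_inner();
    for annotation in response.payload {
        println!("{:?}", annotation);
    }
    Ok(())
}
```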
pub async fn batch_predict(
&mut self,
request: impl IntoRequest<BatchPredictRequest>
) -> Result<Response<Operation>, Status>
Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1beta1.PredictionService.Predict], the batch prediction result won’t be immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, a [BatchPredictResult][google.cloud.automl.v1beta1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML problems:
- Image Classification
- Image Object Detection
- Video Classification
- Video Object Tracking
- Text Extraction
- Tables
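The long-running flow above can be sketched as follows. This is an assumption-laden example: the model name is a placeholder, and the `input_config`/`output_config` fields (which must point at GCS or BigQuery locations appropriate for the model type) are left at their defaults for brevity rather than filled with real values:

```rust
// Hedged sketch: start a batch prediction and return the name of the
// long-running Operation. Unlike predict(), the result is not in the
// response; the caller polls the returned operation name via the
// google.longrunning Operations service until it reports done, at which
// point BatchPredictResult is available in the response field.
use googapis::google::cloud::automl::v1beta1::{
    prediction_service_client::PredictionServiceClient, BatchPredictRequest,
};
use tonic::transport::Channel;

async fn start_batch(
    client: &mut PredictionServiceClient<Channel>,
) -> Result<String, tonic::Status> {
    let request = BatchPredictRequest {
        name: "projects/my-project/locations/us-central1/models/TBL456".into(),
        // input_config / output_config omitted here; see
        // BatchPredictInputConfig and BatchPredictOutputConfig in the
        // v1beta1 protos for the accepted source/destination types.
        ..Default::default()
    };
    let operation = client.batch_predict(request).await?.into_inner();
    // Poll this name via GetOperation until operation.done is true.
    Ok(operation.name)
}
```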
Trait Implementations
Auto Trait Implementations
impl<T> RefUnwindSafe for PredictionServiceClient<T> where
T: RefUnwindSafe,
impl<T> Send for PredictionServiceClient<T> where
T: Send,
impl<T> Sync for PredictionServiceClient<T> where
T: Sync,
impl<T> Unpin for PredictionServiceClient<T> where
T: Unpin,
impl<T> UnwindSafe for PredictionServiceClient<T> where
T: UnwindSafe,
Blanket Implementations
Mutably borrows from an owned value.
Wrap the input message T in a tonic::Request.
pub fn vzip(self) -> V
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.