Package com.google.cloud.automl.v1beta1
Interface PredictionServiceGrpc.AsyncService
All Known Implementing Classes:
    PredictionServiceGrpc.PredictionServiceImplBase
Enclosing class:
    PredictionServiceGrpc
public static interface PredictionServiceGrpc.AsyncService
AutoML Prediction API. On any input that is documented to expect a string parameter in snake_case or kebab-case, either of those cases is accepted.
Method Summary
All Methods  Instance Methods  Default Methods

default void  batchPredict(BatchPredictRequest request, io.grpc.stub.StreamObserver<com.google.longrunning.Operation> responseObserver)
    Perform a batch prediction.

default void  predict(PredictRequest request, io.grpc.stub.StreamObserver<PredictResponse> responseObserver)
    Perform an online prediction.
Method Detail
predict
default void predict(PredictRequest request, io.grpc.stub.StreamObserver<PredictResponse> responseObserver)
Perform an online prediction. The prediction result is returned directly in the response. Available for the following ML problems, with their expected request payloads:
* Image Classification - Image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
* Image Object Detection - Image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
* Text Classification - TextSnippet, content up to 60,000 characters, UTF-8 encoded.
* Text Extraction - TextSnippet, content up to 30,000 characters, UTF-8 NFC encoded.
* Translation - TextSnippet, content up to 25,000 characters, UTF-8 encoded.
* Tables - Row, with column values matching the columns of the model, up to 5MB. Not available for FORECASTING [prediction_type][google.cloud.automl.v1beta1.TablesModelMetadata.prediction_type].
* Text Sentiment - TextSnippet, content up to 500 characters, UTF-8 encoded.
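The async signature above delivers its single PredictResponse through a StreamObserver callback rather than a return value. The following is a minimal, self-contained sketch of that callback pattern; StreamObserver here is a simplified stand-in for io.grpc.stub.StreamObserver, and PredictRequest/PredictResponse are placeholder types rather than the real protobuf messages.

```java
import java.util.ArrayList;
import java.util.List;

public class AsyncPredictSketch {

    // Simplified stand-in for io.grpc.stub.StreamObserver<T>.
    interface StreamObserver<T> {
        void onNext(T value);
        void onError(Throwable t);
        void onCompleted();
    }

    // Placeholder request/response types (the real ones are generated protobufs).
    record PredictRequest(String payload) {}
    record PredictResponse(String result) {}

    // A toy AsyncService-style predict: emit exactly one response, then complete.
    static void predict(PredictRequest request,
                        StreamObserver<PredictResponse> responseObserver) {
        responseObserver.onNext(new PredictResponse("label-for:" + request.payload()));
        responseObserver.onCompleted();
    }

    // Drives predict() with a collecting observer and returns the callback trace.
    static List<String> runDemo(String payload) {
        List<String> results = new ArrayList<>();
        predict(new PredictRequest(payload), new StreamObserver<PredictResponse>() {
            @Override public void onNext(PredictResponse v) { results.add(v.result()); }
            @Override public void onError(Throwable t) { results.add("error"); }
            @Override public void onCompleted() { results.add("done"); }
        });
        return results;
    }

    public static void main(String[] args) {
        System.out.println(runDemo("image_bytes")); // prints [label-for:image_bytes, done]
    }
}
```

In a real implementation of AsyncService, the contract is the same: call onNext at most once with the response, then onCompleted (or onError on failure).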
batchPredict
default void batchPredict(BatchPredictRequest request, io.grpc.stub.StreamObserver<com.google.longrunning.Operation> responseObserver)
Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1beta1.PredictionService.Predict], the batch prediction result is not immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, a [BatchPredictResult][google.cloud.automl.v1beta1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML problems:
* Image Classification
* Image Object Detection
* Video Classification
* Video Object Tracking
* Text Extraction
* Tables
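The return-then-poll flow described above can be sketched without any gRPC dependencies. Operation below is a toy stand-in for google.longrunning.Operation, and poll() stands in for a GetOperation call; the URIs and the fixed poll count are illustrative only.

```java
public class BatchPredictSketch {

    // Toy stand-in for google.longrunning.Operation: becomes done after a few polls.
    static final class Operation {
        private int pollsRemaining;
        private final String result;

        Operation(int pollsUntilDone, String result) {
            this.pollsRemaining = pollsUntilDone;
            this.result = result;
        }

        boolean isDone() { return pollsRemaining <= 0; }

        // Stand-in for a GetOperation call refreshing the operation's state.
        void poll() { pollsRemaining--; }

        // Stand-in for the Operation.response field holding BatchPredictResult.
        String response() { return isDone() ? result : null; }
    }

    // batchPredict returns immediately with an unfinished Operation.
    static Operation batchPredict(String gcsInputUri) {
        return new Operation(3, "gs://bucket/output-for:" + gcsInputUri);
    }

    // Caller-side loop: poll until done, then read the response field.
    static String runToCompletion(String inputUri) {
        Operation op = batchPredict(inputUri);
        while (!op.isDone()) {
            op.poll(); // real code would call operations.getOperation(name) here
        }
        return op.response();
    }

    public static void main(String[] args) {
        System.out.println(runToCompletion("gs://bucket/input.csv"));
    }
}
```

A real client would add backoff between polls (or use the generated OperationFuture helpers) instead of the tight loop shown here.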