Interface PredictionServiceGrpc.AsyncService

    • Method Detail

      • predict

        default void predict(PredictRequest request,
                             io.grpc.stub.StreamObserver<PredictResponse> responseObserver)
         Perform an online prediction.
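A minimal client-side sketch of calling this method through the async stub. It assumes the `google-cloud-aiplatform` gRPC classes and `grpc-java` are on the classpath; the endpoint resource name is hypothetical, and call credentials are omitted for brevity.

```java
import com.google.cloud.aiplatform.v1beta1.PredictRequest;
import com.google.cloud.aiplatform.v1beta1.PredictResponse;
import com.google.cloud.aiplatform.v1beta1.PredictionServiceGrpc;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.stub.StreamObserver;

public class AsyncPredictExample {
  public static void main(String[] args) {
    // Hypothetical endpoint name; substitute real project/location/endpoint IDs.
    String endpoint = "projects/my-project/locations/us-central1/endpoints/123";
    ManagedChannel channel =
        ManagedChannelBuilder.forAddress("us-central1-aiplatform.googleapis.com", 443)
            .build();
    PredictionServiceGrpc.PredictionServiceStub stub = PredictionServiceGrpc.newStub(channel);

    PredictRequest request = PredictRequest.newBuilder().setEndpoint(endpoint).build();
    // The StreamObserver receives the single PredictResponse asynchronously.
    stub.predict(request, new StreamObserver<PredictResponse>() {
      @Override public void onNext(PredictResponse response) {
        System.out.println("Predictions returned: " + response.getPredictionsCount());
      }
      @Override public void onError(Throwable t) { t.printStackTrace(); channel.shutdown(); }
      @Override public void onCompleted() { channel.shutdown(); }
    });
  }
}
```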
         
      • rawPredict

        default void rawPredict(RawPredictRequest request,
                                io.grpc.stub.StreamObserver<com.google.api.HttpBody> responseObserver)
         Perform an online prediction with an arbitrary HTTP payload.
         The response includes the following HTTP headers:
         * `X-Vertex-AI-Endpoint-Id`: ID of the
         [Endpoint][google.cloud.aiplatform.v1beta1.Endpoint] that served this
         prediction.
         * `X-Vertex-AI-Deployed-Model-Id`: ID of the Endpoint's
         [DeployedModel][google.cloud.aiplatform.v1beta1.DeployedModel] that served
         this prediction.
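One way to read those response headers on the client is to capture call metadata with a `MetadataUtils` interceptor, as sketched below. This assumes the `google-cloud-aiplatform` and `grpc-java` dependencies, a grpc-java version that provides `MetadataUtils.newCaptureMetadataInterceptor`, and a hypothetical endpoint name; credentials are omitted.

```java
import com.google.api.HttpBody;
import com.google.cloud.aiplatform.v1beta1.PredictionServiceGrpc;
import com.google.cloud.aiplatform.v1beta1.RawPredictRequest;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.Metadata;
import io.grpc.stub.MetadataUtils;
import io.grpc.stub.StreamObserver;
import java.util.concurrent.atomic.AtomicReference;

public class AsyncRawPredictExample {
  public static void main(String[] args) {
    String endpoint = "projects/my-project/locations/us-central1/endpoints/123";
    ManagedChannel channel =
        ManagedChannelBuilder.forAddress("us-central1-aiplatform.googleapis.com", 443).build();

    // Capture response headers so the X-Vertex-AI-* values can be inspected.
    AtomicReference<Metadata> headers = new AtomicReference<>();
    AtomicReference<Metadata> trailers = new AtomicReference<>();
    PredictionServiceGrpc.PredictionServiceStub stub =
        PredictionServiceGrpc.newStub(channel)
            .withInterceptors(MetadataUtils.newCaptureMetadataInterceptor(headers, trailers));

    RawPredictRequest request = RawPredictRequest.newBuilder().setEndpoint(endpoint).build();
    stub.rawPredict(request, new StreamObserver<HttpBody>() {
      @Override public void onNext(HttpBody body) {
        System.out.println("Content type: " + body.getContentType());
      }
      @Override public void onError(Throwable t) { t.printStackTrace(); channel.shutdown(); }
      @Override public void onCompleted() {
        Metadata md = headers.get();
        if (md != null) {
          Metadata.Key<String> key = Metadata.Key.of(
              "x-vertex-ai-deployed-model-id", Metadata.ASCII_STRING_MARSHALLER);
          System.out.println("Served by DeployedModel: " + md.get(key));
        }
        channel.shutdown();
      }
    });
  }
}
```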
         
      • serverStreamingPredict

        default void serverStreamingPredict(StreamingPredictRequest request,
                                            io.grpc.stub.StreamObserver<StreamingPredictResponse> responseObserver)
         Perform a server-side streaming online prediction request for Vertex
         LLM streaming.
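Because this RPC is server-streaming, the client's `StreamObserver.onNext` fires once per streamed chunk until the server completes the call. A sketch under the same assumptions as above (hypothetical endpoint, credentials omitted):

```java
import com.google.cloud.aiplatform.v1beta1.PredictionServiceGrpc;
import com.google.cloud.aiplatform.v1beta1.StreamingPredictRequest;
import com.google.cloud.aiplatform.v1beta1.StreamingPredictResponse;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.stub.StreamObserver;
import java.util.concurrent.CountDownLatch;

public class ServerStreamingPredictExample {
  public static void main(String[] args) throws InterruptedException {
    String endpoint = "projects/my-project/locations/us-central1/endpoints/123";
    ManagedChannel channel =
        ManagedChannelBuilder.forAddress("us-central1-aiplatform.googleapis.com", 443).build();
    PredictionServiceGrpc.PredictionServiceStub stub = PredictionServiceGrpc.newStub(channel);

    StreamingPredictRequest request =
        StreamingPredictRequest.newBuilder().setEndpoint(endpoint).build();

    CountDownLatch done = new CountDownLatch(1);
    // onNext is invoked for every streamed response until the server half-closes.
    stub.serverStreamingPredict(request, new StreamObserver<StreamingPredictResponse>() {
      @Override public void onNext(StreamingPredictResponse chunk) {
        System.out.println("Received chunk with " + chunk.getOutputsCount() + " output tensors");
      }
      @Override public void onError(Throwable t) { t.printStackTrace(); done.countDown(); }
      @Override public void onCompleted() { done.countDown(); }
    });
    done.await();
    channel.shutdown();
  }
}
```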
         
      • explain

        default void explain(ExplainRequest request,
                             io.grpc.stub.StreamObserver<ExplainResponse> responseObserver)
         Perform an online explanation.
         If
         [deployed_model_id][google.cloud.aiplatform.v1beta1.ExplainRequest.deployed_model_id]
         is specified, the corresponding DeployedModel must have
         [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec]
         populated. If
         [deployed_model_id][google.cloud.aiplatform.v1beta1.ExplainRequest.deployed_model_id]
         is not specified, all DeployedModels must have
         [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec]
         populated.
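A sketch of an explain call that pins a specific deployed model, under the same assumptions as the examples above (hypothetical endpoint and DeployedModel ID, credentials omitted):

```java
import com.google.cloud.aiplatform.v1beta1.ExplainRequest;
import com.google.cloud.aiplatform.v1beta1.ExplainResponse;
import com.google.cloud.aiplatform.v1beta1.PredictionServiceGrpc;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.stub.StreamObserver;

public class AsyncExplainExample {
  public static void main(String[] args) {
    String endpoint = "projects/my-project/locations/us-central1/endpoints/123";
    ManagedChannel channel =
        ManagedChannelBuilder.forAddress("us-central1-aiplatform.googleapis.com", 443).build();
    PredictionServiceGrpc.PredictionServiceStub stub = PredictionServiceGrpc.newStub(channel);

    // Setting deployed_model_id requires that DeployedModel to have
    // explanation_spec populated; omitting it requires it on all DeployedModels.
    ExplainRequest request = ExplainRequest.newBuilder()
        .setEndpoint(endpoint)
        .setDeployedModelId("456")  // hypothetical DeployedModel ID
        .build();

    stub.explain(request, new StreamObserver<ExplainResponse>() {
      @Override public void onNext(ExplainResponse response) {
        System.out.println("Explanations returned: " + response.getExplanationsCount());
      }
      @Override public void onError(Throwable t) { t.printStackTrace(); channel.shutdown(); }
      @Override public void onCompleted() { channel.shutdown(); }
    });
  }
}
```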