Professional Machine Learning Engineer Exam Questions

Professional Machine Learning Engineer Exam - Question 179


You recently used XGBoost to train a model in Python that will be used for online serving. Your model prediction service will be called by a backend service implemented in Golang running on a Google Kubernetes Engine (GKE) cluster. Your model requires pre- and postprocessing steps. You need to implement the processing steps so that they run at serving time. You want to minimize code changes and infrastructure maintenance, and deploy your model into production as quickly as possible. What should you do?

Correct Answer: C

Using the Predictor interface to implement a custom prediction routine allows you to include the preprocessing and postprocessing steps within the same deployment package as your model. This approach ensures that all necessary processing is managed within a single custom container, simplifying deployment and reducing the need for significant code changes and infrastructure maintenance. By uploading the container to Vertex AI Model Registry and deploying it to a Vertex AI endpoint, you can leverage Vertex AI's managed services for efficient and streamlined production deployment.
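For illustration, here is a minimal sketch of such a Predictor subclass, assuming the google-cloud-aiplatform SDK's custom prediction routine (CPR) support; the artifact file name (model.bst) and the scaling and labeling logic are hypothetical placeholders, not part of the question:

```python
import numpy as np
import xgboost as xgb
from google.cloud.aiplatform.prediction.predictor import Predictor
from google.cloud.aiplatform.utils import prediction_utils


class XgboostPredictor(Predictor):
    """Custom prediction routine bundling pre/postprocessing with the model."""

    def load(self, artifacts_uri: str) -> None:
        # Download model artifacts from GCS into the working directory.
        prediction_utils.download_model_artifacts(artifacts_uri)
        self._booster = xgb.Booster()
        self._booster.load_model("model.bst")  # assumed artifact file name

    def preprocess(self, prediction_input: dict) -> xgb.DMatrix:
        # Hypothetical preprocessing: scale raw feature rows before scoring.
        instances = np.asarray(prediction_input["instances"], dtype=float)
        return xgb.DMatrix(instances / 100.0)

    def predict(self, instances: xgb.DMatrix):
        return self._booster.predict(instances)

    def postprocess(self, prediction_results) -> dict:
        # Hypothetical postprocessing: attach a label to each raw score.
        return {
            "predictions": [
                {"score": float(s), "label": "high" if s > 0.5 else "low"}
                for s in prediction_results
            ]
        }
```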

Discussion

9 comments
ddogg | Option: C
Jan 31, 2024

Use the Predictor interface to implement a custom prediction routine. This allows you to include the preprocessing and postprocessing steps in the same deployment package as your model. Build the custom container, which packages your model and the associated preprocessing and postprocessing code together, simplifying deployment. Upload the container to Vertex AI Model Registry. This makes your model available for deployment on Vertex AI. Deploy it to a Vertex AI endpoint. This allows your model to be used for online serving. https://blog.thecloudside.com/custom-predict-routines-in-vertex-ai-46a7473c95db
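A hedged sketch of that build/upload/deploy flow, assuming the Predictor class above lives in src/predictor.py and that the project, region, image URI, and bucket values are placeholders to be replaced:

```python
from google.cloud import aiplatform
from google.cloud.aiplatform.prediction import LocalModel

from src.predictor import XgboostPredictor  # hypothetical module path

# Build the CPR serving container from the predictor source directory.
local_model = LocalModel.build_cpr_model(
    "src",
    "us-central1-docker.pkg.dev/my-project/my-repo/xgb-cpr:latest",  # placeholder
    predictor=XgboostPredictor,
    requirements_path="src/requirements.txt",
)
local_model.push_image()  # push the image to Artifact Registry

aiplatform.init(project="my-project", location="us-central1")

# Register the model (container + artifacts) and deploy it for online serving.
model = aiplatform.Model.upload(
    local_model=local_model,
    display_name="xgb-cpr-model",
    artifact_uri="gs://my-bucket/model/",  # placeholder GCS path
)
endpoint = model.deploy(machine_type="n1-standard-2")
```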

36bdc1e | Option: C
Jan 13, 2024

C. Build the custom container, upload the container to Vertex AI Model Registry, and deploy it to a Vertex AI endpoint. This option lets you leverage the power and simplicity of Vertex AI to serve your XGBoost model with minimal effort and customization. Vertex AI is a unified platform for building and deploying machine learning solutions on Google Cloud. Vertex AI can deploy a trained XGBoost model to an online prediction endpoint, which provides low-latency predictions for individual instances. A custom prediction routine (CPR) is a Python script that defines the logic for preprocessing the input data, running the prediction, and postprocessing the output data.
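Once deployed, the endpoint serves low-latency online predictions. A minimal sketch of a test call from Python, assuming placeholder project and endpoint IDs (the Golang backend would issue the equivalent REST or gRPC request to the same endpoint):

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Placeholder endpoint resource name; the numeric ID comes from the deploy step.
endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/us-central1/endpoints/1234567890"
)

# Instances must match whatever the preprocess() step expects.
response = endpoint.predict(instances=[[25.0, 1.0, 130.0], [43.0, 0.0, 95.0]])
print(response.predictions)
```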

guilhermebutzke | Option: C
Feb 11, 2024

My answer: C. Considering the pre- and postprocessing requirement, option C directly implements the processing steps in a custom container, offering full control over their placement and execution. The documentation says: "Custom prediction routines (CPR) lets you build [custom containers](https://cloud.google.com/vertex-ai/docs/predictions/use-custom-container) with pre/post processing code easily, without dealing with the details of setting up an HTTP server or building a container from scratch." https://cloud.google.com/vertex-ai/docs/predictions/custom-prediction-routines So it is better to use C instead of A or B. C is also better than D, because D relies on the prebuilt serving container, which offers no place for the pre- and postprocessing steps.

vale_76_na_xxx | Option: D
Jan 8, 2024

I would say D

pikachu007 | Option: D
Jan 11, 2024

Considering the goal of minimizing code changes, infrastructure maintenance, and quickly deploying the model into production, option D seems to be a pragmatic approach. It leverages the prebuilt XGBoost serving container in Vertex AI, providing a managed environment for serving. The pre- and postprocessing steps can be implemented in the Golang backend service, maintaining consistency with the existing Golang implementation and reducing the need for significant code changes.
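For comparison, a hedged sketch of option D's import path, assuming the model is saved in a format the prebuilt container accepts and that the container tag is still current (check the Vertex AI prebuilt-container list for the exact URI):

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Import the trained model with a prebuilt XGBoost serving container.
model = aiplatform.Model.upload(
    display_name="xgb-prebuilt",
    artifact_uri="gs://my-bucket/model/",  # placeholder; must contain the saved model
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.1-7:latest"
    ),
)
endpoint = model.deploy(machine_type="n1-standard-2")

# Under option D, pre- and postprocessing would live in the Golang backend,
# wrapped around calls to this endpoint.
```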

Yan_X | Option: D
Mar 13, 2024

Pre-built XGBoost container already includes pre- and postprocessing steps.

livewalk | Option: B
May 27, 2024

FastAPI allows you to create a lightweight HTTP server with minimal code.
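A minimal sketch of what that server might look like, assuming Vertex AI's custom-container contract (a health route, a predict route, and the port taken from the AIP_HTTP_PORT environment variable); the model file name and the processing logic are hypothetical:

```python
import os

import numpy as np
import uvicorn
import xgboost as xgb
from fastapi import FastAPI, Request

app = FastAPI()
booster = xgb.Booster()
booster.load_model("model.bst")  # assumed model file baked into the image


@app.get("/health")
def health():
    return {"status": "ok"}


@app.post("/predict")
async def predict(request: Request):
    body = await request.json()
    # Hypothetical preprocessing: scale raw features before scoring.
    features = np.asarray(body["instances"], dtype=float) / 100.0
    scores = booster.predict(xgb.DMatrix(features))
    # Hypothetical postprocessing: return JSON-serializable scores.
    return {"predictions": [{"score": float(s)} for s in scores]}


if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=int(os.environ.get("AIP_HTTP_PORT", 8080)))
```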

AzureDP900 | Option: C
Jun 21, 2024

Option C is a good choice if you have specific requirements for preprocessing or postprocessing that can't be met by the prebuilt XGBoost serving container, if you need more control over the deployment process or want to integrate with other services, and if you're comfortable building and managing custom containers. However, if you just want a simple, straightforward way to deploy your model as a RESTful API, option D (using the XGBoost prebuilt serving container) might be a better fit!

Prakzz | Option: B
Jul 2, 2024

This approach minimizes code changes and infrastructure maintenance by leveraging Vertex AI's managed services for deployment. Implementing the preprocessing and postprocessing steps in a FastAPI server within a Docker container allows you to handle these steps at serving time efficiently. Deploying this Docker image to a Vertex AI endpoint simplifies the deployment process and reduces the burden of managing the infrastructure.
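A hedged sketch of registering such a Docker image with Vertex AI, assuming the image has already been pushed to Artifact Registry and that its routes and port match the FastAPI server above (all URIs here are placeholders):

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Register the custom serving image; routes/port must match the HTTP server.
model = aiplatform.Model.upload(
    display_name="xgb-fastapi",
    serving_container_image_uri=(
        "us-central1-docker.pkg.dev/my-project/my-repo/xgb-fastapi:latest"  # placeholder
    ),
    serving_container_predict_route="/predict",
    serving_container_health_route="/health",
    serving_container_ports=[8080],
)
endpoint = model.deploy(machine_type="n1-standard-2")
```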