
Professional Machine Learning Engineer Exam - Question 175


You have recently trained a scikit-learn model that you plan to deploy on Vertex AI. This model will support both online and batch prediction. You need to preprocess input data for model inference. You want to package the model for deployment while minimizing additional code. What should you do?

Correct Answer: B

To package a scikit-learn model for deployment on Vertex AI while minimizing additional code, wrap the model in a custom prediction routine (CPR) and build a container image from the CPR local model. A CPR lets you include preprocessing steps without manually writing an HTTP server or building a container from scratch. Upload the resulting scikit-learn container to Vertex AI Model Registry, deploy the model to a Vertex AI endpoint for online prediction, and create a Vertex AI batch prediction job for batch inference. The CPR approach covers both serving paths while keeping the custom code limited to the predictor itself.
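To make the CPR approach concrete, here is a minimal sketch of a predictor implementing the four hooks (load, preprocess, predict, postprocess) that Vertex AI custom prediction routines expect. The real base class is `google.cloud.aiplatform.prediction.predictor.Predictor`; this sketch mimics its interface in plain Python so the preprocessing logic is visible. The artifact file names and feature statistics are illustrative assumptions, not part of the exam question.

```python
# Sketch of a CPR-style predictor for a scikit-learn model.
# In a real deployment this class would subclass
# google.cloud.aiplatform.prediction.predictor.Predictor; here the
# interface is mimicked with plain Python for illustration.
import json


class SklearnCprPredictor:
    """Implements the four CPR hooks: load, preprocess, predict, postprocess."""

    def load(self, artifacts_uri: str) -> None:
        # In a real CPR, artifacts are downloaded from artifacts_uri
        # (typically a GCS path) and the model loaded with joblib.
        with open(f"{artifacts_uri}/scaler.json") as f:
            stats = json.load(f)
        self._means = stats["means"]   # per-feature training means
        self._stds = stats["stds"]     # per-feature training std devs
        self._model = None             # e.g. joblib.load(f"{artifacts_uri}/model.joblib")

    def preprocess(self, prediction_input: dict) -> list:
        # Standardize each feature using training-time statistics,
        # so online and batch requests get identical preprocessing.
        instances = prediction_input["instances"]
        return [
            [(x - m) / s for x, m, s in zip(row, self._means, self._stds)]
            for row in instances
        ]

    def predict(self, instances: list) -> list:
        # Delegate to the loaded scikit-learn model.
        return self._model.predict(instances)

    def postprocess(self, prediction_results) -> dict:
        # Shape the output into the response format Vertex AI returns.
        return {"predictions": list(prediction_results)}
```

With the real SDK, a predictor like this is packaged into an image with `LocalModel.build_cpr_model(...)` from `google.cloud.aiplatform.prediction`, then uploaded to Model Registry; no hand-written model server or Dockerfile is needed, which is the point the correct answer relies on.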

Discussion

6 comments
shadz10 · Option: B
Jan 14, 2024

B - Creating a custom container without CPR adds complexity: you have to write a model server, write a Dockerfile, and build and upload the image. Using a CPR, by contrast, only requires writing a predictor and using the Vertex AI SDK to build the image. https://cloud.google.com/vertex-ai/docs/predictions/custom-prediction-routines

b1a8fae · Option: B
Jan 10, 2024

I go with B: “Custom prediction routines (CPR) lets you build custom containers with pre/post processing code easily, without dealing with the details of setting up an HTTP server or building a container from scratch.” (https://cloud.google.com/vertex-ai/docs/predictions/custom-prediction-routines). This alone makes B preferable to C and D, provided the model architecture is not complex. Regarding A, pre-built containers only serve predictions; they do not preprocess input data (https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers#use_a_prebuilt_container). B thus remains the most likely option.

pikachu007 · Option: D
Jan 11, 2024

Considering the goal of minimizing additional code and complexity, option D - "Create a custom container for your scikit-learn model, upload your model and custom container to Vertex AI Model Registry, deploy your model to Vertex AI Endpoints, and create a Vertex AI batch prediction job that uses the instanceConfig.instanceType setting to transform your input data" - seems a more straightforward and efficient approach. It customizes the container for the scikit-learn model, leverages the Vertex AI Model Registry, and uses the specified instance type for batch prediction without introducing the extra layer of custom prediction routines.

guilhermebutzke · Option: C
Feb 7, 2024

My choice: C. Option C ensures that the scikit-learn model is properly packaged, deployed, and integrated with Vertex AI services while minimizing additional code beyond what is needed to customize the serving function. Option B is not correct in my view because wrapping the scikit-learn model in a custom prediction routine (CPR) might not be the most suitable approach for deploying scikit-learn models on Vertex AI. Options A and D rely on InstanceConfig, which is limited for preprocessing, and uploading the container without a serving function won't work.

gscharly · Option: B
Apr 21, 2024

agree with shadz10

bobjr · Option: B
Jun 6, 2024

https://cloud.google.com/vertex-ai/docs/predictions/custom-prediction-routines