
Professional Machine Learning Engineer Exam - Question 257


You recently trained an XGBoost model on tabular data. You plan to expose the model for internal use as an HTTP microservice. After deployment, you expect a small number of incoming requests. You want to productionize the model with the least amount of effort and latency. What should you do?

Correct Answer: D

Using a prebuilt XGBoost Vertex container is the most efficient choice for deploying an XGBoost model in a production environment. It reduces the need for custom container creation and management, thereby requiring minimal effort. Deploying to Vertex AI Endpoints ensures low latency and high availability, making this option optimal for quickly setting up a reliable HTTP microservice.
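As a rough sketch of what this looks like with the Vertex AI Python SDK (the project, region, bucket path, and display name below are placeholders; the container URI follows Google's prebuilt-image naming scheme, pinned here to an XGBoost 1.7 CPU image):

```python
def prebuilt_xgboost_image(version: str = "1-7") -> str:
    """Build the URI of a prebuilt Vertex AI XGBoost CPU serving image."""
    return f"us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.{version}:latest"

def deploy_xgboost_model(project: str, region: str, artifact_uri: str):
    """Upload the model with a prebuilt container and deploy it to an endpoint."""
    # Lazy import so this sketch can be read/tested without the SDK installed.
    from google.cloud import aiplatform

    aiplatform.init(project=project, location=region)
    model = aiplatform.Model.upload(
        display_name="xgb-tabular-model",
        artifact_uri=artifact_uri,  # GCS directory holding the saved model file
        serving_container_image_uri=prebuilt_xgboost_image(),
    )
    # Deploying to an endpoint exposes the model behind a managed HTTPS service.
    endpoint = model.deploy(machine_type="n1-standard-2")
    return endpoint
```

A single small machine is enough for a low-traffic internal service; no custom container build or Flask code is involved.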

Discussion

4 comments
pikachu007 (Option: D)
Jan 13, 2024

- Prebuilt container: it eliminates the need to build and manage a custom container, reducing development time and complexity.
- Vertex AI Endpoints: it provides a managed serving infrastructure with low latency and high availability, optimizing performance for predictions.
- Minimal effort: it involves the simple steps of creating a Vertex model and deploying it to an endpoint, streamlining the process.
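Once deployed, the endpoint behaves like any other HTTP microservice: Vertex AI online prediction expects a JSON body of the form `{"instances": [...]}`. A minimal helper to build that payload (the feature values below are made up for illustration):

```python
import json

def build_prediction_request(rows):
    """Serialize tabular feature rows into the JSON body a Vertex AI endpoint expects."""
    return json.dumps({"instances": rows})

# Example: two rows of tabular data with three features each.
body = build_prediction_request([[0.3, 1.2, 5.0], [0.1, 0.9, 3.0]])
```

The same body can be POSTed to the endpoint's `:predict` URL with a bearer token, or sent via `endpoint.predict(instances=rows)` in the SDK.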

b1a8fae (Option: D)
Jan 22, 2024

Bit lost here. I would discard building a Flask app since that is the opposite of "minimum effort". Between A and D, I guess a prebuilt container (D) involves less effort, but I am not 100% confident.

fitri001 (Option: D)
Apr 17, 2024

1. Package the model: use a library like xgboost-server to create a minimal server for your XGBoost model. This package helps convert your model into a format suitable for serving predictions through an HTTP endpoint.
2. Deploy to Cloud Functions: deploy the packaged model server as a Cloud Function on Google Cloud Platform (GCP). Cloud Functions are serverless, lightweight execution environments ideal for event-driven applications like microservices.
3. Configure trigger: set up an HTTP trigger for your Cloud Function, allowing it to be invoked through HTTP requests.
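For comparison, a hand-rolled handler in the style of this Cloud Functions alternative might look like the sketch below. Note it involves more code than the prebuilt-container route, which is why D remains the better fit for "least effort". The `parse_features` helper is hypothetical (not part of any library), and the sketch assumes the serialized model file `model.bst` is bundled alongside the function source:

```python
def parse_features(payload):
    """Pull feature rows out of an incoming JSON body; tolerate a missing body."""
    return (payload or {}).get("instances", [])

def predict(request):
    """HTTP entry point as invoked by Cloud Functions (request is a flask.Request)."""
    # Lazy import so the module loads even where xgboost is not installed.
    import xgboost as xgb

    booster = xgb.Booster()
    booster.load_model("model.bst")  # assumed to ship with the function source
    rows = parse_features(request.get_json(silent=True))
    preds = booster.predict(xgb.DMatrix(rows))
    return {"predictions": preds.tolist()}
```

Every piece of this (model loading, input parsing, error handling, the container the runtime provides) is handled for you by a prebuilt Vertex serving container.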

AzureDP900 (Option: D)
Jul 5, 2024

Option D is correct: using a prebuilt XGBoost Vertex container is the most straightforward approach. This container is specifically designed for running XGBoost models in production environments and can be easily deployed to Vertex AI Endpoints. This allows you to expose your model as an HTTP microservice with minimal additional work.