Professional Machine Learning Engineer Exam Questions

Professional Machine Learning Engineer Exam - Question 81


Your data science team has requested a system that supports scheduled model retraining, Docker containers, and a service that supports autoscaling and monitoring for online prediction requests. Which platform components should you choose for this system?

Correct Answer: B

The most suitable combination for scheduled model retraining, Docker containers, and an autoscaled, monitored online prediction service is Vertex AI Pipelines, Vertex AI Prediction, and Vertex AI Model Monitoring. Vertex AI Pipelines automates ML workflows, including the scheduled runs needed for retraining. Vertex AI Prediction serves models, including those packaged in custom Docker containers, on endpoints that autoscale with request traffic. Vertex AI Model Monitoring watches the online prediction requests for skew and drift. This combination covers all the requirements in the question.
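As an illustration only, the sketch below shows one way these three components could be wired together with the google-cloud-aiplatform Python SDK. The project, region, bucket paths, image URI, and feature name are placeholders, and the built-in pipeline scheduling call assumes a recent SDK version.

from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring

# Placeholder project and region.
aiplatform.init(project="my-project", location="us-central1")

# 1) Scheduled retraining: run a compiled pipeline spec on a cron schedule.
pipeline_job = aiplatform.PipelineJob(
    display_name="retraining-pipeline",
    template_path="gs://my-bucket/pipelines/retraining_pipeline.json",  # compiled KFP spec
    pipeline_root="gs://my-bucket/pipeline-root",
)
pipeline_job.create_schedule(
    display_name="weekly-retraining",
    cron="0 3 * * 1",  # every Monday at 03:00
)

# 2) Online prediction: upload a model packaged in a custom Docker serving
#    container and deploy it to an endpoint that autoscales between 1 and 5 replicas.
model = aiplatform.Model.upload(
    display_name="my-model",
    artifact_uri="gs://my-bucket/model-artifacts/",
    serving_container_image_uri="us-central1-docker.pkg.dev/my-project/serving/model:latest",
)
endpoint = model.deploy(
    machine_type="n1-standard-4",
    min_replica_count=1,
    max_replica_count=5,
)

# 3) Monitoring: watch the endpoint's online prediction traffic for drift.
aiplatform.ModelDeploymentMonitoringJob.create(
    display_name="my-model-monitoring",
    endpoint=endpoint,
    logging_sampling_strategy=model_monitoring.RandomSampleConfig(sample_rate=0.8),
    schedule_config=model_monitoring.ScheduleConfig(monitor_interval=1),  # hours
    objective_configs=model_monitoring.ObjectiveConfig(
        drift_detection_config=model_monitoring.DriftDetectionConfig(
            drift_thresholds={"feature_1": 0.3},  # hypothetical feature name
        )
    ),
)

The min/max replica counts on the endpoint provide the autoscaling asked for in the question, and the monitoring job samples the live traffic that reaches that same endpoint.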

Discussion

12 comments
John_PongthornOption: B
Jan 27, 2023

Cloud Composer might be worth considering if you were preparing for the Google Data Engineer certification, and App Engine is more relevant to the DevOps certification. Since we are preparing for the Google ML certification and the question states no particular requirement, we should emphasize the use of Vertex AI as much as possible.

LearnSodasOption: B
Dec 11, 2022

Everything is possible on Vertex AI

mil_spyro
Dec 20, 2022

Scheduling is not possible without Cloud Scheduler: https://cloud.google.com/vertex-ai/docs/pipelines/schedule-cloud-scheduler

hiromi
Dec 23, 2022

I think Vertex AI Pipelines includes scheduled/triggered runs, so my vote is B
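Both readings are plausible: the linked doc describes triggering pipeline runs externally with Cloud Scheduler, while newer SDK releases also expose built-in pipeline schedules (as in the sketch further above). A rough sketch of the Cloud Scheduler pattern, assuming a hypothetical Cloud Function URL that submits the pipeline run when invoked; project, region, URL, and service account are placeholders.

from google.cloud import scheduler_v1

client = scheduler_v1.CloudSchedulerClient()
parent = "projects/my-project/locations/us-central1"  # placeholder project/region

# Cron-trigger a (hypothetical) Cloud Function that calls PipelineJob.submit().
job = scheduler_v1.Job(
    name=f"{parent}/jobs/weekly-retraining-trigger",
    schedule="0 3 * * 1",  # every Monday at 03:00
    time_zone="Etc/UTC",
    http_target=scheduler_v1.HttpTarget(
        uri="https://us-central1-my-project.cloudfunctions.net/run-retraining-pipeline",
        http_method=scheduler_v1.HttpMethod.POST,
        oidc_token=scheduler_v1.OidcToken(
            service_account_email="scheduler-invoker@my-project.iam.gserviceaccount.com",
        ),
    ),
)
client.create_job(parent=parent, job=job)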

hiromiOption: B
Dec 18, 2022

Vote for B

M25Option: B
May 9, 2023

Went with B

rosenr0Option: B
May 28, 2023

B. Vertex AI also supports Docker containers: https://cloud.google.com/vertex-ai/docs/training/containers-overview
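The linked page covers custom training containers. A minimal sketch, assuming a custom image already pushed to Artifact Registry; the project, region, and image URI are placeholders.

from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholders

# Launch a Vertex AI training job from your own Docker image.
job = aiplatform.CustomContainerTrainingJob(
    display_name="retrain-from-custom-container",
    container_uri="us-central1-docker.pkg.dev/my-project/training/trainer:latest",
)
job.run(
    replica_count=1,
    machine_type="n1-standard-4",
)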

ares81Option: C
Dec 14, 2022

Vertex AI Prediction handles the serving, but the monitoring asked for in the question is not the monitoring in answer B (which is tied to the model). The correct answer is C.

ares81
Jan 4, 2023

I changed my mind. It's D.

mil_spyroOption: D
Dec 17, 2022

D is the only option that provides scheduled model retraining

behzadswOption: B
Jan 6, 2023

Vote for B

Sas02Option: A
Apr 23, 2023

Shouldn't it be A? https://cloud.google.com/appengine/docs/standard/scheduling-jobs-with-cron-yaml

e707Option: D
Apr 27, 2023

I think it's D. B does not support Docker containers, does it?

e707
May 10, 2023

I can't change my vote, but it's B.

CloudKidaOption: D
May 9, 2023

A custom container is a Docker image that you create to run your training application. By running your machine learning (ML) training job in a custom container, you can use ML frameworks, non-ML dependencies, libraries, and binaries that are not otherwise supported on Vertex AI. So a Vertex AI custom container is needed to meet the Docker container requirement, which rules out options A and B.

App Engine allows developers to focus on what they do best: writing code. Based on Compute Engine, the App Engine flexible environment automatically scales your app up and down while also balancing the load. Customizable infrastructure: App Engine flexible environment instances are Compute Engine virtual machines, which means you can take advantage of custom libraries, use SSH for debugging, and deploy your own Docker containers.

PhilipKokuOption: B
Jun 7, 2024

B) Vertex AI Pipelines