Exam: Professional Machine Learning Engineer
Question 59

Your data science team needs to rapidly experiment with various features, model architectures, and hyperparameters. They need to track the accuracy metrics for various experiments and use an API to query the metrics over time. What should they use to track and report their experiments while minimizing manual effort?

    Correct Answer: A

    Kubeflow Pipelines is specifically designed to execute and manage machine learning experiments. It includes built-in features for tracking, monitoring, and querying metrics, which minimizes the manual effort required. This makes it an ideal choice for rapidly experimenting with various features, model architectures, and hyperparameters, while effectively keeping track of accuracy metrics over time.
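For reference, a minimal sketch of how a pipeline component could report such a metric using the KFP v2 SDK, so that Kubeflow Pipelines records it per run; the component, pipeline, and accuracy value below are illustrative, not part of the exam question.

```python
# Minimal sketch (KFP v2 SDK): a component logs accuracy to a Metrics artifact,
# which Kubeflow Pipelines tracks for each run. Names and values are illustrative.
from kfp import compiler, dsl
from kfp.dsl import Metrics, Output


@dsl.component(base_image="python:3.10")
def train_model(learning_rate: float, metrics: Output[Metrics]):
    # Placeholder training step; swap in real feature/architecture experiments.
    accuracy = 0.92  # illustrative value
    metrics.log_metric("accuracy", accuracy)
    metrics.log_metric("learning_rate", learning_rate)


@dsl.pipeline(name="experiment-tracking-demo")
def pipeline(learning_rate: float = 0.01):
    train_model(learning_rate=learning_rate)


# Compile the pipeline definition so it can be submitted to a KFP or Vertex AI backend.
compiler.Compiler().compile(pipeline_func=pipeline, package_path="pipeline.yaml")
```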

Discussion
Dunnoth (Option: A)

Old answer is A. The new answer (not among the available options) would be Vertex AI Experiments, which comes with a built-in API for tracking and querying metrics. https://cloud.google.com/blog/topics/developers-practitioners/track-compare-manage-experiments-vertex-ai-experiments
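A minimal sketch of that workflow with the Vertex AI SDK for Python; the project, region, experiment, run, parameter, and metric names are placeholders.

```python
# Minimal sketch (Vertex AI SDK for Python); project, region, and names are placeholders.
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",        # placeholder
    location="us-central1",      # placeholder
    experiment="feature-arch-sweep",
)

aiplatform.start_run("run-lr-0p01")
aiplatform.log_params({"learning_rate": 0.01, "architecture": "wide_and_deep"})
aiplatform.log_metrics({"accuracy": 0.92})
aiplatform.end_run()

# Query all runs of the experiment (parameters and metrics) as a DataFrame.
df = aiplatform.get_experiment_df()
print(df)  # columns typically include run_name, param.*, and metric.* fields
```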

Celia20210714 (Option: A)

ANS: A https://codelabs.developers.google.com/codelabs/cloud-kubeflow-pipelines-gis Kubeflow Pipelines (KFP) helps solve these issues by providing a way to deploy robust, repeatable machine learning pipelines along with monitoring, auditing, version tracking, and reproducibility. Cloud AI Pipelines makes it easy to set up a KFP installation.

Mickey321 (Option: C)

Either A or C, but going with C due to minimal effort.

ares81 (Option: C)

Vertex AI Experiments + Cloud Monitoring for the metrics. It's C!

tavva_prudhvi (Option: A)

Option C suggests using AI Platform Training to execute the experiments and write the accuracy metrics to Cloud Monitoring. While Cloud Monitoring can be used to monitor and collect metrics from various services in Google Cloud, it is not specifically designed for machine learning experiments tracking. Using Cloud Monitoring for tracking machine learning experiments may not provide the same level of functionality and flexibility as Kubeflow Pipelines or AI Platform Training. Additionally, querying the results from Cloud Monitoring may not be as straightforward as using the APIs provided by Kubeflow Pipelines or AI Platform Training. Therefore, while Cloud Monitoring can be used as a general-purpose monitoring solution, it may not be the best option for tracking and reporting machine learning experiments.
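For comparison, roughly what option C would involve: writing an experiment's accuracy as a custom metric with the Cloud Monitoring client library. The project ID, metric type, and label values below are illustrative placeholders.

```python
# Minimal sketch: writing an experiment's accuracy as a Cloud Monitoring custom metric.
# Project ID, metric type, and labels are illustrative placeholders.
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-project"  # placeholder

series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/ml_experiments/accuracy"
series.metric.labels["experiment_id"] = "exp-001"
series.resource.type = "global"
series.resource.labels["project_id"] = "my-project"

now = time.time()
seconds = int(now)
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": seconds, "nanos": int((now - seconds) * 1e9)}}
)
point = monitoring_v3.Point({"interval": interval, "value": {"double_value": 0.92}})
series.points = [point]

client.create_time_series(name=project_name, time_series=[series])
```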

M25 (Option: A)

Went with A

Mohamed_Mossad (Option: A)

Kubeflow Pipelines already has an experiment tracking API, so A is correct. B is also valid, but the question states "minimizing manual effort".
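A rough sketch of querying runs through the KFP SDK client; the endpoint and experiment name are placeholders, and attribute names (e.g. .id vs .experiment_id, run-level .metrics) differ between KFP SDK v1 and v2.

```python
# Rough sketch (KFP v1-style SDK client); endpoint and experiment name are placeholders.
# Attribute and response shapes vary across KFP SDK versions.
import kfp

client = kfp.Client(host="https://<your-kfp-endpoint>")  # placeholder endpoint

experiment = client.get_experiment(experiment_name="feature-arch-sweep")
runs = client.list_runs(experiment_id=experiment.id, page_size=50)

for run in runs.runs or []:
    # In the v1 API, each run carries its reported metrics as name/value pairs.
    print(run.name, [(m.name, m.number_value) for m in (run.metrics or [])])
```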

Mohamed_Mossad

https://www.kubeflow.org/docs/components/pipelines/introduction/#what-is-kubeflow-pipelines

San1111111111 (Option: B)

Shouldn't it be B? Vertex AI has built-in Vertex AI Experiments and Metadata to track metrics.

dija123 (Option: A)

Agree with A.

PhilipKoku (Option: A)

A) Kubeflow Pipelines

Liting (Option: A)

I agree with tavva_prudhvi that Cloud Monitoring is not the best option for machine learning experiment tracking; ML Metadata is a better option for that purpose.

PST21 (Option: A)

Cloud Monitoring may not be the most suitable option for tracking and reporting experiments; because of that, option C is out and I stick with A.

lucaluca1982 (Option: B)

It is B

John_Pongthorn

This is the question. Try it out and choose whichever option is closest to this lab (last updated Jan 21, 2023): https://codelabs.developers.google.com/vertex_experiments_pipelines_intro#0

John_Pongthorn

The lab walks me through how to create a pipeline for experimentation; it uses Kubeflow and applies the Vertex AI Experiments SDK.
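A sketch of that combination with the Vertex AI SDK: submitting a compiled KFP pipeline as a Vertex AI Pipelines run associated with an experiment so its parameters and metrics are tracked. The project, bucket, pipeline spec path, and experiment name are placeholders.

```python
# Minimal sketch: submitting a compiled KFP pipeline as a Vertex AI Pipelines run
# tied to a Vertex AI experiment. Names, paths, and buckets are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholders

job = aiplatform.PipelineJob(
    display_name="experiment-tracking-demo",
    template_path="pipeline.yaml",                   # compiled KFP pipeline spec
    pipeline_root="gs://my-bucket/pipeline-root",    # placeholder bucket
    parameter_values={"learning_rate": 0.01},
)

# Associating the run with an experiment lets Vertex AI Experiments record its
# parameters and metrics for later comparison (supported in recent SDK versions).
job.submit(experiment="feature-arch-sweep")
```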

mymy9418 (Option: C)

I like C https://cloud.google.com/monitoring/mql
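A minimal sketch of reading such a custom metric back with an MQL query through the Cloud Monitoring query client; the project ID and the MQL string are illustrative.

```python
# Minimal sketch: querying a custom accuracy metric with MQL via the Cloud Monitoring
# query API. The project ID and query string are illustrative placeholders.
from google.cloud import monitoring_v3

client = monitoring_v3.QueryServiceClient()
results = client.query_time_series(
    request={
        "name": "projects/my-project",  # placeholder
        "query": (
            "fetch global"
            " | metric 'custom.googleapis.com/ml_experiments/accuracy'"
            " | within 1d"
        ),
    }
)

for series in results:
    # Each item is one time series; point_data holds the recorded accuracy values.
    for point in series.point_data:
        print(point.time_interval.end_time, point.values[0].double_value)
```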

Pancy (Option: C)

C: Google already provides an in-house monitoring mechanism, so there is no need to query or use any other tool. https://cloud.google.com/bigquery/docs/monitoring
