Professional Machine Learning Engineer Exam Questions

Professional Machine Learning Engineer Exam - Question 241


You have created a Vertex AI pipeline that automates custom model training. You want to add a pipeline component that enables your team to most easily collaborate when running different executions and comparing metrics both visually and programmatically. What should you do?

Correct Answer: C

To let the team run different executions and compare metrics both visually and programmatically, log metrics to Vertex ML Metadata and use Vertex AI Experiments together with Vertex AI TensorBoard. Vertex AI Experiments tracks parameters and metrics across runs and exposes them for programmatic comparison, while Vertex AI TensorBoard is tailored for visualizing those metrics, making the combination well suited to collaborative experiment tracking within a Vertex AI pipeline.
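As a minimal sketch of the approach described above, the snippet below logs a pipeline run's parameters and metrics to Vertex AI Experiments with the `google-cloud-aiplatform` SDK. The project ID, region, and experiment name are placeholder assumptions; the `log_run` call itself requires GCP credentials, so the SDK import is kept inside the function.

```python
def as_metrics(**values):
    # Pure helper: coerce values to floats, since log_metrics expects
    # numeric values. Runnable anywhere, no GCP dependency.
    return {k: float(v) for k, v in values.items()}


def log_run(experiment, run_name, params, metrics):
    # Illustration only: requires google-cloud-aiplatform and GCP
    # credentials. Project/location are hypothetical placeholders.
    from google.cloud import aiplatform

    aiplatform.init(
        project="my-project",           # assumption
        location="us-central1",         # assumption
        experiment=experiment,
    )
    aiplatform.start_run(run_name)
    aiplatform.log_params(params)       # e.g. {"lr": 0.01}
    aiplatform.log_metrics(metrics)     # e.g. {"accuracy": 0.92}
    aiplatform.end_run()
```

Each pipeline execution would call `log_run("training-exp", "run-3", {"lr": 0.01}, as_metrics(accuracy=0.92))`; runs then appear side by side in the Experiments UI and in TensorBoard.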

Discussion

4 comments
fitri001 · Option: A
Apr 17, 2024

Why A? BigQuery stores pipeline metrics from different executions in a central location, allowing easy access for team members. BigQuery queries enable programmatic comparison of metrics across runs using SQL, and Looker Studio provides a collaborative platform for exploring and comparing metrics visually.

Why not C? While Vertex AI Experiments can leverage ML Metadata for lineage tracking, it's not ideal for general metric comparison, and TensorBoard is primarily for visualizing training data during pipeline execution, not for comparing results across runs.

pinimichele01
Apr 21, 2024

Why log to BigQuery and not to Vertex ML Metadata?

asmgi
Jul 14, 2024

Isn't BigQuery overkill for a dozen metrics?

gscharly · Option: C
Apr 20, 2024

Went with C. Experiments can be used to compare executions and metrics.

b1a8fae · Option: C
Jan 18, 2024

Clearly C.

winston9 · Option: C
Jan 8, 2024

C is the correct one here.