Professional Machine Learning Engineer Exam Questions

Professional Machine Learning Engineer Exam - Question 64


You recently designed and built a custom neural network that uses critical dependencies specific to your organization’s framework. You need to train the model using a managed training service on Google Cloud. However, the ML framework and related dependencies are not supported by AI Platform Training. Also, both your model and your data are too large to fit in memory on a single machine. Your ML framework of choice uses the scheduler, workers, and servers distribution structure. What should you do?

Correct Answer: C

Because the custom neural network relies on critical, organization-specific dependencies that AI Platform Training does not support, building custom containers is the appropriate approach. And because both the model and the data are too large to fit in memory on a single machine, and the ML framework uses a scheduler, workers, and servers distribution structure, running distributed training jobs in those custom containers addresses both the dependency and the memory constraints.
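To make the distributed layout concrete, here is a minimal sketch of the `workerPoolSpecs` a Vertex AI custom job would use to mirror a scheduler/workers/servers framework. The image URI, machine type, and replica counts are placeholders (assumptions), not values from the question; Vertex AI conventionally maps worker pool 0 to the primary replica, pool 1 to workers, and pool 2 to (parameter) servers.

```python
# Hedged sketch: Vertex AI workerPoolSpecs for a framework that uses a
# scheduler/workers/servers distribution structure. All concrete values
# (image URI, machine type, replica counts) are hypothetical placeholders.
IMAGE_URI = "us-docker.pkg.dev/my-project/my-repo/my-trainer:latest"  # hypothetical

def build_worker_pool_specs(image_uri, n_workers=4, n_servers=2):
    """Return a workerPoolSpecs list: pool 0 = scheduler (primary replica),
    pool 1 = workers, pool 2 = servers."""
    machine = {"machineType": "n1-standard-8"}   # placeholder machine type
    container = {"imageUri": image_uri}           # the custom container
    return [
        {"machineSpec": machine, "replicaCount": 1,         "containerSpec": container},  # scheduler
        {"machineSpec": machine, "replicaCount": n_workers, "containerSpec": container},  # workers
        {"machineSpec": machine, "replicaCount": n_servers, "containerSpec": container},  # servers
    ]

specs = build_worker_pool_specs(IMAGE_URI)
```

Each pool runs the same custom image; the framework decides each replica's role from the cluster layout that the training service exposes to the containers.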

Discussion

9 comments
mil_spyro | Option: C
Dec 18, 2022

Answer C. By running your machine learning (ML) training job in a custom container, you can use ML frameworks, non-ML dependencies, libraries, and binaries that are not otherwise supported on Vertex AI. Since the model and data are too large to fit in memory on a single machine, distributed training jobs are needed. https://cloud.google.com/vertex-ai/docs/training/containers-overview
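A hedged sketch of what such a custom training container might look like; the base image, package list, and entrypoint module are placeholders for the organization's own framework, not anything specified in the question:

```dockerfile
# Minimal custom training container (illustrative only).
FROM python:3.10-slim
WORKDIR /trainer
# Install the organization's framework and its critical dependencies
# (requirements.txt is a placeholder for the real dependency list).
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY trainer/ ./trainer/
# Every replica runs the same image; the framework determines its role
# (scheduler, worker, or server) from the CLUSTER_SPEC environment
# variable that Vertex AI injects into each worker pool's containers.
ENTRYPOINT ["python", "-m", "trainer.task"]
```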

MultiCloudIronMan | Option: C
Apr 1, 2024

This allows using external dependencies, and distributed training will solve the memory issues.

Werner123 | Option: C
Feb 29, 2024

Critical dependencies that are not supported -> Custom container
Too large to fit in memory on a single machine -> Distributed

Vedjha | Option: C
Dec 7, 2022

Will go for 'C'. Custom containers can address the environment limitation, and distributed processing will handle the data volume.

JeanEl | Option: C
Dec 9, 2022

I think it's C

ares81 | Option: C
Dec 11, 2022

C, for me!

wish0035 | Option: C
Dec 16, 2022

ans: C
A, D => too much work.
B => discarded because "model and your data are too large to fit in memory on a single machine"

M25 | Option: C
May 9, 2023

Went with C

PhilipKoku | Option: C
Jun 6, 2024

C) Distributed training with custom containers