Professional Machine Learning Engineer Exam Questions

Professional Machine Learning Engineer Exam - Question 50


Your team is building a convolutional neural network (CNN)-based architecture from scratch. The preliminary experiments running on your on-premises CPU-only infrastructure were encouraging, but have slow convergence. You have been asked to speed up model training to reduce time-to-market. You want to experiment with virtual machines (VMs) on Google Cloud to leverage more powerful hardware. Your code does not include any manual device placement and has not been wrapped in Estimator model-level abstraction. Which environment should you train your model on?

Correct Answer: C

To speed up training of a convolutional neural network (CNN) while avoiding manual setup, a Deep Learning VM with pre-installed libraries and a GPU is the ideal choice. This environment provides the powerful GPU hardware that is critical for efficient CNN training, and it comes with all necessary dependencies pre-installed, eliminating the overhead of manual installation. An n1-standard-2 machine with one GPU offers a good balance for preliminary experiments: substantial computational power without requiring manual device placement in the code.
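For reference, a Deep Learning VM like the one described above can be created with a `gcloud` command along the lines of the Deep Learning VM documentation linked in the comments below. This is a sketch: the instance name, zone, and GPU type are placeholders, and available image families and accelerator types vary by region.

```shell
# Sketch: create an n1-standard-2 Deep Learning VM with one GPU.
# my-dlvm, the zone, and the GPU type are illustrative placeholders.
gcloud compute instances create my-dlvm \
  --zone=us-central1-a \
  --machine-type=n1-standard-2 \
  --accelerator="type=nvidia-tesla-t4,count=1" \
  --image-family=tf-latest-gpu \
  --image-project=deeplearning-platform-release \
  --maintenance-policy=TERMINATE \
  --metadata="install-nvidia-driver=True"
```

The `tf-latest-gpu` image family ships with TensorFlow and the CUDA stack pre-installed, so no manual dependency setup is needed before training.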

Discussion

17 comments
celia20200410Option: C
Jul 20, 2021

ANS: C. To train a CNN you should use a GPU, and for a preliminary experiment the pre-installed packages/libraries are a good choice. https://cloud.google.com/deep-learning-vm/docs/cli#creating_an_instance_with_one_or_more_gpus https://cloud.google.com/deep-learning-vm/docs/introduction#pre-installed_packages

Paul_DiracOption: C
Aug 1, 2021

Code without manual device placement => TensorFlow defaults to the CPU if a TPU is present, or to the lowest-ordinal GPU if multiple GPUs are present => not A or B. D: we are already using a CPU, and a CNN needs a GPU. Ans: C
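The default-placement behavior described above can be sketched in plain Python. This is a hypothetical illustration of the selection logic, not a real TensorFlow API: with no manual `tf.device` placement, ops land on a single default device, so extra GPUs sit idle and a TPU is ignored unless a distribution strategy is used.

```python
# Illustrative sketch (not TensorFlow code): how automatic placement
# picks a single default device from the visible devices.

def pick_default_device(visible_devices):
    """Return the device auto-placement would pick: the
    lowest-ordinal GPU if any are visible, otherwise the CPU."""
    gpus = sorted(
        (d for d in visible_devices if d.startswith("GPU")),
        key=lambda d: int(d.split(":")[1]),
    )
    return gpus[0] if gpus else "CPU:0"

# Only one device is ever chosen; additional accelerators are
# unused without an explicit distribution strategy.
print(pick_default_device(["CPU:0"]))                    # CPU-only VM
print(pick_default_device(["CPU:0", "GPU:0"]))           # single-GPU VM
print(pick_default_device(["CPU:0", "GPU:0", "GPU:1"]))  # only GPU:0 used
```

This is why option C (one GPU) works without code changes, while options with multiple GPUs or a TPU would waste hardware the code cannot reach.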

suresh_vnOption: D
Aug 24, 2022

"has not been wrapped in Estimator model-level abstraction" — how can you use a GPU then? D in my opinion; the E family is used for high-CPU tasks.

shankalman717Option: D
Feb 22, 2023

Critical sentence: "Your code does not include any manual device placement and has not been wrapped in Estimator model-level abstraction." So the only answer we have is D.

tavva_prudhvi
Jul 3, 2023

Option D provides a more powerful CPU but does not include a GPU, which may not be optimal for deep learning training.

BenMSOption: D
Feb 27, 2023

Critical sentence: "Your code does not include any manual device placement and has not been wrapped in Estimator model-level abstraction." So the only answer we have is D.

Sum_SumOption: C
Nov 15, 2023

Agree with celia20200410 - C

NamitSehgalOption: C
Jan 4, 2022

C is correct

mmona19Option: A
Apr 14, 2022

The question asks to speed up time-to-market, which happens if the model trains fast, so a TPU VM can be a solution. https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms Option A. If the question asked for the most managed way, the answer would be the Deep Learning VM with everything pre-installed: C.

tavva_prudhvi
Jul 3, 2023

Option A with 1 TPU and option B with 8 GPUs might provide even faster training, but since the code does not include manual device placement, it may not utilize all the available resources effectively.

maukaba
Sep 22, 2023

Instead, if you have a single GPU, TensorFlow will use this accelerator to speed up model training with no extra work on your part: https://codelabs.developers.google.com/vertex-p2p-distributed#2 Normally you don't use just one TPU, and for both multiple GPUs and TPUs it is necessary to define a distributed training strategy: https://www.tensorflow.org/guide/distributed_training

Mohamed_MossadOption: C
Jul 11, 2022

Answer C. Explanation: "speed up model training" biases us towards the GPU/TPU options. By elimination, we need to stay away from any manual installations, so using a preconfigured Deep Learning VM will speed up time to market.

ares81Option: C
Jan 5, 2023

It's C.

SergioRubianoOption: C
Mar 31, 2023

You should use GPU.

MelamposOption: A
Apr 20, 2023

Thinking of the fastest option.

M25Option: C
May 9, 2023

Went with C

LitingOption: C
Jul 7, 2023

Should use the Deep Learning VM with a GPU. A TPU should be selected only if necessary, because it incurs a high cost; a GPU is enough in this case.

Mickey321Option: D
Nov 15, 2023

keyword: Your code does not include any manual device placement and has not been wrapped in Estimator model-level abstraction.

gscharlyOption: C
Apr 20, 2024

Agree with celia20200410 - C

PhilipKokuOption: C
Jun 6, 2024

C) GPU and all pre-installed libraries.