
Professional Machine Learning Engineer Exam - Question 31


You need to train a computer vision model that predicts the type of government ID present in a given image using a GPU-powered virtual machine on Compute Engine. You use the following parameters:

✑ Optimizer: SGD

✑ Image shape = 224×224

✑ Batch size = 64

✑ Epochs = 10

✑ Verbose = 2

During training you encounter the following error: ResourceExhaustedError: Out Of Memory (OOM) when allocating tensor. What should you do?
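
For reference, here is a minimal sketch of what the described configuration might look like in TensorFlow/Keras. The model architecture, the number of ID classes, and the `x_train`/`y_train` arrays are illustrative assumptions; only the optimizer, image shape, batch size, epochs, and verbosity come from the question.

```python
import tensorflow as tf

# Small placeholder network; the real architecture is not given in the question.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu",
                           input_shape=(224, 224, 3)),   # Image shape = 224x224 (RGB assumed)
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),      # number of ID types is an assumption
])

model.compile(optimizer=tf.keras.optimizers.SGD(),       # Optimizer: SGD
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train / y_train are hypothetical NumPy arrays: images of shape
# (num_samples, 224, 224, 3) and integer class labels.
model.fit(x_train, y_train,
          batch_size=64,   # Batch size = 64
          epochs=10,       # Epochs = 10
          verbose=2)       # Verbose = 2
```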

Correct Answer: B

When you encounter a ResourceExhaustedError: Out Of Memory (OOM) while training a computer vision model, reducing the batch size is a common and effective solution. A smaller batch size reduces the amount of GPU memory required for each training iteration, which helps avoid memory overflow. Other adjustments, such as reducing the image shape, may also alleviate the problem, but at the risk of degrading the model's performance. Changing the optimizer or learning rate typically does not affect memory usage as significantly as the batch size does.
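
As an illustration of the recommended fix, a hedged sketch of retrying `model.fit()` with a smaller batch size; the halving loop and the retry-on-exception pattern are assumptions, not part of the official answer.

```python
import tensorflow as tf

# Keep everything else fixed and halve the batch size after each OOM
# until the step fits in GPU memory. Starting at 64 matches the question;
# the halving strategy itself is an assumption.
batch_size = 64
while batch_size >= 1:
    try:
        model.fit(x_train, y_train,
                  batch_size=batch_size,  # smaller batches -> smaller activation tensors per step
                  epochs=10,
                  verbose=2)
        break
    except tf.errors.ResourceExhaustedError:
        batch_size //= 2                  # e.g. 64 -> 32 -> 16 ...
        print(f"OOM at the previous batch size; retrying with batch_size={batch_size}")
```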

Discussion

15 comments
maartenalexander | Option: B
Jun 22, 2021

B. I think you want to reduce the batch size. Learning rate and optimizer shouldn't really impact memory utilisation. Decreasing image size (A) would work, but might be costly in terms of final performance.

guruguru | Option: B
Jul 24, 2021

B. https://stackoverflow.com/questions/59394947/how-to-fix-resourceexhaustederror-oom-when-allocating-tensor/59395251#:~:text=OOM%20stands%20for%20%22out%20of,in%20your%20Dense%20%2C%20Conv2D%20layers

mousseUwU | Option: B
Oct 20, 2021

B is correct; it uses less memory. A works too, but depending on what you need you will lose performance (just like maartenalexander said), so I think it is not recommended.

kaike_reis | Option: B
Nov 13, 2021

B is correct. Option D could be used, since reducing the image size also reduces memory, but it will directly impact the model's performance. Another point: if you are using a model built with Keras's `Functional API`, you would need to change the input definition and also pre-process the images to reduce their size. In other words: much more work than option B.
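
For context on kaike_reis's point, a rough sketch of what option D would involve with the Functional API versus option B; the 112×112 target size and the model layers are illustrative assumptions.

```python
import tensorflow as tf

# Option D: redefine the Input layer and resize every image before training.
inputs = tf.keras.Input(shape=(112, 112, 3))                  # was (224, 224, 3)
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(inputs)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(5, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

x_train_small = tf.image.resize(x_train, (112, 112))          # extra preprocessing step

# Option B, by contrast, is a single argument change:
# model.fit(x_train, y_train, batch_size=32, epochs=10, verbose=2)
```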

Mohamed_Mossad | Option: B
Jun 13, 2022

To fix the memory overflow you need to reduce the batch size. Reducing the input resolution is also valid, but a smaller image size can harm model performance, so the answer is B.

george_ognyanov | Option: B
Oct 5, 2021

Initially, I thought D (decreasing the image size) would be the correct one, but now that I am reviewing the test I think maartenalexander is correct in saying a reduced image size might decrease final performance, so I'd go with B.

alphard | Option: B
Dec 7, 2021

B is my option, but D does not seem wrong. Reducing the batch size or reducing the image size can both reduce memory usage; the former is just much easier.

seifou | Option: B
Nov 20, 2022

The answer is B. Since you are using SGD, you can use a batch size as small as 1. Ref: https://stackoverflow.com/questions/63139072/batch-size-for-stochastic-gradient-descent-is-length-of-training-data-and-not-1

M25 | Option: B
May 9, 2023

Went with B

John_Pongthorn | Option: B
Feb 28, 2023

Reduce the image shape != Reduce the image Size.

Fatiy | Option: A
Feb 28, 2023

Creating alerts to monitor for skew in the input data can help to detect when the distribution of the data has changed and the model's performance is affected. Once a skew is detected, retraining the model with the new data can improve its performance.

Fatiy
Feb 28, 2023

Sorry, this is not the response for this question; it's the response to the previous question.

Fatiy | Option: B
Feb 28, 2023

By reducing the batch size, the amount of memory required for each iteration of the training process is reduced.

SergioRubiano | Option: B
Mar 24, 2023

B is correct

SamuelTsch | Option: B
Jul 7, 2023

No doubt, went with B.

PhilipKoku | Option: B
Jun 6, 2024

B) Reduce the batch size.