Exam Professional Machine Learning Engineer
Question 61

You are using transfer learning to train an image classifier based on a pre-trained EfficientNet model. Your training dataset has 20,000 images. You plan to retrain the model once per day. You need to minimize the cost of infrastructure. What platform components and configuration environment should you use?

    Correct Answer: D

To minimize infrastructure cost while retraining an image classifier based on a pre-trained EfficientNet model on 20,000 images daily, an AI Platform Training job with a custom scale tier, 4 V100 GPUs, and Cloud Storage is the best fit. AI Platform Training provisions resources only for the duration of each job, which is more cost-effective than the fixed, always-on resources of a VM or Kubernetes cluster. Cloud Storage is recommended over local storage or NFS for scalability and ease of access.
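The custom scale tier from the accepted answer can be expressed as a training-input config file passed at job submission. A minimal sketch, assuming AI Platform Training's `config.yaml` format; the machine type and region below are illustrative choices, not taken from the question:

```yaml
# config.yaml -- hypothetical custom scale tier for the daily retraining job
trainingInput:
  scaleTier: CUSTOM
  masterType: n1-standard-16        # illustrative machine type
  masterConfig:
    acceleratorConfig:
      count: 4
      type: NVIDIA_TESLA_V100
  region: us-central1               # illustrative region
```

The job could then be submitted with `gcloud ai-platform jobs submit training ... --config=config.yaml`; because resources exist only for the duration of the job, this supports the cost argument made above.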

Discussion
wish0035 (Option: D)

ans: D. A and C (local storage, NFS) are discarded: Google encourages you to use Cloud Storage. B could do the job, but the "daily training" requirement favors training jobs, because Vertex AI Training jobs are better suited for this. Also, Google usually encourages Vertex AI over self-managed VMs.

ares81 (Option: D)

It seems D to me.

hiromi (Option: D)

It seems D.

Mdso (Option: A)

I think it is A. Refer to Q20 of the GCP sample questions: they say managed services (such as Kubeflow Pipelines / Vertex AI) are not the option for "minimizing costs". In that case, you should configure your own infrastructure to train the model, leaving A and B. I'm undecided between the two: A would minimize costs, but local storage would also result in inefficient I/O operations during training.

tavva_prudhvi (Option: D)

The pre-trained EfficientNet model can be easily loaded from Cloud Storage, which eliminates the need for local storage or an NFS server. Using AI Platform Training allows for the automatic scaling of resources based on the size of the dataset, which can save costs compared to using a fixed-size VM or node pool. Additionally, the ability to use custom scale tiers allows for fine-tuning of resource allocation to match the specific needs of the training job.

shankalman717 (Option: B)

B. A Deep Learning VM with 4 V100 GPUs and Cloud Storage. For this scenario, a Deep Learning VM with 4 V100 GPUs and Cloud Storage is likely the most cost-effective solution that still provides sufficient compute for training. Cloud Storage allows the data to be stored and read in a scalable, cost-effective way. Option A, a Deep Learning VM with local storage, may not provide enough capacity for the training data and model checkpoints. Option C, a Kubernetes Engine cluster, is overkill for a job of this size and adds complexity. Option D, an AI Platform Training job, is a good option since it is designed for running machine learning jobs at scale, but it may be more expensive than a Deep Learning VM with Cloud Storage.

OzoneReloaded (Option: D)

I think it's D.

JeanEl (Option: B)

It's D

San1111111111 (Option: D)

D, because of automatic scaling.

PhilipKoku (Option: D)

D) is the best answer.

abhay669 (Option: D)

I'll go with D. How is C correct?

Mickey321 (Option: A)

D, as we need to minimize cost.

M25 (Option: D)

Went with D.

enghabeth (Option: D)

Because it's cheap.