Professional Machine Learning Engineer Exam Questions

Professional Machine Learning Engineer Exam - Question 40


You have a functioning end-to-end ML pipeline that involves tuning the hyperparameters of your ML model using AI Platform, and then using the best-tuned parameters for training. Hypertuning is taking longer than expected and is delaying the downstream processes. You want to speed up the tuning job without significantly compromising its effectiveness. Which actions should you take? (Choose two.)

A. Decrease the number of parallel trials.
B. Decrease the range of floating-point values.
C. Set the early stopping parameter to TRUE.
D. Change the search algorithm from Bayesian search to random search.
E. Decrease the maximum number of trials during subsequent training phases.

Correct Answer: CE

To speed up the hyperparameter tuning job without significantly compromising its effectiveness, you should set the early stopping parameter to TRUE (C) and decrease the maximum number of trials during subsequent training phases (E). Enabling early stopping lets the service terminate trials that are clearly not promising, so compute is not wasted on them and the job finishes sooner. Decreasing the maximum number of trials caps the total number of trials that must run, which directly shortens the tuning job while the Bayesian search can still converge on good values.
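For illustration, here is a minimal sketch of what those two changes could look like in an AI Platform Training job spec, assuming the legacy googleapiclient-based workflow; the project, bucket, package, metric tag, and parameter names are placeholders, not details from the question.

```python
# Minimal sketch (not the exam's official solution) of an AI Platform Training
# job with the two chosen speed-ups applied: early stopping (option C) and a
# lower maxTrials (option E). All names and values below are illustrative.
from googleapiclient import discovery

training_inputs = {
    "scaleTier": "BASIC_GPU",
    "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],  # hypothetical package
    "pythonModule": "trainer.task",
    "region": "us-central1",
    "hyperparameters": {
        "goal": "MAXIMIZE",
        "hyperparameterMetricTag": "accuracy",  # assumed metric reported by the trainer
        "maxTrials": 20,          # option E: a reduced trial budget
        "maxParallelTrials": 5,   # keep parallelism up, not down (rules out A)
        "enableTrialEarlyStopping": True,  # option C: stop unpromising trials early
        "params": [
            {
                "parameterName": "learning_rate",
                "type": "DOUBLE",
                "minValue": 0.0001,
                "maxValue": 0.1,
                "scaleType": "UNIT_LOG_SCALE",
            }
        ],
    },
}

ml = discovery.build("ml", "v1")
job = {"jobId": "hp_tuning_job_01", "trainingInput": training_inputs}
request = ml.projects().jobs().create(parent="projects/my-project", body=job)
response = request.execute()
```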

Discussion

17 comments
gcp2021go · Options: CE
Aug 2, 2021

I think it should be CE. I can't find any reference showing that B can reduce tuning time.

Paul_Dirac · Options: BC
Jun 26, 2021

Answer: B & C (Ref: https://cloud.google.com/ai-platform/training/docs/using-hyperparameter-tuning). (A) Decreasing the number of parallel trials will increase tuning time. (D) Bayesian search works better and faster than random search, since it is selective about which points to evaluate and uses knowledge of previously evaluated points. (E) maxTrials should be larger than 10x the number of hyperparameters used, and covering that minimum space (10*num_hyperparams) already takes some time, so lowering maxTrials has little effect on reducing tuning time.
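A rough back-of-the-envelope sketch of the arithmetic behind these points; the trial counts and durations are invented, and early stopping and scheduling overhead are ignored:

```python
# Rough sketch of the trade-offs discussed above, using made-up numbers.
import math

def estimated_wall_clock_hours(max_trials, max_parallel_trials, avg_trial_hours):
    """Approximate tuning time if trials of similar length run in parallel waves."""
    return math.ceil(max_trials / max_parallel_trials) * avg_trial_hours

num_hyperparameters = 3
suggested_min_trials = 10 * num_hyperparameters  # rule of thumb cited above

# Option A (fewer parallel trials) makes the job slower, not faster:
print(estimated_wall_clock_hours(30, 5, 1.0))  # 6.0 hours
print(estimated_wall_clock_hours(30, 2, 1.0))  # 15.0 hours

# Option E (fewer max trials) shortens the job, but only down to the suggested floor:
print(estimated_wall_clock_hours(60, 5, 1.0))                    # 12.0 hours
print(estimated_wall_clock_hours(suggested_min_trials, 5, 1.0))  # 6.0 hours
```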

dxxdd7
Sep 4, 2021

In your link, where they mention maxTrials, they say that "in most cases there is a point of diminishing returns after which additional trials have little or no effect on the accuracy." They also say that it can affect time and cost. I think I'd rather go with CE.

Goosemoose
Jun 6, 2024

Bayesian search should cost more time: it can converge in fewer iterations than the other algorithms, but not necessarily in less wall-clock time, because its trials depend on one another and therefore must run sequentially.

Voyager2 · Options: CE
Jun 7, 2023

C & E. This video explains max trials and parallel trials very well: https://youtu.be/8hZ_cBwNOss. For early stopping, see https://cloud.google.com/ai-platform/training/docs/using-hyperparameter-tuning#early-stopping

Sum_Sum · Options: CD
Nov 15, 2023

ChatGPT says: C. Set the early stopping parameter to TRUE. Early stopping: enabling early stopping allows the tuning process to terminate a trial if it becomes clear that it is not producing promising results. This prevents wasting time on unpromising trials and can significantly speed up the hyperparameter tuning process, focusing resources on more promising parameter combinations. D. Change the search algorithm from Bayesian search to random search. Random search: unlike Bayesian optimization, random search doesn't attempt to build a model of the objective function. While Bayesian search can be more efficient at finding the optimal parameters, random search is often faster per iteration. Random search can be particularly effective when the hyperparameter space is large, as it doesn't require as much computation to select the next set of parameters to evaluate.
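For what it's worth, option D as described here would be a one-field change in the same HyperparameterSpec: the algorithm field accepts RANDOM_SEARCH, while leaving it unset uses the default Bayesian optimization. A hedged sketch with illustrative values only; whether this trade-off is acceptable is exactly what the thread is debating:

```python
# Sketch of option D: switch the tuning algorithm to random search.
# Metric and parameter names are placeholders, not from the question.
hyperparameters = {
    "goal": "MAXIMIZE",
    "hyperparameterMetricTag": "accuracy",  # assumed metric name
    "maxTrials": 20,
    "maxParallelTrials": 5,
    "enableTrialEarlyStopping": True,
    "algorithm": "RANDOM_SEARCH",  # default (unspecified) uses Bayesian optimization
    "params": [
        {
            "parameterName": "learning_rate",
            "type": "DOUBLE",
            "minValue": 0.0001,
            "maxValue": 0.1,
            "scaleType": "UNIT_LOG_SCALE",
        }
    ],
}
```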

fragkris · Options: CD
Dec 5, 2023

I chose C and D

pawan94 · Options: CE
Jan 7, 2024

C and E, if you reference the latest docs for hyperparameter tuning jobs on Vertex AI: 1. A is not possible (refer: https://cloud.google.com/vertex-ai/docs/training/using-hyperparameter-tuning#:~:text=the%20benefit%20of%20reducing%20the%20time%20the); if you reduce the number of parallel trials, overall completion gets slower. 2. The question is about how to speed up the process, not about changing the model params; changing the optimization algorithm could lead to unexpected results. So in my opinion C and E (after carefully reading the updated docs), and please don't believe everything ChatGPT says. I have encountered many questions where LLMs give completely wrong answers.
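Since this comment points at the Vertex AI docs rather than the legacy AI Platform ones, here is a minimal sketch of the equivalent setup with the google-cloud-aiplatform SDK, assuming a custom training container; the project, bucket, image, metric, and parameter names are placeholders.

```python
# Hedged sketch of a Vertex AI hyperparameter tuning job; all names are illustrative.
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt

aiplatform.init(project="my-project", location="us-central1",
                staging_bucket="gs://my-bucket")

worker_pool_specs = [{
    "machine_spec": {"machine_type": "n1-standard-4"},
    "replica_count": 1,
    "container_spec": {"image_uri": "gcr.io/my-project/trainer:latest"},  # hypothetical image
}]

custom_job = aiplatform.CustomJob(
    display_name="trainer",
    worker_pool_specs=worker_pool_specs,
)

hpt_job = aiplatform.HyperparameterTuningJob(
    display_name="hp-tuning",
    custom_job=custom_job,
    metric_spec={"accuracy": "maximize"},  # assumed metric reported by the trainer
    parameter_spec={
        "learning_rate": hpt.DoubleParameterSpec(min=1e-4, max=1e-1, scale="log"),
    },
    max_trial_count=20,      # keep the trial budget modest (option E)
    parallel_trial_count=5,  # more parallelism cuts wall-clock time, per the linked docs
)
hpt_job.run()
```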

Mohamed_Mossad · Options: CE
Jul 9, 2022

Answer: C, E. Explanation:
A. Decrease the number of parallel trials: doing this will of course make hypertuning take more time; we need to increase parallel trials, not decrease them.
B. Decrease the range of floating-point values: theoretically this should speed up the computation, but it is not the most correct answer.
C. Set the early stopping parameter to TRUE: a very good option.
D. Change the search algorithm from Bayesian search to random search: changing the search algorithm will not have a great impact.
E. Decrease the maximum number of trials during subsequent training phases: a very good option.

Fatiy · Options: AD
Feb 28, 2023

The two actions that can speed up hyperparameter tuning without compromising effectiveness are decreasing the number of parallel trials and changing the search algorithm from Bayesian search to random search.

pinimichele01 · Options: CE
Apr 14, 2024

see pawan94

David_ml · Options: CE
May 9, 2022

CE for me.

shankalman717 · Options: CD
Feb 21, 2023

B. Decrease the range of floating-point values: reducing the range of the hyperparameters will decrease the search space and the time it takes to find the optimal hyperparameters. However, if the range is too narrow, it may not be possible to find the best hyperparameters.
C. Set the early stopping parameter to TRUE: setting the early stopping parameter to true will stop a trial when performance has stopped improving. This helps reduce the number of trials needed and thus speeds up the hypertuning job without compromising its effectiveness.
D. Changing the search algorithm from Bayesian search to random search could also be a valid action to speed up the hypertuning job. Random search can explore the hyperparameter space more efficiently and with less computation cost compared to Bayesian search, especially when the search space is large and complex. However, it may not be as effective as Bayesian search at finding the best hyperparameters in some cases.

tavva_prudhvi
Mar 8, 2023

D might not be the correct option: random search might be faster, but there is a chance of decreased accuracy, and that violates the question's requirement not to compromise effectiveness!

Yajnas_arpohc · Options: DE
Mar 24, 2023

Early stopping is for training, not hyperparameter tuning

kucuk_kagan · Options: AD
Mar 31, 2023

To speed up the tuning job without significantly compromising its effectiveness, you can take the following actions: A. Decrease the number of parallel trials: By reducing the number of parallel trials, you can limit the amount of computational resources being used at a given time, which may help speed up the tuning job. However, reducing the number of parallel trials too much could limit the exploration of the parameter space and result in suboptimal results. D. Change the search algorithm from Bayesian search to random search: Bayesian optimization is a computationally intensive method that requires more time and resources than random search. By switching to a simpler method like random search, you may be able to speed up the tuning job without compromising its effectiveness. However, random search may not be as efficient in finding the best hyperparameters as Bayesian optimization.

M25 · Options: CE
May 9, 2023

Went with C & E

CloudKida · Options: AC
May 9, 2023

Running parallel trials has the benefit of reducing the time the training job takes (real time—the total processing time required is not typically changed). However, running in parallel can reduce the effectiveness of the tuning job overall. That is because hyperparameter tuning uses the results of previous trials to inform the values to assign to the hyperparameters of subsequent trials. When running in parallel, some trials start without having the benefit of the results of any trials still running. You can specify that AI Platform Training must automatically stop a trial that has become clearly unpromising. This saves you the cost of continuing a trial that is unlikely to be useful. To permit stopping a trial early, set the enableTrialEarlyStopping value in the HyperparameterSpec to TRUE.

rexduo · Options: CE
May 21, 2023

A increases time; for B, the bottleneck of an HP tuning job is normally not the model size; D does reduce time, but might significantly hurt effectiveness.

PhilipKoku · Options: CD
Jun 6, 2024

C) and D)