
Professional Machine Learning Engineer Exam - Question 154


You are developing a classification model to support predictions for your company’s various products. The dataset you were given for model development has class imbalance. You need to minimize false positives and false negatives. What evaluation metric should you use to properly train the model?

Correct Answer: A

When dealing with class imbalance and the need to minimize both false positives and false negatives, the F1 score is a suitable evaluation metric. The F1 score is the harmonic mean of precision and recall, effectively balancing the trade-off between falsely identifying negative cases as positive (false positives) and missing actual positive cases (false negatives). This balance makes the F1 score particularly effective for evaluating models where both types of errors are important to minimize.
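
To make the harmonic-mean mechanics concrete, here is a minimal sketch in plain Python (the TP/FP/FN counts are made up for illustration):

```python
# Minimal sketch: F1 as the harmonic mean of precision and recall,
# computed from raw confusion-matrix counts (illustrative numbers only).
def f1(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)  # share of positive predictions that are correct
    recall = tp / (tp + fn)     # share of actual positives that are found
    return 2 * precision * recall / (precision + recall)

# 80 true positives, 20 false positives, 40 false negatives
print(f1(tp=80, fp=20, fn=40))  # ~0.727: penalized by both FP and FN
```

Note that F1 drops if either error count grows, which is exactly why it suits the "minimize both" requirement.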

Discussion

7 comments
Antmal | Option: A
May 12, 2023

If there weren't a class imbalance, C (Accuracy) would have been the right answer. Here, A (F1 score), the harmonic mean of precision and recall, balances the trade-off between the two. It is useful when both false positives and false negatives matter, as in the question at hand, and you want to optimize for both.
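
Antmal's point can be checked with a short sketch (assuming scikit-learn is available; the labels are synthetic): on balanced data accuracy is an honest signal, but on imbalanced data a trivial majority-class predictor inflates it.

```python
# Sketch: the same "always predict the majority class" strategy scores
# 0.50 accuracy on balanced data but 0.95 on imbalanced data.
from sklearn.metrics import accuracy_score

balanced_true = [1] * 50 + [0] * 50    # 50/50 split
imbalanced_true = [1] * 5 + [0] * 95   # 5% positives

print(accuracy_score(balanced_true, [0] * 100))    # 0.50 -- honest signal
print(accuracy_score(imbalanced_true, [0] * 100))  # 0.95 -- misleadingly high
```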

fitri001 | Option: A
Apr 22, 2024

Class imbalance: When dealing with imbalanced data, metrics like accuracy can be misleading. A model that simply predicts the majority class all the time can achieve high accuracy, but it wouldn't be very useful for identifying the minority class (which is likely more important in this scenario).

F1 score: The F1 score is the harmonic mean of precision and recall. Precision measures the proportion of positive predictions that are actually correct, while recall measures the proportion of actual positive cases that are correctly identified. By considering both metrics, the F1 score provides a balanced view of the model's performance on the positive class.

Minimizing false positives and false negatives: Since a high F1 score indicates a good balance between precision and recall, it translates to minimizing both false positives (incorrect positive predictions) and false negatives (missed positive cases).
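
The precision and recall definitions above translate directly into code; a sketch using scikit-learn (labels invented for illustration):

```python
# Sketch of the definitions above: precision = TP / (TP + FP),
# recall = TP / (TP + FN), read off a confusion matrix (synthetic labels).
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]  # one miss (FN), one false alarm (FP)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tp / (tp + fp))  # precision: 2 of 3 positive predictions were correct
print(tp / (tp + fn))  # recall: 2 of 3 actual positives were found
```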

nescafe7 | Option: A
May 24, 2023

class imbalance = F1 score

SamuelTsch | Option: A
Jul 8, 2023

F1 should be correct

PST21 | Option: B
Jul 20, 2023

Both recall and F1 score are valuable metrics, but based on the question's specific requirement to minimize false positives and false negatives, recall (Option B) is the best answer. It directly focuses on reducing false negatives, which is crucial when dealing with class imbalance and minimizing the risk of missing important positive cases.

PST21 | Option: B
Jul 20, 2023

Recall (true positive rate): It measures the ability of the model to correctly identify all positive instances out of the total actual positive instances. High recall means fewer false negatives, which is desired when minimizing the risk of missing important positive cases.

F1 score: It is the harmonic mean of precision and recall. The F1 score gives equal weight to both precision and recall and is suitable when you want a balanced metric. However, it might not be the best choice when the primary focus is on minimizing false positives and false negatives.
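
The trade-off this comment describes can be made concrete (scikit-learn assumed; labels synthetic): a model that flags everything as positive achieves perfect recall, while F1 also accounts for the resulting false positives.

```python
# Sketch: perfect recall can coexist with a poor model; F1 catches this.
from sklearn.metrics import f1_score, recall_score

y_true = [1] * 10 + [0] * 90
y_pred = [1] * 100                   # flag every instance as positive

print(recall_score(y_true, y_pred))  # 1.00 -- zero false negatives
print(f1_score(y_true, y_pred))      # ~0.18 -- 90 false positives drag it down
```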

AzureDP900 | Option: A
Jun 21, 2024

In this case, you want to minimize both false positives and false negatives. The F1 score takes into account true positives, false positives, and false negatives, making it a suitable choice for evaluating your model.