
Professional Machine Learning Engineer Exam - Question 195


You work for a retail company. You have been asked to develop a model to predict whether a customer will purchase a product on a given day. Your team has processed the company’s sales data and created a table with the following columns:

• Customer_id

• Product_id

• Date

• Days_since_last_purchase (measured in days)

• Average_purchase_frequency (measured in 1/days)

• Purchase (binary label indicating whether the customer purchased the product on the Date)

You need to interpret your model’s results for each individual prediction. What should you do?

A. Create a BigQuery table. Use BigQuery ML to build a boosted tree classifier, and inspect the partition rules of the trees to understand how each prediction is made.

B. Create a Vertex AI tabular dataset. Train an AutoML model to predict customer purchases. Deploy the model to a Vertex AI endpoint and enable feature attributions. Use the “explain” method to get feature attribution values for each individual prediction.

C. Create a BigQuery table. Use BigQuery ML to build a logistic regression classification model, and use the coefficients of the model to interpret the weight of each feature.

D. Create a Vertex AI tabular dataset. Train an AutoML model to predict customer purchases. Deploy the model to a Vertex AI endpoint, and enable L1 regularization to detect non-informative features.

Correct Answer: B

To interpret your model's results for each individual prediction, you should create a Vertex AI tabular dataset, train an AutoML model to predict customer purchases, deploy the model to a Vertex AI endpoint, and enable feature attributions. Use the 'explain' method to get feature attribution values for each individual prediction. This approach provides fine-grained insights into how each feature impacts the model's predictions, enhancing the interpretability and transparency of the model's decision-making process.
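For concreteness, here is a minimal sketch of the “explain” call with the Vertex AI SDK, assuming the AutoML model has already been deployed to an endpoint with feature attributions enabled; the project, region, endpoint ID, and feature values below are placeholders:

```python
from google.cloud import aiplatform

# Placeholder project/region/endpoint. The model must have been deployed
# with explanations (feature attributions) enabled on this endpoint.
aiplatform.init(project="my-project", location="us-central1")
endpoint = aiplatform.Endpoint("1234567890")

# One instance per prediction; keys match the training table's columns.
instance = {
    "customer_id": "C123",
    "product_id": "P456",
    "date": "2024-01-13",
    "days_since_last_purchase": "7",
    "average_purchase_frequency": "0.2",
}

response = endpoint.explain(instances=[instance])

# Each explanation carries attribution values showing how much each
# feature pushed this individual prediction up or down.
for explanation in response.explanations:
    for attribution in explanation.attributions:
        print(attribution.feature_attributions)
```

The key point is that attributions are returned per request instance, which is exactly what gives per-prediction interpretability.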

Discussion

6 comments
BlehMaks (Option: B)
Jan 14, 2024

B is correct

fitri001 (Option: B)
Apr 21, 2024

Individual prediction explanation: Vertex AI feature attributions provide insights into how each feature (e.g., days_since_last_purchase, average_purchase_frequency) contributes to a specific prediction for a customer-product combination. This allows you to understand the rationale behind the model's prediction for each instance.

AutoML convenience: AutoML simplifies model training without extensive configuration.

fitri001
Apr 21, 2024

A. BigQuery ML with boosted trees: while BigQuery ML can build boosted tree models, interpreting individual predictions by inspecting partition rules is cumbersome and less intuitive than feature attributions.

C. BigQuery ML logistic regression: logistic regression coefficients indicate feature importance, but they don't directly explain how a specific feature value influences a single prediction.

D. L1 regularization: L1 regularization can help identify potentially unimportant features during training, but it doesn't directly explain individual predictions.
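To illustrate why option C falls short, here is a quick sketch (dataset and model names are placeholders) of what BigQuery ML actually exposes: ML.WEIGHTS returns one global coefficient per feature, not a per-instance breakdown:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project

# ML.WEIGHTS lists one learned coefficient per input feature.
query = """
SELECT processed_input, weight
FROM ML.WEIGHTS(MODEL `my_dataset.purchase_logreg`)
"""

for row in client.query(query).result():
    # The same weight applies to every prediction, so coefficients
    # describe the model globally rather than any single prediction.
    print(row["processed_input"], row["weight"])
```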

pikachu007 (Option: B)
Jan 13, 2024

Individual prediction interpretability: feature attributions specifically address the need to understand how features contribute to individual predictions, providing fine-grained insights.

Vertex AI integration: Vertex AI offers seamless integration of feature attributions with AutoML models, simplifying the process.

Model flexibility: AutoML can explore various model architectures, potentially finding the most suitable one for this task, while still providing interpretability.

36bdc1e (Option: B)
Jan 13, 2024

B. Local interpretability: use the “explain” method to get feature attribution values for each individual prediction.

ddogg (Option: B)
Feb 1, 2024

Vertex AI feature attributions: This is the most direct approach. By enabling feature attributions, you get explanations for each prediction, highlighting how individual features contribute to the model's output. This is crucial for understanding specific customer purchase predictions.

LaxmanTiwari (Option: B)
Jun 30, 2024

" simplest approach", the option B is the best choice.