AI-100 Exam Questions

AI-100 Exam - Question 160


Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are deploying an Azure Machine Learning model to an Azure Kubernetes Service (AKS) container.

You need to monitor the scoring accuracy of each run of the model.

Solution: You modify the scoring file.

Does this meet the goal?

Correct Answer: A

To monitor the scoring accuracy of each run of a model deployed to an Azure Kubernetes Service (AKS) container, you modify the scoring file to include code that logs inputs and predictions. This typically means adding logging to the scoring script, for example by using the Azure Machine Learning SDK's ModelDataCollector, or by writing output that Application Insights captures, so that predictions and metrics are recorded for each run and accuracy can be tracked over time.
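As a minimal sketch of the pattern described above: an Azure ML scoring script defines `init()` and `run()`, and the modification is to log each request's inputs and predictions from `run()`. The `_toy_model` function and its classification rule below are hypothetical stand-ins for the registered model a real `init()` would load; in an actual AKS deployment, printed lines are collected as trace telemetry once Application Insights is enabled on the web service.

```python
import json

# Hypothetical toy model standing in for the registered model that a real
# init() would load (e.g. with joblib.load() on the model path).
def _toy_model(rows):
    # Classify each row as 1 if its feature sum is positive, else 0.
    return [1 if sum(row) > 0 else 0 for row in rows]

def init():
    # Called once when the AKS container starts; load the model here.
    global model
    model = _toy_model

def run(raw_data):
    # Called for each scoring request with a JSON payload.
    data = json.loads(raw_data)["data"]
    predictions = model(data)
    # Logging inputs and predictions is the "modify the scoring file" step:
    # this printed line is captured by Application Insights (when enabled
    # on the service), letting you monitor each run's scoring behavior.
    print(json.dumps({"inputs": data, "predictions": predictions}))
    return json.dumps(predictions)
```

In the Azure ML SDK v1, Application Insights capture can be switched on for a deployed service with `update(enable_app_insights=True)` on the web service object; the logged input/prediction pairs can then be queried to assess accuracy once ground-truth labels are available.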

Discussion

5 comments
Sign in to comment
BwandoWando
May 29, 2021

This is a YES. If you download the actual code that Microsoft has shared on GitHub as DP-100 review material (https://microsoftlearning.github.io/mslearn-dp100/), you can go to the notebook "16 - Monitor a Model.ipynb". You need to MODIFY the scoring script: add a snippet of code so that, after you get the prediction from the model you are monitoring, you use Application Insights to log the prediction. So, the answer is yes.

Arockia
May 19, 2021

The answer is Yes. Open the scoring file and add the following import at the top of the file: from azureml.monitoring import ModelDataCollector

OzgurG
Apr 3, 2021

You configure Azure Application Insights.

claudiapatricia777
Nov 17, 2021

This is a NO; the solution is to use a Data Drift monitor. Over time, models can become less effective at predicting accurately due to changing trends in feature data. This phenomenon is known as data drift, and it's important to monitor your machine learning solution to detect it so you can retrain your models if necessary. https://github.com/MicrosoftLearning/mslearn-dp100/blob/main/17%20-%20Monitor%20Data%20Drift.ipynb

rveney
Jun 21, 2023

B. No. Modifying the scoring file does not meet the goal of monitoring the scoring accuracy of each run of the model. Instead, use Azure Machine Learning's built-in monitoring capabilities: enable monitoring for your models with the Azure Machine Learning SDK and configure the metrics to track. The Application Insights integration lets you monitor the performance and usage of your models, and you can log custom metrics and events from your scoring script, then view and analyze the logs in Azure Machine Learning Studio. Azure Machine Learning's model explainability features can also help you understand how your model makes predictions and identify potential issues with it.