Exam AI-102
Question 213

DRAG DROP -

You are using a Language Understanding service to handle natural language input from the users of a web-based customer agent.

The users report that the agent frequently responds with the following generic response: "Sorry, I don't understand that."

You need to improve the ability of the agent to respond to requests.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Select and Place:

    Correct Answer:

    Step 1: Add prebuilt domain models as required.

    Prebuilt models provide domains, intents, utterances, and entities. You can start your app with a prebuilt model or add a relevant model to your app later.

    Note: Language Understanding (LUIS) provides prebuilt domains, which are pre-trained models of intents and entities that work together for domains or common categories of client applications.

    The prebuilt domains are trained and ready to add to your LUIS app. The intents and entities of a prebuilt domain are fully customizable once you've added them to your app.
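
    As a rough illustration, a prebuilt domain can be attached to an app version through the LUIS authoring REST API. The sketch below is a minimal example under stated assumptions, not a definitive implementation: the request path follows the v2.0 authoring "add custom prebuilt domain" operation as best understood, and the endpoint, key, app ID, version, and "Calendar" domain are placeholders to verify against the LUIS reference.

        import requests

        # Placeholder values: substitute your own LUIS authoring resource details.
        AUTHORING_ENDPOINT = "https://westus.api.cognitive.microsoft.com"
        AUTHORING_KEY = "<your-authoring-key>"
        APP_ID = "<your-app-id>"
        VERSION_ID = "0.1"

        def add_prebuilt_domain(domain_name: str) -> None:
            """Attach a prebuilt domain (for example 'Calendar') to the app version."""
            # Path assumed from the v2.0 authoring "Add Custom Prebuilt Domain" operation.
            url = (f"{AUTHORING_ENDPOINT}/luis/api/v2.0/apps/{APP_ID}"
                   f"/versions/{VERSION_ID}/customprebuiltdomains")
            response = requests.post(
                url,
                headers={"Ocp-Apim-Subscription-Key": AUTHORING_KEY},
                json={"domainName": domain_name},
            )
            response.raise_for_status()

        add_prebuilt_domain("Calendar")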

    Step 2: Enable active learning

    To enable active learning, you must log user queries. This is accomplished by calling the prediction endpoint with the log=true query string parameter.
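
    A minimal sketch of such an endpoint call, assuming the v3.0 prediction API: the detail that matters here is the log=true query string parameter, which stores the query so active learning can later suggest it for review. The endpoint, key, app ID, and sample utterance are placeholders.

        import requests

        # Placeholder values: substitute your own LUIS prediction resource details.
        PREDICTION_ENDPOINT = "https://westus.api.cognitive.microsoft.com"
        PREDICTION_KEY = "<your-prediction-key>"
        APP_ID = "<your-app-id>"

        def predict(utterance: str) -> dict:
            """Query the published app; log=true stores the query for active learning."""
            url = (f"{PREDICTION_ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}"
                   f"/slots/production/predict")
            response = requests.get(
                url,
                params={
                    "query": utterance,
                    "subscription-key": PREDICTION_KEY,
                    "log": "true",  # without this, queries are not kept for review
                },
            )
            response.raise_for_status()
            return response.json()

        print(predict("I want to cancel my order"))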

    Step 3: Train and republish the Language Understanding model

    The process of reviewing endpoint utterances for correct predictions is called active learning. Active learning captures endpoint queries and selects the user utterances it is unsure of. You review these utterances to select the intent and mark entities for these real-world utterances, accept the changes into your example utterances, and then train and publish. LUIS then identifies utterances more accurately.
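
    A rough sketch of that review-then-retrain loop, assuming the v2.0 authoring API's add-example, train, and publish operations. The request paths, the shape of the training-status response, and the "TrackOrder" intent are assumptions to check against the LUIS reference rather than a definitive implementation.

        import time
        import requests

        # Placeholder values: substitute your own LUIS authoring resource details.
        AUTHORING_ENDPOINT = "https://westus.api.cognitive.microsoft.com"
        AUTHORING_KEY = "<your-authoring-key>"
        APP_ID = "<your-app-id>"
        VERSION_ID = "0.1"
        HEADERS = {"Ocp-Apim-Subscription-Key": AUTHORING_KEY}
        BASE = f"{AUTHORING_ENDPOINT}/luis/api/v2.0/apps/{APP_ID}"

        def add_reviewed_utterance(text: str, intent_name: str) -> None:
            """Accept a reviewed endpoint utterance as a labelled example utterance."""
            requests.post(
                f"{BASE}/versions/{VERSION_ID}/example",
                headers=HEADERS,
                json={"text": text, "intentName": intent_name},
            ).raise_for_status()

        def train_and_publish() -> None:
            """Retrain the version, wait for training to finish, then republish."""
            requests.post(f"{BASE}/versions/{VERSION_ID}/train", headers=HEADERS).raise_for_status()
            while True:
                # Assumes the training-status response is a list of per-model records.
                status = requests.get(f"{BASE}/versions/{VERSION_ID}/train", headers=HEADERS).json()
                if all(m["details"]["status"] in ("Success", "UpToDate") for m in status):
                    break
                time.sleep(2)
            requests.post(
                f"{BASE}/publish",
                headers=HEADERS,
                json={"versionId": VERSION_ID, "isStaging": False},
            ).raise_for_status()

        add_reviewed_utterance("where is my parcel", "TrackOrder")  # hypothetical intent
        train_and_publish()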

    Incorrect Answers:

    Enable log collection by using Log Analytics

    Application authors can choose to enable logging on the utterances that are sent to a published application. This is not done through Log Analytics.

    Reference:

    https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-how-to-review-endpoint-utterances#log-user-queries-to-enable-active-learning
    https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-concept-prebuilt-model

Discussion
g2000

1. Enable active learning 2. Validate the utterances logged for review and modify the model 3. Train and republish the Language Understanding model. Not sure how the prebuilt model can improve accuracy...

Eltooth

Agreed

ninjia

Agreed

GigaCaster

But the question isn't asking for accuracy; it's asking to make the agent able to give a response instead of the current "Sorry, I don't understand that." response the customers are receiving. Thus the answer wants a short-term quick fix so that the team can work on the accuracy later, which leads to a prebuilt model being used.

rdemontis

agree with you

rdemontis

To improve the model, it's better to review the utterances logged from user feedback and then change the model. That way we can adapt it based on real utterances.

evangelist

The given answer is NOT correct. Why:
1. Enable active learning: LUIS's active learning feature automatically identifies vague or low-confidence user inputs that may require further training.
2. Review and add utterances logged for review: utterances collected through active learning are marked for review. Regularly reviewing and categorizing these utterances into the relevant intents is a crucial step in enhancing model performance.
3. Train and republish the Language Understanding model: after adding new utterances and potentially adjusting intents and entities, you need to retrain the LUIS model to incorporate these updates.

Eltooth

1. Enable active learning 2. Validate the utterances logged for review and modify the model 3. Train and republish the language understanding model

varinder82

Final Answer: 1. Enable active learning 2. Validate the utterances logged for review and modify the model 3. Train and republish the language understanding model

etellez

Copilot says: To improve the ability of the agent to respond to requests, you should perform the following actions in sequence:
1. Enable log collection using Log Analytics: this will allow you to collect and analyze the utterances that the agent is not understanding.
2. Enable active learning: active learning will help the model to learn from the utterances it didn't understand and improve over time.
3. Train and republish the Language Understanding model: after enabling active learning and collecting more data, you should retrain the model and publish it again. This will incorporate the new learnings into the model and should improve its ability to understand user requests.

etellez

Today the answer has been different: Enable active learning. Validate the utterances logged for review and modify the model. Train and republish the Language Understanding model.

SAMBIT

a) Add prebuilt domain models as required.
b) Validate the utterances logged for review and modify the model. (2)
c) Migrate authoring to an Azure resource authoring key.
d) Enable active learning.
e) Enable log collection by using Log Analytics. (1)
f) Train and republish the Language Understanding model. (3)

nanaw770

1. Enable active learning 2. Validate the utterances logged for review and modify the model 3. Train and republish the language understanding model

zellck

Same as Question 4. https://www.examtopics.com/discussions/microsoft/view/57673-exam-ai-102-topic-5-question-4-discussion