Exam AI-102
Question 23

DRAG DROP -

You need to develop an automated call handling system that can respond to callers in their own language. The system will support only French and English.

Which Azure Cognitive Services service should you use to meet each requirement? To answer, drag the appropriate services to the correct requirements. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Select and Place:

    Correct Answer:

    Box 1: Text Analytics -

    The Language Detection feature of the Azure Text Analytics REST API evaluates text input for each document and returns language identifiers with a score that indicates the strength of the analysis.
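    For illustration only, a minimal C# sketch of language detection via the Azure.AI.TextAnalytics client library; the endpoint, key, and sample text below are placeholders, not part of the question:

    using System;
    using Azure;
    using Azure.AI.TextAnalytics;

    // Hypothetical endpoint and key for a Text Analytics / Language resource.
    var client = new TextAnalyticsClient(
        new Uri("https://<your-resource>.cognitiveservices.azure.com/"),
        new AzureKeyCredential("<your-key>"));

    // Detect the language of a short utterance transcribed from the call.
    DetectedLanguage language = client.DetectLanguage("Bonjour, j'ai besoin d'aide avec ma facture.");
    Console.WriteLine($"{language.Name} ({language.Iso6391Name}), confidence {language.ConfidenceScore}");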

    Incorrect Answers:

    Speaker Recognition, which verifies and identifies speakers by their unique voice characteristics, does not identify the language being spoken.

    Box 2: Translator -

    Translator is a cloud-based neural machine translation service that is part of the Azure Cognitive Services family of REST APIs. Translator can be used with any operating system and powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations.
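    For illustration, a hedged C# sketch of calling the Translator Text REST API (v3.0 /translate route) to produce a French response; the key, region, and sample text are placeholders:

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Text.Json;

    // Placeholder key and region for a Translator resource; the global endpoint is used.
    var body = JsonSerializer.Serialize(new[] { new { Text = "Your call is important to us." } });
    using var client = new HttpClient();
    using var request = new HttpRequestMessage(HttpMethod.Post,
        "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=fr");
    request.Headers.Add("Ocp-Apim-Subscription-Key", "<your-key>");
    request.Headers.Add("Ocp-Apim-Subscription-Region", "<your-region>");
    request.Content = new StringContent(body, Encoding.UTF8, "application/json");

    // The response is a JSON array containing the French translation.
    HttpResponseMessage response = await client.SendAsync(request);
    Console.WriteLine(await response.Content.ReadAsStringAsync());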

    Reference:

    https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-language-detection
    https://docs.microsoft.com/en-us/azure/cognitive-services/translator/translator-overview

Discussion
Khiem

It should be:
- Speech to Text with AutoDetectSourceLanguageConfig. It can't be Text Analytics because the input is the callers' voice.
- Text to Speech: the output is voice.
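For what it's worth, a minimal C# sketch of Speech to Text with AutoDetectSourceLanguageConfig limited to English and French (the key, region, and audio source are placeholders, not from the question):

using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

// Placeholder key/region for a Speech resource.
var speechConfig = SpeechConfig.FromSubscription("<your-key>", "<your-region>");

// The system only supports English and French, so list just those candidates.
var autoDetect = AutoDetectSourceLanguageConfig.FromLanguages(new[] { "en-US", "fr-FR" });

using var audio = AudioConfig.FromDefaultMicrophoneInput();
using var recognizer = new SpeechRecognizer(speechConfig, autoDetect, audio);

// Recognize one utterance and read back which language was detected.
SpeechRecognitionResult result = await recognizer.RecognizeOnceAsync();
var detected = AutoDetectSourceLanguageResult.FromResult(result);
Console.WriteLine($"Detected {detected.Language}: {result.Text}");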

rdemontis

I agree with you

PeteColag

I agree with your answer. What is missing in this question is an explanation of how the response text (to feed the text to speech) is being generated. I am assuming this would be based on LUIS or CLU?

Toby86

private static async Task Respond(SpeechRecognitionResult result, SpeechConfig speechConfig, Random random)
{
    if (result.Reason.Equals(ResultReason.RecognizedSpeech))
    {
        // Detect the language of the transcribed utterance with Text Analytics.
        var client = new TextAnalyticsClient(languageDetectorEndpoint, credentials);
        DetectedLanguage detectedLanguage = client.DetectLanguage(result.Text);
        // ...
    }
}

Eltooth

Speech-to-Text: https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-to-text
Text-to-Speech: https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/text-to-speech
Both support common languages, including French: https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support?tabs=speechtotext

zellck

1. Speech to Text
2. Text to Speech
https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/language-identification?tabs=once&pivots=programming-language-csharp#speech-to-text
You use speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.
https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/text-to-speech
Text to speech enables your applications, tools, or devices to convert text into humanlike synthesized speech. The text to speech capability is also known as speech synthesis. Use humanlike prebuilt neural voices out of the box, or create a custom neural voice that's unique to your product or brand.
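As a rough sketch of the response side (the voice name, key, and region below are assumptions, not from the thread), text to speech with the Speech SDK could look like this:

using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

// Placeholder key/region; pick a voice matching the detected language.
var speechConfig = SpeechConfig.FromSubscription("<your-key>", "<your-region>");
speechConfig.SpeechSynthesisVoiceName = "fr-FR-DeniseNeural"; // assumed French neural voice

// Speak the reply back to the caller in French.
using var synthesizer = new SpeechSynthesizer(speechConfig);
SpeechSynthesisResult result = await synthesizer.SpeakTextAsync("Merci de votre appel. Comment puis-je vous aider ?");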

dacchione

Language identification (LID) use cases include:
- Speech to text recognition, when you need to identify the language in an audio source and then transcribe it to text.
- Speech translation, when you need to identify the language in an audio source and then translate it to another language.
https://learn.microsoft.com/en-us/azure/ai-services/speech-service/language-identification?tabs=once&pivots=programming-language-python
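For the speech translation use case, a hedged C# sketch with the Speech SDK (without the language identification piece; the key, region, and language choices are placeholders):

using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Translation;

// Placeholder key/region; recognize French speech and translate it to English.
var translationConfig = SpeechTranslationConfig.FromSubscription("<your-key>", "<your-region>");
translationConfig.SpeechRecognitionLanguage = "fr-FR";
translationConfig.AddTargetLanguage("en");

using var recognizer = new TranslationRecognizer(translationConfig);
TranslationRecognitionResult result = await recognizer.RecognizeOnceAsync();
Console.WriteLine($"Recognized: {result.Text}");
Console.WriteLine($"Translated: {result.Translations["en"]}");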

james2033

This question is out of date by now; see https://learn.microsoft.com/en-us/azure/ai-services/language-service/language-detection/overview. Back then, Text Analytics was the correct answer.

varinder82

Final Answer: 1. Speech to Text 2. Text to Speech

Mehe323

People, read the descriptions: these describe only parts of the process, not the whole process.
1) DETECT incoming language has nothing to do with Speech to Text, but with Text Analytics. Yes, Speech to Text is part of the whole process, but that service TRANSCRIBES and does not do language detection.
2) Respond in the caller's language should be Text to Speech.

Mehe323

IGNORE my comment! Speech to text also does language detection: https://learn.microsoft.com/en-us/azure/ai-services/speech-service/language-identification?tabs=once&pivots=programming-language-csharp

SAMBIT

https://learn.microsoft.com/en-us/azure/synapse-analytics/machine-learning/tutorial-text-analytics-use-mmlspark

HaraTadahisa

1. Speech to Text 2. Text to Speech

reiwanotora

1. Speech to Text 2. Text to Speech

anto69

ChatGPT: speech to text + text to speech

Tempeck

The answer should be Text Analytics and TTS.

nanaw770

1. Speech to Text
2. Text to Speech
From the Takedajuku perspective, if you study for 4 days and spend 2 days reviewing, you will have a better chance of passing the exam.

audlindr

It can't be Text Analytics, since it is an incoming call. It should be Speech to Text: https://learn.microsoft.com/en-us/azure/ai-services/speech-service/language-identification?tabs=once&pivots=programming-language-csharp
And then Text to Speech for the response.

suzanne_exam

Speech to Text - as it's based on a voice call.
Text to Speech - it's not Translator here, as the key thing is that the program needs to respond, not just translate.

endeesa

To detect the incoming language you need Text Analytics; Speech to Text does not have an option to detect the language.

CauchyLee

POST /cognitiveservices/v1/speechtotext/recognition HTTP/1.1
Ocp-Apim-Subscription-Key: <subscription_key>
Content-Type: application/json

{
  "config": { "language": "auto", "enableSeparation": true },
  "format": "audio-16khz-128kbitrate-mono-mp3",
  "audio": <binary_audio_file>
}

ziggy1117

A. Speech to Text with AutoDetectSourceLanguageConfig: https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/language-identification?tabs=once&pivots=programming-language-csharp
B. Text to Speech