You have an Azure Machine Learning model that is deployed to a web service.
You plan to publish the web service by using the name ml.contoso.com.
You need to recommend a solution to ensure that access to the web service is encrypted.
Which three actions should you recommend? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Correct Answer: BDE
To ensure that access to the web service is encrypted, three actions are required. First, obtain an SSL/TLS certificate for ml.contoso.com, which is necessary to enable HTTPS (Option B). Next, update the web service to use that certificate by setting the ssl_enabled parameter to True and supplying the certificate and key files (Option D). Finally, update the DNS so that ml.contoso.com resolves to the IP address of the scoring endpoint, allowing clients to reach the web service over HTTPS using the published name (Option E).
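A minimal configuration sketch, assuming the Azure ML SDK v1 (azureml-core) and an AKS deployment target; the certificate and key file paths are placeholders for the files obtained in Option B:

```python
# Sketch only: assumes azureml-core (SDK v1) and an AKS deployment target.
from azureml.core.webservice import AksWebservice

# Enable TLS on the scoring endpoint (Options B and D); the .pem paths are
# placeholders for the certificate issued for ml.contoso.com and its key.
aks_config = AksWebservice.deploy_configuration(
    ssl_enabled=True,
    ssl_cert_pem_file="cert.pem",  # certificate for ml.contoso.com
    ssl_key_pem_file="key.pem",    # matching private key
    ssl_cname="ml.contoso.com",    # name the DNS record must resolve (Option E)
)
```

After deployment, creating a DNS record that maps ml.contoso.com to the scoring endpoint's IP address completes Option E.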
Your company recently deployed several hardware devices that contain sensors.
The sensors generate new data on an hourly basis. The data generated is stored on-premises and retained for several years.
During the past two months, the sensors generated 300 GB of data.
You plan to move the data to Azure and then perform advanced analytics on the data.
You need to recommend an Azure storage solution for the data.
Which storage solution should you recommend?
Correct Answer: C
Azure Blob storage is well suited to large volumes of unstructured data such as sensor-generated files and logs. It is cost-effective for multi-year retention, particularly when cool or archive access tiers are applied to older data, and it scales easily beyond the roughly 150 GB per month the sensors currently produce. It also integrates with Azure analytics services such as HDInsight, Azure Databricks, and Azure Machine Learning, making it a suitable landing zone for the advanced analytics planned once the data is moved to Azure.
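A minimal upload sketch, assuming the azure-storage-blob package; the connection string, container name, and file paths are placeholders for this scenario:

```python
# Sketch only: assumes the azure-storage-blob package; all names below are
# placeholders for this scenario.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("sensor-data")

# Upload one hourly sensor file into a date-partitioned path, a common
# layout for downstream analytics tools.
with open("sensor_reading.json", "rb") as data:
    container.upload_blob(name="raw/2024/01/01/00.json", data=data)
```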
You plan to design an application that will use data from Azure Data Lake and perform sentiment analysis by using Azure Machine Learning algorithms.
The developers of the application use a mix of Windows- and Linux-based environments. The developers contribute to shared GitHub repositories.
You need all the developers to use the same tool to develop the application.
What is the best tool to use? More than one answer choice may achieve the goal.
Correct Answer: A
The best tool to use in this scenario is Microsoft Visual Studio Code. It is cross-platform, so it runs in both the Windows and Linux environments the developers use, and its built-in Git support and GitHub extensions let the team collaborate through the shared repositories. It also offers extensions for Azure services and Machine Learning development, covering the toolchain the application requires.
You have several AI applications that use an Azure Kubernetes Service (AKS) cluster. The cluster supports a maximum of 32 nodes.
You discover that occasionally and unpredictably, the application requires more than 32 nodes.
You need to recommend a solution to handle the unpredictable application load.
Which scaling method should you recommend?
Correct Answer: D
Azure Container Instances (ACI) can respond quickly to a spike in demand by provisioning additional compute when the existing Azure Kubernetes Service (AKS) cluster reaches its maximum capacity of 32 nodes. Through the virtual nodes add-on (built on the open-source Virtual Kubelet), AKS can schedule pods directly onto ACI, a pattern known as elastic bursting: when cluster capacity is exhausted, extra pods start on ACI in seconds without provisioning or managing additional servers, which effectively handles the unpredictable application load.
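A minimal scheduling sketch, assuming the official kubernetes Python client and an AKS cluster with the virtual nodes add-on enabled; the pod name, image, and selector/toleration values follow the AKS virtual-node documentation pattern and are assumptions here:

```python
# Sketch only: assumes the kubernetes Python client and the AKS virtual
# nodes (ACI) add-on; the pod name and image are placeholders.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig for the AKS cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="burst-worker"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="worker", image="<ai-app-image>")],
        # Target the ACI-backed virtual node; bursting setups typically gate
        # this behind scheduler preferences or a dedicated burst deployment.
        node_selector={"type": "virtual-kubelet"},
        tolerations=[
            client.V1Toleration(key="virtual-kubelet.io/provider", operator="Exists"),
            client.V1Toleration(key="azure.com/aci", effect="NoSchedule"),
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```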
You deploy an infrastructure for a big data workload.
You need to run Azure HDInsight and Microsoft Machine Learning Server. You plan to set the RevoScaleR compute contexts to run rx function calls in parallel.
What are three compute contexts that you can use for Machine Learning Server? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
Correct Answer: BCE
When deploying Machine Learning Server on Azure HDInsight, RevoScaleR compute contexts determine where and how rx function calls execute. The three valid compute contexts here are Spark, local parallel, and local sequential. The Spark compute context distributes rx computations across the HDInsight cluster and is the usual choice for big data workloads. The local parallel compute context runs rx function calls in parallel across the cores of the local (edge) node. The local sequential compute context is also a supported Machine Learning Server context, although it executes rx calls serially; it is typically used for development and testing before switching to a parallel context. HBase is a NoSQL database rather than a RevoScaleR compute context, and the SQL Server compute context does not apply to an HDInsight deployment.
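A minimal sketch, assuming the revoscalepy Python bindings shipped with Machine Learning Server, whose class and function names are assumed here to mirror the R API (in R, local parallel is RxLocalParallel); defaults are assumed to resolve the Spark cluster when run from an HDInsight edge node:

```python
# Sketch only: assumes revoscalepy (the Python counterpart to RevoScaleR);
# exact constructor arguments may differ by Machine Learning Server version.
from revoscalepy import RxLocalSeq, RxSpark, rx_set_compute_context

# Local sequential: rx function calls run serially on the local node.
rx_set_compute_context(RxLocalSeq())

# Spark: rx function calls are distributed across the HDInsight cluster;
# assumes defaults locate the cluster when run from the edge node.
rx_set_compute_context(RxSpark())
```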