Implementing an Azure Data Solution

Here you have the best Microsoft DP-200 practice exam questions

  • You have 228 total questions to study from
  • Each page has 5 questions, making a total of 46 pages
  • You can navigate through the pages using the buttons at the bottom
  • These questions were last updated on November 17, 2024
Question 1 of 228

You are a data engineer implementing a lambda architecture on Microsoft Azure. You use an open-source big data solution to collect, process, and maintain data.

The analytical data store performs poorly.

You must implement a solution that meets the following requirements:

✑ Provide data warehousing

✑ Reduce ongoing management activities

✑ Deliver SQL query responses in less than one second

You need to create an HDInsight cluster to meet the requirements.

Which type of cluster should you create?

    Correct Answer: D

    Lambda Architecture with Azure:

    Azure offers a combination of the following technologies to accelerate real-time big data analytics:

    1. Azure Cosmos DB, a globally distributed and multi-model database service.

    2. Apache Spark for Azure HDInsight, a processing framework that runs large-scale data analytics applications.

    3. Azure Cosmos DB change feed, which streams new data to the batch layer for HDInsight to process.

    4. The Spark to Azure Cosmos DB Connector

    Note: Lambda architecture is a data-processing architecture designed to handle massive quantities of data by taking advantage of both batch processing and stream processing methods, and minimizing the latency involved in querying big data.

    References:

    https://sqlwithmanoj.com/2018/02/16/what-is-lambda-architecture-and-what-azure-offers-with-its-new-cosmos-db/

Question 2 of 228

DRAG DROP -

You develop data engineering solutions for a company. You must migrate data from Microsoft Azure Blob storage to an Azure SQL Data Warehouse for further transformation. You need to implement the solution.

Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Select and Place:

    Correct Answer:

    Step 1: Provision an Azure SQL Data Warehouse instance.

    Create a data warehouse in the Azure portal.

    Step 2: Connect to the Azure SQL Data warehouse by using SQL Server Management Studio

    Connect to the data warehouse with SSMS (SQL Server Management Studio)

    Step 3: Build external tables by using SQL Server Management Studio

    Create external tables for data in Azure blob storage.

    You are ready to begin the process of loading data into your new data warehouse. You use external tables to load data from the Azure storage blob.

    Step 4: Run Transact-SQL statements to load data.

    You can use the CREATE TABLE AS SELECT (CTAS) T-SQL statement to load the data from Azure Storage Blob into new tables in your data warehouse.
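    The load described above can be sketched in T-SQL. This is a minimal PolyBase example, not the exam's exact answer text; all object names, the storage account, and the column list are hypothetical placeholders:

    ```sql
    -- Hypothetical names throughout; credential secret and locations are placeholders.
    -- 1. Credential for the storage account.
    CREATE DATABASE SCOPED CREDENTIAL BlobCredential
    WITH IDENTITY = 'user', SECRET = '<storage-account-key>';

    -- 2. External data source pointing at the blob container.
    CREATE EXTERNAL DATA SOURCE AzureBlobStorage
    WITH (
        TYPE = HADOOP,
        LOCATION = 'wasbs://<container>@<account>.blob.core.windows.net',
        CREDENTIAL = BlobCredential
    );

    -- 3. File format of the source files.
    CREATE EXTERNAL FILE FORMAT TextFileFormat
    WITH (
        FORMAT_TYPE = DELIMITEDTEXT,
        FORMAT_OPTIONS (FIELD_TERMINATOR = ',')
    );

    -- 4. External table over the files in blob storage.
    CREATE EXTERNAL TABLE dbo.ext_Sales (
        SaleId INT,
        Amount DECIMAL(18, 2)
    )
    WITH (
        LOCATION = '/sales/',
        DATA_SOURCE = AzureBlobStorage,
        FILE_FORMAT = TextFileFormat
    );

    -- 5. CTAS loads the external data into a distributed internal table.
    CREATE TABLE dbo.Sales
    WITH (DISTRIBUTION = HASH(SaleId))
    AS SELECT * FROM dbo.ext_Sales;
    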

    References:

    https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/sql-data-warehouse/load-data-from-azure-blob-storage-using-polybase.md

Question 3 of 228

You develop data engineering solutions for a company. The company has on-premises Microsoft SQL Server databases at multiple locations.

The company must integrate data with Microsoft Power BI and Microsoft Azure Logic Apps. The solution must avoid single points of failure during connection and transfer to the cloud. The solution must also minimize latency.

You need to secure the transfer of data between on-premises databases and Microsoft Azure.

What should you do?

    Correct Answer: D

    To secure the transfer of data between on-premises databases and Microsoft Azure while avoiding single points of failure and minimizing latency, installing an Azure on-premises data gateway as a cluster at each location is the most appropriate solution. Clustering ensures high availability by allowing multiple gateways to work together to provide continuous data access. If one gateway fails, another in the cluster can take over, thus preventing any disruption in the data transfer process.

Question 4 of 228

You are a data architect. The data engineering team needs to configure a synchronization of data between an on-premises Microsoft SQL Server database and Azure SQL Database.

Ad-hoc and reporting queries are overutilizing the on-premises production instance. The synchronization process must:

✑ Perform an initial data synchronization to Azure SQL Database with minimal downtime

✑ Perform bi-directional data synchronization after initial synchronization

You need to implement this synchronization solution.

Which synchronization method should you use?

    Correct Answer: E

    SQL Data Sync is a service built on Azure SQL Database that lets you synchronize the data you select bi-directionally across multiple SQL databases and SQL Server instances.

    With Data Sync, you can keep data synchronized between your on-premises databases and Azure SQL databases to enable hybrid applications.

    Compare Data Sync with Transactional Replication

    References:

    https://docs.microsoft.com/en-us/azure/sql-database/sql-database-sync-data

Question 5 of 228

An application will use Microsoft Azure Cosmos DB as its data solution. The application will use the Cassandra API to support a column-based database type that uses containers to store items.

You need to provision Azure Cosmos DB. Which container name and item name should you use? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

    Correct Answer: B, E

    B: Depending on the choice of API, an Azure Cosmos item can represent a document in a collection, a row in a table, or a node/edge in a graph. For the Cassandra API, an item maps to a row.

    E: An Azure Cosmos container is specialized into API-specific entities; for the Cassandra API, a container maps to a table.
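    The table/row mapping can be seen in plain CQL against the Cassandra API endpoint. A minimal sketch; the keyspace, table, and column names below are hypothetical:

    ```sql
    -- CQL: the keyspace maps to a Cosmos database, the table to a container.
    CREATE KEYSPACE IF NOT EXISTS app
        WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': 1};

    CREATE TABLE IF NOT EXISTS app.users (
        user_id uuid PRIMARY KEY,
        name text,
        email text
    );

    -- Each inserted row is stored as one Azure Cosmos item.
    INSERT INTO app.users (user_id, name, email)
    VALUES (uuid(), 'Ada', 'ada@example.com');
    ```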

    References:

    https://docs.microsoft.com/en-us/azure/cosmos-db/databases-containers-items