DP-200 Exam Questions

DP-200 Exam - Question 17


You are developing a data engineering solution for a company. The solution will store a large set of key-value pair data by using Microsoft Azure Cosmos DB.

The solution has the following requirements:

✑ Data must be partitioned into multiple containers.

✑ Data containers must be configured separately.

✑ Data must be accessible from applications hosted around the world.

✑ The solution must minimize latency.

You need to provision Azure Cosmos DB.

Which three actions should you perform? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

Correct Answer: CDE

To meet the requirements of partitioning data into multiple containers, configuring containers separately, making the data accessible worldwide, and minimizing latency, three actions are needed. First, configuring table-level throughput gives each container its own throughput setting, satisfying the requirement that containers be configured separately. Second, manually adding regions to the Azure Cosmos DB account replicates the data globally, placing copies close to users around the world and thereby reducing read latency. Third, provisioning the account with the Azure Table API and enabling multi-region writes allows every added region to accept writes, reducing write latency as well.
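The three actions above can be sketched with the Azure CLI. This is a minimal, hedged sketch: the account, resource group, table, and region names are placeholders, and the commands assume the `az cosmosdb` command group is available and you are logged in to a subscription.

```shell
# 1. Provision a Cosmos DB account with the Azure Table API and
#    multi-region writes enabled (all names below are placeholders).
az cosmosdb create \
  --name mycosmosaccount \
  --resource-group myResourceGroup \
  --capabilities EnableTable \
  --enable-multiple-write-locations true \
  --locations regionName=westus failoverPriority=0 isZoneRedundant=false

# 2. Replicate the data globally by manually adding a region to the
#    account (the full desired location list is passed on update).
az cosmosdb update \
  --name mycosmosaccount \
  --resource-group myResourceGroup \
  --locations regionName=westus failoverPriority=0 isZoneRedundant=false \
  --locations regionName=northeurope failoverPriority=1 isZoneRedundant=false

# 3. Create a table with its own (table-level, dedicated) throughput,
#    so each container can be configured separately.
az cosmosdb table create \
  --account-name mycosmosaccount \
  --resource-group myResourceGroup \
  --name mytable \
  --throughput 400
```

Because these commands provision live Azure resources, they are shown as a provisioning sketch rather than something to run verbatim.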

Discussion

JohnCrawford
Apr 6, 2021

The answers are C, D and E.
• C. Configure table-level throughput. The requirements state that containers must be configured separately.
• D. Replicate the data globally by manually adding regions to the Azure Cosmos DB account. Adding extra regions automatically copies our data to those regions, reducing latency.
• E. Provision an Azure Cosmos DB account with the Azure Table API and enable multi-region writes. Enabling multi-region writes also reduces latency, since instead of a single master database we implement a multi-master model.

Maky2365
May 7, 2021

The question doesn't mention a requirement for multi-region writes, so as per my understanding the answer should be B, C, D. Please suggest whether my understanding is correct.

AnilKJ
Apr 7, 2021

B, D and E is the answer.

VeeraSekhar
May 5, 2021

https://docs.microsoft.com/en-us/azure/cosmos-db/set-throughput
From the above link, Cosmos DB allows provisioning throughput at two levels: Azure Cosmos containers and Azure Cosmos databases. Hence B, D, E is the correct answer. Sometimes we have to choose the answer from the list of provided answers.
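The two throughput levels this comment refers to can be sketched with the Azure CLI's SQL API commands (names here are placeholders): throughput set on a database is shared by all of its containers, while throughput set on a container is dedicated to that container.

```shell
# Database-level (shared) throughput: every container in this
# database draws from the shared 400 RU/s pool.
az cosmosdb sql database create \
  --account-name mycosmosaccount \
  --resource-group myResourceGroup \
  --name mydatabase \
  --throughput 400

# Container-level (dedicated) throughput: this container gets
# its own 400 RU/s, configured independently of its siblings.
az cosmosdb sql container create \
  --account-name mycosmosaccount \
  --resource-group myResourceGroup \
  --database-name mydatabase \
  --name mycontainer \
  --partition-key-path /pk \
  --throughput 400
```

The question's requirement that "data containers must be configured separately" points at the second, per-container form, which is why the table-level-throughput option fits.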

cadio30
May 6, 2021

Proposed solution is C, D and E. When we say Azure Cosmos containers, we are referring to the "containers" of whichever API we chose when creating the database. Option C pertains to container-level configuration; the container goes by a different name per API: Container for the SQL API, Collection for MongoDB, Graph for the Gremlin API, and Table for both the Cassandra API and the Table API. Reference: https://azure.microsoft.com/en-us/blog/sharing-provisioned-throughput-across-multiple-containers-in-azure-cosmosdb/

princy18
Apr 5, 2021

There must be 3 answers; does anyone know what they are?

Devendra00023
Apr 7, 2021

Answer is CDE

Wendy_DK
Apr 14, 2021

Answer is CDE

massnonn
Nov 20, 2021

Cosmos DB supports multi-region writes, so C, D, E.