Exam SAP-C02
Question 41

A company recently deployed an application on AWS. The application uses Amazon DynamoDB. The company measured the application load and configured the RCUs and WCUs on the DynamoDB table to match the expected peak load. The peak load occurs once a week for a 4-hour period and is double the average load. The application load is close to the average load for the rest of the week. The access pattern includes many more writes to the table than reads of the table.

A solutions architect needs to implement a solution to minimize the cost of the table.

Which solution will meet these requirements?

    Correct Answer: A

    The most appropriate solution to minimize the cost of the DynamoDB table is to use AWS Application Auto Scaling to increase capacity during the peak period and purchase reserved RCUs and WCUs to match the average load. This approach utilizes reserved capacity for the consistent average load, which is more cost-effective, and leverages auto-scaling to handle the known peak periods, ensuring the table can accommodate higher traffic without incurring the higher costs associated with on-demand capacity.

Discussion
zhangyu20000

A is correct. On-demand mode is for unknown load patterns; auto scaling is for known burst patterns.

dqwsmwwvtgxwkvgcvc

How does AWS Application Auto Scaling scale the read/write performance of DynamoDB?

tannh

You can scale DynamoDB tables and global secondary indexes using target tracking scaling policies and scheduled scaling. https://docs.aws.amazon.com/autoscaling/application/userguide/services-that-can-integrate-dynamodb.html
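Scheduled scaling fits this question's known weekly peak. A minimal sketch of the two scheduled actions you would register with Application Auto Scaling, built as plain request dictionaries; the table name, capacity numbers, and cron times are illustrative assumptions, not from the question:

```python
# Hypothetical sketch: scheduled scaling of a DynamoDB table's write capacity
# via Application Auto Scaling. Capacities and schedule are assumptions.

def scheduled_action(name, schedule, min_cap, max_cap, table="my-table"):
    """Build a put_scheduled_action request for a table's write capacity."""
    return {
        "ServiceNamespace": "dynamodb",
        "ScheduledActionName": name,
        "ResourceId": f"table/{table}",
        "ScalableDimension": "dynamodb:table:WriteCapacityUnits",
        "Schedule": schedule,  # cron expression, evaluated in UTC
        "ScalableTargetAction": {"MinCapacity": min_cap, "MaxCapacity": max_cap},
    }

# Double the capacity floor just before the known weekly peak,
# then drop it back 4 hours later.
scale_up = scheduled_action("weekly-peak-up", "cron(0 8 ? * MON *)", 200, 400)
scale_down = scheduled_action("weekly-peak-down", "cron(0 12 ? * MON *)", 100, 200)

# With boto3 these would be applied as:
#   client = boto3.client("application-autoscaling")
#   client.put_scheduled_action(**scale_up)
#   client.put_scheduled_action(**scale_down)
```

Between the two scheduled windows, the target-tracking policy (or the lower Min/Max bounds) keeps the table at the cheaper average-load capacity.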

AimarLeo

But the pattern here is known (a 4-hour peak, etc.), so I'm not sure that would be the right answer.

ccortOption: A

A. On-demand prices can be about 7 times higher; given the options, it is better to purchase reserved WCUs and RCUs and auto scale on the known schedule.
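A back-of-the-envelope comparison of the two approaches under the question's load shape. The average WCU figure, the provisioned rate, and the flat 7x on-demand premium cited above are all illustrative assumptions (real pricing varies by Region and changes over time), and reserved capacity would lower the provisioned number further:

```python
# Illustrative weekly cost comparison, NOT current AWS pricing.
HOURS_PER_WEEK = 168
PEAK_HOURS = 4
AVG_WCU = 100            # hypothetical average write load
PEAK_WCU = 2 * AVG_WCU   # peak is double the average, per the question

PROVISIONED_RATE = 0.00065   # assumed $ per WCU-hour
ON_DEMAND_MULTIPLIER = 7     # per-unit premium cited in this thread

# Total capacity-hours: average load most of the week, double for 4 hours.
capacity_hours = AVG_WCU * (HOURS_PER_WEEK - PEAK_HOURS) + PEAK_WCU * PEAK_HOURS

# Provisioned with scheduled scaling: pay the provisioned rate throughout.
provisioned = PROVISIONED_RATE * capacity_hours

# On-demand: same traffic, but at the (assumed) higher per-unit rate.
# This is generous to on-demand, since it assumes the provisioned table
# was sized exactly to consumption with no headroom.
on_demand = PROVISIONED_RATE * ON_DEMAND_MULTIPLIER * capacity_hours

print(f"provisioned + scheduled scaling: ${provisioned:.2f}/week")
print(f"on-demand:                       ${on_demand:.2f}/week")
```

Even under these favorable-to-on-demand assumptions, the provisioned table with scheduled scaling is far cheaper, which is why the brief weekly peak does not tip the decision toward on-demand.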

mav3r1ckOption: B

Considering the application's need to handle a peak load that is double the average and the fact that the workload is write-heavy, option B (Configure on-demand capacity mode for the table) is the most suitable solution. It directly addresses the variability in workload without requiring upfront capacity planning or additional management overhead, thus likely providing the best cost optimization for this scenario. On-demand capacity mode eliminates the need to scale resources manually or through Auto Scaling and ensures that you only pay for the write and read throughput you consume.

mav3r1ck

A. AWS Application Auto Scaling with Reserved Capacity Pros: Auto Scaling allows you to automatically adjust the provisioned throughput to meet demand, and purchasing reserved RCUs and WCUs can reduce costs for the capacity you know you'll consistently use. Cons: This option might not be as cost-effective for workloads with significant variability and a high write-to-read ratio, especially if the peak load is much higher than the average load. Reserved capacity benefits consistent usage patterns, but the peak load being double the average may not be fully optimized here.

mav3r1ck

B. On-demand Capacity Mode Pros: On-demand capacity mode is ideal for unpredictable workloads because it automatically scales to accommodate the load without provisioning. You pay for what you use without managing capacity planning. This mode is particularly suitable for the described scenario where the load spikes significantly and unpredictably. Cons: While potentially more expensive per unit than provisioned capacity with auto-scaling, it eliminates the risk of over-provisioning or under-provisioning.

kz407Option: A

A is badly worded, however, because it says "Application" Auto Scaling, which is not quite what we are talking about here; it should say "DynamoDB auto scaling" for the answer to be precise. On-demand capacity mode is for unknown read/write patterns. Since the load change pattern is known, anything that involves on-demand capacity mode can be eliminated (hence not B). DAX is a caching service deployed in front of DynamoDB, geared toward "performance at scale". The problem in this use case is to optimize table costs, and using DAX would incur additional costs, so anything that involves DAX (C and D) can also be eliminated.

Malcnorth59

I initially thought the same but the AWS definition of Application autoscaling listed here includes DynamoDB: https://docs.aws.amazon.com/autoscaling/application/userguide/what-is-application-auto-scaling.html

anubha.agrahariOption: A

https://aws.amazon.com/blogs/database/amazon-dynamodb-auto-scaling-performance-and-cost-optimization-at-any-scale/#:~:text=You%20can%20approximate%20a%20blend,save%20money%20as%20reserved%20capacity

ninomfr64Option: A

A -> You can scale DynamoDB tables and global secondary indexes using target tracking scaling policies and scheduled scaling; in this case I would go for scheduled scaling. https://docs.aws.amazon.com/autoscaling/application/userguide/services-that-can-integrate-dynamodb.html
B -> On-demand capacity mode is for unknown workloads, which is not the case here.
C -> DAX comes with costs and helps with reads, while here we have a more write-heavy workload.
D -> See the B and C comments.

Simon523Option: A

Reserved capacity is available for single-Region, provisioned read and write capacity units (RCU and WCU) on DynamoDB tables including global and local secondary indexes. You cannot purchase reserved capacity for replicated WCUs (rWCUs).

vn_hunglvOption: A

I choose A.

zolthar_zOption: A

Auto scaling is for known traffic patterns; on-demand is for unknown traffic patterns and can also be more expensive.

Malcnorth59Option: A

AWS documentation suggests A is correct: https://docs.aws.amazon.com/autoscaling/application/userguide/what-is-application-auto-scaling.html

Kubernetes

A is correct. The focus is minimizing the cost of tables.

8608f25Option: B

Option B is the most cost-effective solution for workloads with significant fluctuations and unpredictable access patterns. The on-demand capacity mode automatically adjusts the table’s throughput capacity as needed in response to actual traffic, eliminating the need to manually configure or manage capacity. This mode is ideal for applications with irregular traffic patterns, such as a significant peak once a week, because you only pay for the read and write requests your application performs, without having to provision throughput in advance. Option B directly addresses the requirement to minimize costs associated with fluctuating loads, especially when the load significantly exceeds the average only during a brief period, by leveraging DynamoDB’s on-demand capacity mode to automatically scale and pay only for what is used.

igor12ghsj577Option: A

I think there is a mistake in answer A: it should say DynamoDB auto scaling instead of Application Auto Scaling, or both Application and DynamoDB auto scaling.

igor12ghsj577

Amazon DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns. This enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic, without throttling. When the workload decreases, Application Auto Scaling decreases the throughput so that you don't pay for unused provisioned capacity.
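The target-tracking behavior described above can be sketched as the scaling policy that Application Auto Scaling attaches to the table. The table name and the 70% utilization target here are illustrative assumptions:

```python
# Sketch of a target-tracking policy for a DynamoDB table's write capacity,
# as applied through Application Auto Scaling. Values are assumptions.
policy = {
    "ServiceNamespace": "dynamodb",
    "PolicyName": "wcu-target-tracking",
    "ResourceId": "table/my-table",
    "ScalableDimension": "dynamodb:table:WriteCapacityUnits",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        # Keep consumed/provisioned WCU near 70%: scale out when utilization
        # rises above the target, scale in when it falls below.
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
        "TargetValue": 70.0,
    },
}

# With boto3 this would be applied as:
#   boto3.client("application-autoscaling").put_scaling_policy(**policy)
```

The scale-in half of this policy is what stops you from paying for unused provisioned capacity once the peak has passed.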

jpa8300Option: D

I choose option D, because DAX is not only an accelerator for reads; its cache also offloads a lot of load from the database.

severlightOption: A

we use scheduled scaling here

whenthanOption: A

https://aws.amazon.com/blogs/database/amazon-dynamodb-auto-scaling-performance-and-cost-optimization-at-any-scale/#:~:text=You%20can%20approximate%20a%20blend,save%20money%20as%20reserved%20capacity.

awsent

Correct Answer: A. Application Auto Scaling can be used for scheduled scaling of DynamoDB tables and GSIs: https://docs.aws.amazon.com/autoscaling/application/userguide/what-is-application-auto-scaling.html