Exam SAP-C01
Question 896

A company has an online shop that uses an Amazon API Gateway API, AWS Lambda functions, and an Amazon DynamoDB table provisioned with 900 RCUs. The API Gateway API receives requests from customers, and the Lambda functions handle the requests. Some of the Lambda functions read data from the DynamoDB table.

During peak hours, customers are reporting timeout errors and slow performance. An investigation reveals that the Lambda functions that read the DynamoDB table occasionally time out. Amazon CloudWatch metrics show that the peak usage of the DynamoDB table is just below 900 RCUs.

Which solution will resolve this issue MOST cost-effectively?

    Correct Answer: A

    The most cost-effective solution to resolve the issue is to configure the DynamoDB table's read capacity to use auto scaling with default parameters. By enabling auto scaling, the DynamoDB table can automatically adjust its read capacity based on actual usage. This helps ensure that the table has sufficient read capacity to handle traffic during peak hours while minimizing costs during off-peak hours. This approach addresses the root cause of the timeout errors and slow performance without requiring excessive over-provisioning or manual intervention.
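
For readers who want to see what option A amounts to in practice: below is a minimal sketch using boto3 and the Application Auto Scaling API, assuming a hypothetical table name OnlineShopTable. The capacity bounds and the 70% target utilization are illustrative values, not necessarily the console defaults the question refers to.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/OnlineShopTable",  # hypothetical table name
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=100,
    MaxCapacity=3000,  # illustrative ceiling above the current 900-RCU cap
)

# Target-tracking policy: keep consumed read capacity near 70% of provisioned
autoscaling.put_scaling_policy(
    ServiceNamespace="dynamodb",
    ResourceId="table/OnlineShopTable",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyName="read-capacity-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```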

Discussion
joanneli77 (Option: A)

It can't exceed 900 RCUs, so "just below 900 RCUs" means "at capacity". Change to auto scaling. If you were at 896-899, wouldn't you say "whoa, that's too close!" or would you say "must be Lambda timeouts"? Best case, you'd still address DynamoDB even if it WAS Lambda timeouts.

pixepe

Amazon CloudWatch metrics show that the peak usage of the DynamoDB table is just below 900 RCUs, so there is no issue with DynamoDB. That filters out A, C, and D. The answer is B.

skywalker (Option: A)

Customers are reporting timeout errors and slow performance. Increasing the timeout would only make things slower, so I'm going for A. At least it provides some read performance improvement.

skywalker

DynamoDB auto scaling modifies provisioned throughput settings only when the actual workload stays elevated (or depressed) for a sustained period of several minutes. The Application Auto Scaling target tracking algorithm seeks to keep the target utilization at or near your chosen value over the long term. So there is no harm in turning on auto scaling if the workload is steady and only needs some bursting for a few minutes. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html
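
Side note: the "peak usage just below 900 RCUs" figure in the question is something you can check yourself from CloudWatch. A minimal sketch, again assuming the hypothetical table name OnlineShopTable; ConsumedReadCapacityUnits is reported as a per-period sum, so dividing by the period length gives the average consumed RCUs per second.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")
period = 60  # seconds

# Consumed read capacity over the last hour, summed per minute
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/DynamoDB",
    MetricName="ConsumedReadCapacityUnits",
    Dimensions=[{"Name": "TableName", "Value": "OnlineShopTable"}],  # hypothetical table name
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=period,
    Statistics=["Sum"],
)

# Average consumed RCUs per second during the busiest minute
peak_rcus = max((dp["Sum"] / period for dp in resp["Datapoints"]), default=0.0)
print(f"Peak average consumption: {peak_rcus:.0f} RCUs (table is provisioned at 900)")
```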

cale (Option: B)

I think it's B.

redipa

The answer is definitely not A. The key words are "with default parameters": default auto scaling read parameters have a maximum of 10. If the table is already using 900 RCUs, this would severely lower the resources: 900 -> 10.

davidy2020

ChatGPT said: The most cost-effective solution to resolve the issue is option A: Configure the DynamoDB table's read capacity to use auto scaling with default parameters. By enabling auto scaling, the DynamoDB table can automatically adjust its read capacity based on the actual usage, which helps ensure that the table has sufficient read capacity to handle the traffic during peak hours without incurring unnecessary costs during off-peak hours. The other options may alleviate the issue, but they come at the cost of increased provisioned capacity, higher Lambda function timeouts, and additional resources to implement the data replication. By using auto scaling, the company can cost-effectively ensure that their system is able to handle the traffic during peak hours.

Kende (Option: B)

"B" is the one.

alxjandroleiva (Option: B)

RCUs are not the problem.

WhyIronMan

A) After verifying the message, ChatGPT is right.

ggrodskiy

Correct A.

hobokabobo (Option: A)

It's about what is most "cost effective", so the question asks what has the most impact on costs. I would argue it's A because while the peak is at the 900-RCU cap, usage is way below that most of the time (the RCUs are over-provisioned). That may give cost savings while also solving the issue.

evargasbrz (Option: A)

I'll go with A "Amazon DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns. This enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic, without throttling." https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html