Exam DOP-C02
Question 201

A company recently deployed its web application on AWS. The company is preparing for a large-scale sales event and must ensure that the web application can scale to meet the demand.

The application's frontend infrastructure includes an Amazon CloudFront distribution that has an Amazon S3 bucket as an origin. The backend infrastructure includes an Amazon API Gateway API, several AWS Lambda functions, and an Amazon Aurora DB cluster.

The company's DevOps engineer conducts a load test and identifies that the Lambda functions can fulfil the peak number of requests. However, the DevOps engineer notices request latency during the initial burst of requests. Most of the requests to the Lambda functions produce queries to the database. A large portion of the invocation time is used to establish database connections.

Which combination of steps will provide the application with the required scalability? (Choose three.)

    Correct Answer: B, C, F

To provide the application with the required scalability during the large-scale sales event, the following steps are necessary:

B. Configure a higher provisioned concurrency for the Lambda functions. This ensures that pre-initialized execution environments are ready to handle bursts of traffic, reducing cold-start latency.

C. Convert the DB cluster to an Aurora global database and add Aurora Replicas in AWS Regions based on the locations of the company's customers. This improves database response times and availability.

F. Use Amazon RDS Proxy to manage database connections. Pooling and sharing connections reduces the time spent establishing new connections, which is where a large portion of the invocation time is currently going.
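The benefit of connection pooling can be sketched with a toy pool in Python. Everything here (`FakeConnection`, `ConnectionPool`) is illustrative and is not the RDS Proxy API; the point is only that many sequential requests can share one expensive connection handshake instead of each paying for its own:

```python
import itertools

class FakeConnection:
    """Stands in for a database connection; creating one is the slow step
    (in reality a TCP + TLS + auth handshake against the database)."""
    _ids = itertools.count(1)

    def __init__(self):
        self.id = next(FakeConnection._ids)

class ConnectionPool:
    """Toy pool: hand out an idle connection if one exists, else create one."""

    def __init__(self):
        self._idle = []
        self.created = 0  # how many real connections were ever established

    def acquire(self):
        if self._idle:
            return self._idle.pop()
        self.created += 1
        return FakeConnection()

    def release(self, conn):
        self._idle.append(conn)

pool = ConnectionPool()
for _ in range(100):       # 100 sequential "Lambda invocations"
    conn = pool.acquire()  # each one reuses the same warm connection
    pool.release(conn)

print(pool.created)        # → 1: one handshake serves all 100 requests
```

Without the pool, the same 100 invocations would each pay the handshake cost, which matches the latency the load test observed during the initial burst.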

Discussion
WhyIronMan (Options: BCF)

A. Incorrect. This doesn't directly address the database connection issue, and there will be periods where the reserved capacity sits unused, so you are spending money for nothing.
B. Correct. Configure a higher provisioned concurrency for the Lambda functions: this ensures that execution environments are ready to handle bursts of traffic, reducing cold-start latency.
C. Correct, if they want read scaling.
D. Incorrect, because it says "...into the function handlers..." while best practice is to initialize connections OUTSIDE the function handlers. Starting new connections on every invocation is exactly the problem.
F. Correct; it is a best practice.
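The "outside the handler" best practice behind option D can be sketched as follows. `connect_to_db` is a hypothetical stub standing in for a real driver call (e.g. something like `pymysql.connect`); the connection is created once at module scope, so warm invocations of the same execution environment reuse it instead of reconnecting:

```python
CONNECT_CALLS = 0

def connect_to_db():
    """Stub for a real, slow driver call that opens a database connection."""
    global CONNECT_CALLS
    CONNECT_CALLS += 1
    return object()

# Initialized once per execution environment, at import time.
# Every warm invocation handled by this environment reuses it.
db = connect_to_db()

def handler(event, context):
    # Uses the module-level connection instead of opening a new one
    # inside the handler on every request.
    return {"statusCode": 200, "connection": id(db)}

# Simulate three warm invocations in the same execution environment:
for _ in range(3):
    handler({}, None)

print(CONNECT_CALLS)  # → 1: the connection was established only once
```

Moving the `connect_to_db()` call inside `handler` would make `CONNECT_CALLS` equal to the number of invocations, which is what option D proposes and why it is rejected.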

WhyIronMan

Also, please notice that "The company's DevOps engineer conducts a load test and identifies that the Lambda functions can fulfil the peak number of requests."

Jay_2pt0_1

Glad I read this. I read D wrong the first time.

DanShone (Options: BDF)

B - Provisioned concurrency: the number of pre-initialized execution environments allocated to your function. https://docs.aws.amazon.com/lambda/latest/dg/provisioned-concurrency.html
D - Initialize SDK clients and database connections outside of the function handler. https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html
F - RDS Proxy improves scalability by pooling and sharing database connections. https://aws.amazon.com/rds/proxy/faqs/?nc=sn&loc=4

kyuhuck (Options: BDF)

Configuring a higher reserved concurrency for the Lambda functions (Option A) ensures that a specific number of Lambda instances are available for your function, but it doesn't address the cold start issue as effectively as provisioned concurrency, nor does it directly address the database connection overhead. Therefore, the most effective combination of steps to provide the required scalability and address the identified issue would be Options B (Provisioned Concurrency), F (Amazon RDS Proxy), and a revised understanding of D that focuses on optimizing connection management for efficiency.

dkp

BCF.
B. Configure a higher provisioned concurrency for the Lambda functions: maintains a set number of initialized execution environments, reducing cold starts and providing better scalability.
C. Convert the DB cluster to an Aurora global database: reduces database latency for global users by replicating Aurora across multiple Regions.
F. Use Amazon RDS Proxy to create a proxy for the Aurora database: manages database connections efficiently with connection pooling, reducing the time to establish new connections and improving database interaction efficiency.

Shasha1

BCF. Reference: https://repost.aws/knowledge-center/lambda-cold-start

kyuhuck (Options: ABF)

B. Configure a higher provisioned concurrency for the Lambda functions: ensures that execution environments are ready to handle bursts of traffic, reducing cold-start latency.
F. Use Amazon RDS Proxy to create a proxy for the Aurora database: directly addresses the database connection overhead, significantly reducing latency by pooling and reusing connections.
A. Configure a higher reserved concurrency for the Lambda functions (optional, based on specific needs): while this doesn't directly address the database connection issue, it ensures that enough concurrency is available to handle the application load, complementing provisioned concurrency and RDS Proxy.

thanhnv142 (Options: BDF)

BDF are correct.
A: "the Lambda functions can fulfil the peak number of requests" means we don't need to increase this.
B: correct.
C: irrelevant.
D and F: both correct; they handle the connection issue.

sejar (Options: ABF)

D is bad practice, as mentioned here: https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html#function-code:~:text=Initialize%20SDK%20clients%20and%20database%20connections%20outside%20of%20the%20function%20handler
C - unsure whether that helps if the Lambda functions are not replicated to other Regions.

dzn (Options: ABF)

D is bad practice in this situation. Connecting DBs in global scope and using RDS Proxy can further improve performance.

Chelseajcole

BF.
Reserved concurrency: the maximum number of concurrent instances allocated to your function. When a function has reserved concurrency, no other function can use that concurrency. Configuring reserved concurrency for a function incurs no additional charges.
Provisioned concurrency: the number of pre-initialized execution environments allocated to your function. These execution environments are ready to respond immediately to incoming function requests. Configuring provisioned concurrency incurs additional charges to your AWS account.
As for the queries, the problem is connection setup, so try connecting through a different endpoint.
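The difference pre-initialized environments make during the initial burst can be illustrated with a toy latency model. `COLD_START_MS` and `HANDLER_MS` are made-up numbers and real Lambda scheduling is far more complex; the sketch only shows why the worst-case latency drops when enough environments are warm before the burst arrives:

```python
COLD_START_MS = 800   # assumed init cost (imports, connection setup) per new environment
HANDLER_MS = 20       # assumed per-request work once an environment is warm

def burst_latency(requests, prewarmed_envs):
    """Worst-case per-request latency for an initial burst in this toy model:
    the first `prewarmed_envs` requests land on pre-initialized environments,
    and every request beyond that triggers a cold start."""
    latencies = []
    for i in range(requests):
        cold = i >= prewarmed_envs
        latencies.append(HANDLER_MS + (COLD_START_MS if cold else 0))
    return max(latencies)

print(burst_latency(50, 0))    # → 820: no provisioned concurrency, every request cold
print(burst_latency(50, 50))   # → 20: 50 pre-initialized environments absorb the burst
```

This matches the scenario in the question: steady-state capacity is fine, but the initial burst pays cold-start plus connection-setup time unless environments are provisioned ahead of the event.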

trungtd (Options: BCF)

The person who chose D doesn't understand Lambda at all

Gomer (Options: BCF)

A. (NO) "Lambda functions can fulfil the peak number of requests."
B. (YES) "The number of pre-initialized execution environments allocated to a function. These execution environments are ready to respond immediately to incoming function requests."
C. (YES) Chosen in part by process of elimination, because neither A nor D is correct.
D. (NO) Declarations "outside of the function's handler method remain initialized" "when the function is invoked again." "If your Lambda function establishes a database connection," "the original connection is used in subsequent invocations."
F. (YES) Chosen in part by process of elimination, because neither A nor D is correct.

Ola2234

BDF. Option A is a waste of resources; Option D is not practicable.

c3518fc

but you chose option B

ogerber

ABF, 100%

Ola2234

Why would you want to configure both reserved and provisioned concurrency at the same time? Would that not amount to a waste of resources?

Ramdi1 (Options: CDF)

D: By moving connection initialization into the function handler, you avoid the cold start penalty encountered when a new Lambda instance is spun up. Each request can establish a fresh connection, reducing latency during the initial burst.
F: RDS Proxy creates a connection pool, eliminating the need for each Lambda invocation to establish a new connection. Reusing connections significantly reduces request latency, especially for short-lived interactions.
C: Aurora Global Database distributes data across multiple Regions, improving performance for users in different locations. Adding replicas provides additional read capacity, increasing overall database scalability.

Ramdi1

While the other options have some merit:
A & B: Increasing reserved/provisioned concurrency might help, but it has ongoing costs and might not be optimal for unpredictable surges.
E: CloudFront primarily improves content delivery latency, not database-related delays.