Exam DOP-C02
Question 267

A company's application uses a fleet of Amazon EC2 On-Demand Instances to analyze and process data. The EC2 instances are in an Auto Scaling group. The Auto Scaling group is attached to a target group of an Application Load Balancer (ALB). The application analyzes critical data that cannot tolerate interruption. The application also analyzes noncritical data that can withstand interruption.

The critical data analysis requires quick scalability in response to real-time application demand. The noncritical data analysis is memory intensive. A DevOps engineer must implement a solution that reduces scale-out latency for the critical data analysis. The solution must also process the noncritical data.

Which combination of steps will meet these requirements? (Choose two.)

    Correct Answer: B, D

    For the critical data, which requires quick scalability and cannot tolerate interruption, the best choice is to modify the existing Auto Scaling group to create a warm pool of On-Demand Instances. Warm-pool instances are launched and initialized ahead of time, so they can be brought into service quickly, which reduces scale-out latency. For the noncritical data, which can withstand interruption and is memory intensive, creating a second Auto Scaling group that uses Spot Instances is appropriate. Configuring the new group's launch template to install the unified Amazon CloudWatch agent provides custom memory utilization metrics, which the group needs in order to scale on memory because EC2 Auto Scaling has no predefined memory metric. Attaching the new group to a second target group behind the ALB and modifying the application to use both target groups meets the requirement to process both types of data.
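    A minimal sketch of the warm-pool step using boto3. The group name and the pool sizing values are placeholders and assumptions, not from the question:

    ```python
    import boto3

    autoscaling = boto3.client("autoscaling")

    # Add a warm pool of stopped, pre-initialized instances to the existing
    # Auto Scaling group that handles the critical data. Stopped instances
    # have already completed launch and bootstrap, so bringing them into
    # service is much faster than launching a fresh instance.
    autoscaling.put_warm_pool(
        AutoScalingGroupName="critical-asg",   # placeholder group name
        PoolState="Stopped",                   # keep instances stopped until needed
        MinSize=2,                             # assumed minimum warm capacity
        MaxGroupPreparedCapacity=10,           # assumed upper bound
        InstanceReusePolicy={"ReuseOnScaleIn": True},
    )
    ```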

Discussion
tgv — Options: BD

---> B D

trungtd — Options: BD

AWS Auto Scaling does not provide a predefined memory utilization metric type, so the unified CloudWatch agent is needed to publish one.
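As a sketch of what scaling on memory then looks like, assuming the unified CloudWatch agent is already publishing `mem_used_percent` into the `CWAgent` namespace; the group name and target value below are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking on a customized metric, because EC2 Auto Scaling has no
# predefined memory metric. The unified CloudWatch agent publishes
# mem_used_percent into the CWAgent namespace.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="noncritical-spot-asg",   # placeholder group name
    PolicyName="memory-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "mem_used_percent",
            "Namespace": "CWAgent",
            "Statistic": "Average",
            # Assumes the agent config appends the group name as a dimension.
            "Dimensions": [
                {"Name": "AutoScalingGroupName", "Value": "noncritical-spot-asg"}
            ],
        },
        "TargetValue": 60.0,                       # assumed target utilization
    },
)
```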

TEC1 — Options: BE

On-Demand Instances: For critical data that cannot tolerate interruption, On-Demand Instances are reliable and provide the required stability without the risk of termination. Spot Instances: Using Spot Instances for the noncritical data processing can significantly reduce costs, since these workloads can tolerate interruptions. This combination ensures that the critical data analysis benefits from reduced scale-out latency and reliability, while the noncritical data processing leverages cost-effective Spot Instances and is scaled based on memory usage.
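A sketch of the second, Spot-based Auto Scaling group attached to its own ALB target group, again with boto3. The launch template name, target group ARN, subnet IDs, and distribution values are all placeholders and assumptions:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Second Auto Scaling group for the interruption-tolerant, noncritical
# workload. A mixed instances policy with 0% On-Demand above the base
# capacity makes every instance a Spot Instance.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="noncritical-spot-asg",       # placeholder name
    MinSize=0,
    MaxSize=10,                                        # assumed ceiling
    VPCZoneIdentifier="subnet-aaa,subnet-bbb",         # placeholder subnets
    TargetGroupARNs=[
        # Placeholder ARN for the second ALB target group.
        "arn:aws:elasticloadbalancing:...:targetgroup/noncritical/abc123"
    ],
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                # Launch template whose user data installs the unified
                # CloudWatch agent (placeholder name).
                "LaunchTemplateName": "noncritical-spot-lt",
                "Version": "$Latest",
            },
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 0,
            "OnDemandPercentageAboveBaseCapacity": 0,  # 100% Spot
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```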