Question 6 of 800

A company captures clickstream data from multiple websites and analyzes it using batch processing. The data is loaded nightly into Amazon Redshift and is consumed by business analysts. The company wants to move towards near-real-time data processing for timely insights. The solution should process the streaming data with minimal effort and operational overhead.

Which combination of AWS services is MOST cost-effective for this solution? (Choose two.)

    Correct Answer: C, D

    For near-real-time data processing with minimal effort and operational overhead, Amazon Kinesis Data Streams and Amazon Kinesis Data Firehose are the most cost-effective combination. Amazon Kinesis Data Streams continuously captures and streams the clickstream data in real time, while Amazon Kinesis Data Firehose handles near-real-time transformation and loading into Amazon Redshift. Both are fully managed, so the streaming pipeline runs with minimal manual intervention and operational complexity.
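As an illustration of how the two services connect, the sketch below builds the parameters for a Firehose delivery stream that reads from a Kinesis data stream and loads into Redshift. All names (stream, roles, cluster, bucket, table) are hypothetical placeholders; in practice the dict would be passed to boto3's `firehose_client.create_delivery_stream(**params)`.

```python
# Hypothetical parameters for a Kinesis Data Firehose delivery stream that
# reads from a Kinesis data stream and COPYs into Amazon Redshift. Nothing
# here calls AWS; it only shows the shape of the request.
params = {
    "DeliveryStreamName": "clickstream-to-redshift",
    "DeliveryStreamType": "KinesisStreamAsSource",  # source is a Kinesis data stream
    "KinesisStreamSourceConfiguration": {
        "KinesisStreamARN": "arn:aws:kinesis:us-east-1:123456789012:stream/clickstream",
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-read-role",
    },
    "RedshiftDestinationConfiguration": {
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
        "ClusterJDBCURL": "jdbc:redshift://example-cluster.abc123.us-east-1"
                          ".redshift.amazonaws.com:5439/analytics",
        "CopyCommand": {"DataTableName": "clickstream_events"},
        "Username": "firehose_user",
        "Password": "REPLACE_ME",
        # Firehose stages records in S3, then issues a Redshift COPY command.
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
            "BucketARN": "arn:aws:s3:::clickstream-staging",
            "BufferingHints": {"IntervalInSeconds": 60, "SizeInMBs": 64},
        },
    },
}
```

The buffering hints are what make delivery "near real time": Firehose batches records for up to the configured interval or size before each COPY, so smaller values trade cost for freshness.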

Question 7 of 800

A company's application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. On the first day of every month at midnight, the application becomes much slower when the month-end financial calculation batch executes. This causes the CPU utilization of the EC2 instances to spike immediately to 100%, which disrupts the application.

What should a solutions architect recommend to ensure the application is able to handle the workload and avoid downtime?

    Correct Answer: C

    The application experiences a predictable spike in CPU utilization on the first day of every month at midnight due to the month-end financial calculation batch. To handle this predictable workload and avoid downtime, configuring an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule is the best solution. Scheduled scaling automatically increases the number of EC2 instances ahead of the spike, ensuring that additional resources are available to handle the increased load. This proactive approach prevents the slowdown caused by the sudden jump to 100% CPU utilization.
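A scheduled action is just a cron-style recurrence plus target group sizes. The sketch below shows one hypothetical scale-out/scale-in pair (group name, sizes, and times are illustrative); each dict would be passed to boto3's `autoscaling_client.put_scheduled_update_group_action(**action)`.

```python
# Hypothetical scheduled actions for the month-end batch. Recurrence uses
# cron syntax in UTC. Cron cannot express "last day of the month", so this
# sketch scales out at the moment the batch starts; in practice you would
# also account for instance warm-up time.
scale_out = {
    "AutoScalingGroupName": "web-app-asg",
    "ScheduledActionName": "month-end-scale-out",
    "Recurrence": "0 0 1 * *",  # 00:00 UTC on the 1st of every month
    "MinSize": 6,
    "MaxSize": 12,
    "DesiredCapacity": 8,
}

scale_in = {
    "AutoScalingGroupName": "web-app-asg",
    "ScheduledActionName": "month-end-scale-in",
    "Recurrence": "0 6 1 * *",  # return to normal capacity after the batch window
    "MinSize": 2,
    "MaxSize": 12,
    "DesiredCapacity": 2,
}
```

Pairing a scale-out with a matching scale-in keeps the extra capacity (and cost) limited to the batch window.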

Question 8 of 800

A company runs a multi-tier web application that hosts news content. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones and use an Amazon Aurora database. A solutions architect needs to make the application more resilient to periodic increases in request rates.

Which architecture should the solutions architect implement? (Choose two.)

    Correct Answer: B, E

    To make the web application more resilient to periodic increases in request rates, adding an Aurora Replica and an Amazon CloudFront distribution would be effective. Aurora Replicas can offload read traffic from the primary database, thereby handling increased read request rates and enhancing the database's scalability and availability. An Amazon CloudFront distribution can cache content at edge locations, reducing the load on the EC2 instances behind the Application Load Balancer by serving cached content quickly and efficiently, which reduces latency and improves user experience. This combination ensures both the database and the web content delivery can handle increased traffic more resiliently.
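On the database side, an Aurora Replica is added by creating a new DB instance inside the existing cluster. The sketch below shows the shape of that request with hypothetical identifiers and an assumed instance class; the dict would be passed to boto3's `rds_client.create_db_instance(**replica_params)`.

```python
# Hypothetical parameters for adding an Aurora Replica. In Aurora, a reader
# is simply a new DB instance attached to the existing cluster via
# DBClusterIdentifier; it shares the cluster's storage volume.
replica_params = {
    "DBInstanceIdentifier": "news-app-reader-1",
    "DBClusterIdentifier": "news-app-cluster",  # existing Aurora cluster
    "DBInstanceClass": "db.r6g.large",
    "Engine": "aurora-mysql",
}
```

The CloudFront half of the answer needs no database change at all: a distribution with the ALB as its origin absorbs read-heavy traffic (such as news articles) at the edge before it ever reaches the EC2 instances.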

Question 9 of 800

An application running on AWS uses an Amazon Aurora Multi-AZ deployment for its database. When evaluating performance metrics, a solutions architect discovered that the database reads are causing high I/O and adding latency to the write requests against the database.

What should the solutions architect do to separate the read requests from the write requests?

    Correct Answer: C

    To separate read requests from write requests in an Amazon Aurora Multi-AZ deployment, the best approach is to create a read replica and modify the application to direct read traffic to the Aurora reader endpoint. Aurora's shared storage volume lets read replicas serve the same underlying data as the primary instance without maintaining separate copies of it. This setup enables the read replicas to handle read traffic effectively, reducing the load on the primary instance and minimizing latency for write operations. It leverages Aurora's architecture, which is designed for high availability and scalability in read-heavy workloads.
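The application-side change amounts to choosing an endpoint per statement. The hostnames below are hypothetical placeholders in the format Aurora uses (the reader endpoint contains `cluster-ro`); a real application would open database connections to whichever endpoint this helper returns.

```python
# Sketch of read/write splitting against Aurora endpoints. The writer
# (cluster) endpoint always points at the primary; the reader endpoint
# load-balances across the read replicas.
WRITER_ENDPOINT = "app-cluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com"
READER_ENDPOINT = "app-cluster.cluster-ro-abc123xyz.us-east-1.rds.amazonaws.com"

def endpoint_for(sql: str) -> str:
    """Route by statement type: SELECTs go to the readers, everything
    else (INSERT, UPDATE, DELETE, DDL) goes to the writer."""
    first_word = sql.lstrip().split()[0].upper()
    return READER_ENDPOINT if first_word == "SELECT" else WRITER_ENDPOINT
```

A naive splitter like this ignores transactions and replica lag; production applications usually rely on a driver or proxy that is endpoint-aware, but the routing principle is the same.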

Question 10 of 800

A recently acquired company is required to build its own infrastructure on AWS and migrate multiple applications to the cloud within a month. Each application has approximately 50 TB of data to be transferred. After the migration is complete, this company and its parent company will both require secure network connectivity with consistent throughput from their data centers to the applications. A solutions architect must ensure one-time data migration and ongoing network connectivity.

Which solution will meet these requirements?

    Correct Answer: C

    The suitable solution involves using AWS Snowball for the initial transfer and AWS Direct Connect for ongoing connectivity. AWS Snowball is optimal for one-time transfers of large volumes of data (50 TB per application in this case), avoiding weeks of network transfer time and potentially high online data transfer costs. Following the initial transfer, AWS Direct Connect provides ongoing network connectivity with consistent throughput, meeting the security requirement through private, dedicated connections to AWS. This combination satisfies both the one-month migration timeframe and the need for stable, secure connectivity afterward.
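For a sense of the migration mechanics, the sketch below builds hypothetical parameters for a Snowball import job: one 80 TB device comfortably holds a single application's 50 TB. Every identifier (address, role, bucket) is a placeholder; the dict would be passed to boto3's `snowball_client.create_job(**job_params)`.

```python
# Hypothetical Snowball import-job parameters. The device capacity
# preference "T80" requests an 80 TB device, enough for one 50 TB
# application's data per job.
TB_PER_APP = 50
DEVICE_CAPACITY_TB = 80
apps_per_device = DEVICE_CAPACITY_TB // TB_PER_APP  # one app fits per device

job_params = {
    "JobType": "IMPORT",
    "Resources": {"S3Resources": [{"BucketArn": "arn:aws:s3:::migration-landing"}]},
    "AddressId": "ADID-example",  # placeholder shipping-address ID
    "RoleARN": "arn:aws:iam::123456789012:role/snowball-import-role",
    "SnowballCapacityPreference": "T80",
    "ShippingOption": "EXPRESS",
}
```

One job like this would be created per application; the Direct Connect link that follows the migration is provisioned separately and carries the day-to-day traffic once the applications are live.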