Question 6 of 800

A company captures clickstream data from multiple websites and analyzes it using batch processing. The data is loaded nightly into Amazon Redshift and is consumed by business analysts. The company wants to move towards near-real-time data processing for timely insights. The solution should process the streaming data with minimal effort and operational overhead.

Which combination of AWS services is MOST cost-effective for this solution? (Choose two.)

    Correct Answer: C, D

    Kinesis Data Streams and Kinesis Client Library (KCL): data from the data source can be continuously captured and streamed in near real time using Kinesis Data Streams. With the Kinesis Client Library (KCL), you can build your own application that preprocesses the streaming data as it arrives and emits the data for generating incremental views and downstream analysis.

    Kinesis Data Analytics: this service provides the easiest way to process data that is streaming through Kinesis Data Streams or Kinesis Data Firehose using SQL. This enables customers to gain actionable insights in near real time from the incremental stream before storing it in Amazon S3.
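    As a rough illustration of the producer side, sending clickstream events into a stream is a few boto3 calls. This is a minimal sketch, assuming a stream named "clickstream" (hypothetical) already exists and credentials are configured in the environment:

```python
import json
import boto3

# Assumption: a Kinesis data stream named "clickstream" already exists.
kinesis = boto3.client("kinesis", region_name="us-east-1")

def send_click_event(event: dict) -> None:
    """Write one clickstream record. Records sharing a partition key
    land on the same shard, preserving per-user ordering."""
    kinesis.put_record(
        StreamName="clickstream",
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=event["user_id"],  # hypothetical event field
    )

send_click_event({"user_id": "u-123", "page": "/pricing", "ts": 1700000000})
```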

    Reference:

    https://d1.awsstatic.com/whitepapers/lambda-architecure-on-for-batch-aws.pdf

Question 7 of 800

A company's application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. On the first day of every month at midnight, the application becomes much slower when the month-end financial calculation batch executes. This causes the CPU utilization of the EC2 instances to immediately peak to 100%, which disrupts the application.

What should a solutions architect recommend to ensure the application is able to handle the workload and avoid downtime?

    Correct Answer: C

    The application experiences a predictable spike in CPU utilization on the first day of every month at midnight due to the month-end financial calculation batch. To handle this predictable workload and avoid downtime, configuring an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule is the best solution. Scheduled scaling allows you to automatically increase the number of EC2 instances just before the spike occurs, ensuring that additional resources are available to handle the increased load. This proactive approach prevents the application from slowing down due to immediate 100% CPU utilization.
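    A hedged sketch of what that scheduled action might look like with boto3; the group name, capacities, and the scale-in window are assumptions for illustration:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Assumption: the Auto Scaling group is named "web-app-asg".
# This fires at 00:00 UTC on day 1; in practice you would schedule the
# scale-out a few minutes before the batch window so instances are warm.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-app-asg",
    ScheduledActionName="month-end-scale-out",
    Recurrence="0 0 1 * *",   # midnight UTC on the 1st of every month
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,        # assumed capacity for the batch window
)

# Scale back in once the batch has finished (assumed two-hour window).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-app-asg",
    ScheduledActionName="month-end-scale-in",
    Recurrence="0 2 1 * *",
    MinSize=2,
    MaxSize=12,
    DesiredCapacity=2,
)
```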

Question 8 of 800

A company runs a multi-tier web application that hosts news content. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones and use an Amazon Aurora database. A solutions architect needs to make the application more resilient to periodic increases in request rates.

Which architecture should the solutions architect implement? (Choose two.)

    Correct Answer: B, E

    To make the web application more resilient to periodic increases in request rates, adding an Aurora Replica and an Amazon CloudFront distribution would be effective. Aurora Replicas can offload read traffic from the primary database, thereby handling increased read request rates and enhancing the database's scalability and availability. An Amazon CloudFront distribution can cache content at edge locations, reducing the load on the EC2 instances behind the Application Load Balancer by serving cached content quickly and efficiently, which reduces latency and improves user experience. This combination ensures both the database and the web content delivery can handle increased traffic more resiliently.
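    On the Aurora side, adding a replica is a single API call, since Aurora readers attach to the cluster's shared storage rather than copying data. This is a sketch only; the cluster and instance identifiers are made up:

```python
import boto3

rds = boto3.client("rds")

# Assumption: an existing Aurora MySQL cluster named "news-cluster".
# For Aurora, a read replica is simply another DB instance added to the
# cluster; it shares the cluster volume instead of restoring a snapshot.
rds.create_db_instance(
    DBInstanceIdentifier="news-cluster-reader-1",
    DBClusterIdentifier="news-cluster",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)
```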

Question 9 of 800

An application running on AWS uses an Amazon Aurora Multi-AZ deployment for its database. When evaluating performance metrics, a solutions architect discovered that the database reads are causing high I/O and adding latency to the write requests against the database.

What should the solutions architect do to separate the read requests from the write requests?

    Correct Answer: C

    Amazon RDS Read Replicas

    Amazon RDS read replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted, when needed, to become standalone DB instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server, as well as Amazon Aurora.

    For the MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server database engines, Amazon RDS creates a second DB instance using a snapshot of the source DB instance. It then uses the engines' native asynchronous replication to update the read replica whenever there is a change to the source DB instance. The read replica operates as a DB instance that allows only read-only connections; applications can connect to a read replica just as they would to any DB instance. Amazon RDS replicates all databases in the source DB instance.

    Amazon Aurora further extends the benefits of read replicas by employing an SSD-backed virtualized storage layer purpose-built for database workloads. Aurora Replicas share the same underlying storage as the source instance, lowering costs and avoiding the need to copy data to the replica nodes. For more information about replication with Amazon Aurora, see the online documentation.
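    In application code, separating reads from writes usually comes down to pointing at two different endpoints: Aurora's cluster (writer) endpoint always routes to the primary, while the reader endpoint load-balances across replicas. A minimal sketch with PyMySQL, where both endpoint hostnames and credentials are placeholders:

```python
import pymysql

# Assumption: placeholder Aurora endpoints. The cluster endpoint routes
# to the primary; the read-only (-ro-) endpoint spreads connections
# across the Aurora Replicas.
WRITER_ENDPOINT = "news-cluster.cluster-abc123.us-east-1.rds.amazonaws.com"
READER_ENDPOINT = "news-cluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com"

def get_connection(read_only: bool = False):
    """Connect to the reader endpoint for queries and to the writer
    endpoint for anything that modifies data."""
    host = READER_ENDPOINT if read_only else WRITER_ENDPOINT
    return pymysql.connect(host=host, user="app", password="***", database="news")

# Reads go to the replicas, keeping read I/O off the primary instance.
with get_connection(read_only=True) as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT id, title FROM articles ORDER BY id DESC LIMIT 10")
        print(cur.fetchall())
```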

    Reference:

    https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
    https://aws.amazon.com/rds/features/read-replicas/

Question 10 of 800

A recently acquired company is required to build its own infrastructure on AWS and migrate multiple applications to the cloud within a month. Each application has approximately 50 TB of data to be transferred. After the migration is complete, this company and its parent company will both require secure network connectivity with consistent throughput from their data centers to the applications. A solutions architect must ensure one-time data migration and ongoing network connectivity.

Which solution will meet these requirements?

    Correct Answer: C

    The most suitable solution combines AWS Snowball for the initial transfer with AWS Direct Connect for ongoing connectivity. AWS Snowball is optimal for one-time transfers of large volumes of data (50 TB per application in this case), avoiding extended network use and potentially high online data transfer costs. Following the initial transfer, AWS Direct Connect provides ongoing network connectivity with consistent throughput and satisfies the security requirement, since it offers private, dedicated connections to AWS. This combination meets both the migration timeframe and the need for stable, secure connectivity afterward.
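    For completeness, ordering a Snowball device for the import is itself an API call. A sketch with entirely hypothetical ARNs and address ID, since the real values come from your own account (the AddressId would come from a prior create_address call):

```python
import boto3

snowball = boto3.client("snowball")

# Assumption: the bucket ARN, IAM role ARN, and address ID below are
# placeholders for illustration only.
response = snowball.create_job(
    JobType="IMPORT",
    Resources={
        "S3Resources": [
            {"BucketArn": "arn:aws:s3:::app-migration-bucket"}
        ]
    },
    Description="One-time 50 TB application data import",
    AddressId="ADID1234ab12-3eec-4eb3-9be6-9374c10eb51b",
    RoleARN="arn:aws:iam::123456789012:role/snowball-import-role",
    ShippingOption="NEXT_DAY",
    SnowballCapacityPreference="T80",
)
print(response["JobId"])
```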