A company uses NFS to store large video files in on-premises network-attached storage (NAS). Each video file ranges in size from 1 MB to 500 GB. The total storage is 70 TB and is no longer growing. The company decides to migrate the video files to Amazon S3. The company must migrate the video files as soon as possible while using the least possible network bandwidth.
Which solution will meet these requirements?
Correct Answer: B
To migrate 70 TB of video files from on-premises storage to Amazon S3 while using the least possible network bandwidth, AWS Snowball Edge is the most suitable option. AWS Snowball Edge is a physical data transport device that moves large amounts of data securely and quickly without consuming internet bandwidth. The process involves receiving the Snowball Edge device on premises, copying the data to the device with the Snowball Edge client, and shipping the device back to AWS, where the data is imported into Amazon S3. This approach minimizes network bandwidth usage and provides a rapid transfer given the 70 TB volume.
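A quick back-of-the-envelope calculation shows why shipping the data physically beats pushing it over the wire. The link speeds and the 80% utilization figure below are illustrative assumptions, not values from the question:

```python
# Estimate how long a 70 TB network transfer would take at various
# (assumed) link speeds, to compare against Snowball turnaround time.

def transfer_days(total_tb: float, link_gbps: float, utilization: float = 0.8) -> float:
    """Days needed to move total_tb terabytes over a link_gbps link,
    assuming only the stated fraction of the link is usable."""
    total_bits = total_tb * 1e12 * 8                      # decimal TB -> bits
    seconds = total_bits / (link_gbps * 1e9 * utilization)
    return seconds / 86400

for gbps in (0.1, 1, 10):
    print(f"{gbps:>4} Gbps: {transfer_days(70, gbps):.1f} days")
```

Even a dedicated 1 Gbps link needs roughly 8 days of sustained transfer (and saturates the link for the whole period), which is why the question's "least possible network bandwidth" constraint points to Snowball Edge.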
A company has an application that ingests incoming messages. Dozens of other applications and microservices then quickly consume these messages. The number of messages varies drastically and sometimes increases suddenly to 100,000 each second. The company wants to decouple the solution and increase scalability.
Which solution meets these requirements?
Correct Answer: D
The solution that best meets the company's requirements of ingesting and quickly consuming up to 100,000 messages each second while decoupling the solution and increasing scalability is to publish messages to an Amazon Simple Notification Service (Amazon SNS) topic with multiple Amazon Simple Queue Service (Amazon SQS) subscriptions. This fanout configuration provides a scalable, decoupled architecture: Amazon SNS handles high message throughput and delivers each message to every subscribed SQS queue, so the dozens of consumer applications and microservices can each process messages in parallel at their own pace. This approach ensures that the system can absorb drastic variations in message volume efficiently.
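For SNS to deliver into an SQS queue, each queue needs a resource policy that allows the topic to send messages. A minimal sketch of that policy document follows; the account ID and ARNs are hypothetical placeholders:

```python
import json

# Hypothetical ARNs for illustration only.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:orders-topic"
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:orders-consumer-1"

def sns_to_sqs_policy(queue_arn: str, topic_arn: str) -> str:
    """Build an SQS access policy letting one SNS topic send messages."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            # Restrict delivery to this specific topic.
            "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
        }],
    }
    return json.dumps(policy, indent=2)

print(sns_to_sqs_policy(QUEUE_ARN, TOPIC_ARN))
```

The same policy shape is attached to every subscribed queue, one per consumer application, which is what makes the fanout decoupled: adding a new consumer means adding a new queue and subscription, with no change to the publisher.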
A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary server that coordinates jobs across multiple compute nodes. The company wants to modernize the application with a solution that maximizes resiliency and scalability.
How should a solutions architect design the architecture to meet these requirements?
Correct Answer: B
To address the variable workloads and maximize both resiliency and scalability, the best approach is to use Amazon Simple Queue Service (Amazon SQS) as a buffer for the jobs. This decouples job submission from the compute nodes, making the system more resilient: if a node fails, its unprocessed jobs remain in the queue for another node to pick up. By implementing the compute nodes as EC2 instances managed by an Auto Scaling group and scaling based on the size of the SQS queue, the system dynamically adjusts the number of instances to the workload. The infrastructure scales out to handle peak loads and scales in during periods of lower activity, providing cost efficiency and responsiveness to workload changes.
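Scaling on queue size is typically done with a "backlog per instance" custom metric (queue depth divided by running instances) fed to a target-tracking policy. A minimal sketch of the arithmetic, with an illustrative target of 1,000 messages per instance:

```python
import math

def backlog_per_instance(visible_messages: int, running_instances: int) -> float:
    """The custom CloudWatch metric a target-tracking policy would follow:
    ApproximateNumberOfMessagesVisible divided by in-service instances."""
    return visible_messages / max(running_instances, 1)

def desired_capacity(visible_messages: int, per_instance_target: int) -> int:
    """Instances needed so each handles at most per_instance_target messages."""
    return math.ceil(visible_messages / per_instance_target)

# Illustrative numbers: 100,000 queued jobs, target of 1,000 per instance.
print(desired_capacity(100_000, 1_000))   # scale out to 100 instances
print(backlog_per_instance(100_000, 100)) # metric settles at the target
```

The per-instance target is an assumption a team would derive from how many messages one instance can work through within the desired latency; the target-tracking policy then keeps the metric near that value as the queue grows and shrinks.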
A company is running an SMB file server in its data center. The file server stores large files that are accessed frequently for the first few days after the files are created. After 7 days the files are rarely accessed.
The total data size is increasing and is close to the company's total storage capacity. A solutions architect must increase the company's available storage space without losing low-latency access to the most recently accessed files. The solutions architect must also provide file lifecycle management to avoid future storage issues.
Which solution will meet these requirements?
Correct Answer: B
Create an Amazon S3 File Gateway to extend the company's storage space, and create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days. This solution integrates seamlessly with the existing SMB workflow and extends storage capacity into Amazon S3, so on-premises capacity is no longer a constraint. The S3 Lifecycle policy automatically transitions data to a cost-effective archival storage class after 7 days, providing the required lifecycle management, while the gateway's local cache preserves low-latency access to the most recently accessed files.
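The lifecycle rule described above can be sketched as the configuration document that boto3's `put_bucket_lifecycle_configuration` accepts; the rule ID is a hypothetical name:

```python
# Sketch of an S3 Lifecycle configuration that moves every object to
# S3 Glacier Deep Archive 7 days after creation. The rule ID is a
# placeholder chosen for illustration.

lifecycle_configuration = {
    "Rules": [{
        "ID": "archive-after-7-days",        # hypothetical rule name
        "Status": "Enabled",
        "Filter": {"Prefix": ""},            # empty prefix = all objects
        "Transitions": [{
            "Days": 7,
            "StorageClass": "DEEP_ARCHIVE",  # S3 Glacier Deep Archive
        }],
    }]
}
```

With an S3 File Gateway in front of the bucket, the transition is transparent to the SMB clients for listing, though retrieving an archived file would require a restore from Deep Archive first.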
A company is building an ecommerce web application on AWS. The application sends information about new orders to an Amazon API Gateway REST API to process. The company wants to ensure that orders are processed in the order that they are received.
Which solution will meet these requirements?
Correct Answer: B
To ensure that orders are processed in the order they are received, the company should use an Amazon Simple Queue Service (Amazon SQS) First-In-First-Out (FIFO) queue. SQS FIFO queues are specifically designed to guarantee ordered, exactly-once processing of messages. By integrating API Gateway to send messages to an SQS FIFO queue and configuring the queue to invoke an AWS Lambda function for processing, the company can ensure that order processing maintains the sequence in which the orders were received.
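The two parameters that drive this behavior are `MessageGroupId` (ordering is guaranteed within a group) and `MessageDeduplicationId` (a retry with the same ID within the deduplication window is accepted but not enqueued again). A small local simulation of those semantics, purely for illustration and not the SQS implementation itself:

```python
from collections import defaultdict

class FifoQueueSketch:
    """Toy model of SQS FIFO semantics: per-group ordering plus
    deduplication by MessageDeduplicationId."""

    def __init__(self):
        self.groups = defaultdict(list)  # MessageGroupId -> ordered messages
        self.seen_dedup_ids = set()      # real SQS uses a 5-minute window

    def send(self, body: str, group_id: str, dedup_id: str) -> bool:
        if dedup_id in self.seen_dedup_ids:
            return False                 # duplicate: silently dropped
        self.seen_dedup_ids.add(dedup_id)
        self.groups[group_id].append(body)
        return True

    def receive_all(self, group_id: str) -> list:
        """Messages come back in exactly the order they were sent per group."""
        return list(self.groups[group_id])

q = FifoQueueSketch()
q.send("order-1", group_id="customer-42", dedup_id="o1")
q.send("order-2", group_id="customer-42", dedup_id="o2")
q.send("order-1", group_id="customer-42", dedup_id="o1")  # client retry, ignored
print(q.receive_all("customer-42"))  # ['order-1', 'order-2']
```

In the real integration, API Gateway would pass a group ID (for example, one per customer, a design assumption) so that orders for the same customer are strictly sequenced while different groups can still be processed in parallel.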