Exam DEA-C01
Question 70

A company stores petabytes of data in thousands of Amazon S3 buckets in the S3 Standard storage class. The data supports analytics workloads that have unpredictable and variable data access patterns.

The company does not access some data for months. However, the company must be able to retrieve all data within milliseconds. The company needs to optimize S3 storage costs.

Which solution will meet these requirements with the LEAST operational overhead?

    Correct Answer: D

The most suitable solution for the company is the S3 Intelligent-Tiering storage class using the default access tiers. S3 Intelligent-Tiering automatically moves data between access tiers based on changing access patterns without manual intervention, making it ideal for unpredictable and variable data access patterns. The default behavior spans three automatic tiers: Frequent Access, Infrequent Access (objects not accessed for 30 consecutive days), and Archive Instant Access (objects not accessed for 90 consecutive days). Data that is accessed less frequently is moved to the cheaper tiers automatically, and all three tiers deliver millisecond retrieval. This approach ensures that the data is always readily available within milliseconds when needed, while optimizing storage costs with minimal operational overhead.
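As an illustration of what adopting Intelligent-Tiering looks like for new writes, the snippet below builds the parameters for an S3 `PutObject` request that targets the Intelligent-Tiering class directly. The bucket and key names are hypothetical; the dict is only constructed here (no AWS call is made) and would be passed to boto3's `s3.put_object(**put_kwargs)` in a real environment.

```python
# Sketch only: parameters for an S3 PutObject request that writes a new
# object straight into the Intelligent-Tiering storage class.
# Bucket and key are hypothetical placeholders.
put_kwargs = {
    "Bucket": "example-analytics-bucket",    # hypothetical bucket name
    "Key": "events/2024/01/data.parquet",    # hypothetical object key
    "Body": b"...",                          # object payload
    "StorageClass": "INTELLIGENT_TIERING",   # S3 storage class identifier
}
```

Writing new objects directly into Intelligent-Tiering avoids any later transition step for newly ingested data.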

Discussion
GiorgioGss (Option: D)

Although C is more cost-effective, because of "must be able to retrieve all data within milliseconds" I will go with D.

arvehisa

The correct answer may be D. Intelligent-Tiering's default access tiers are: 1. accessed within the last 30 days: Frequent Access tier; 2. not accessed for 30-90 days: Infrequent Access tier; 3. not accessed for more than 90 days: Archive Instant Access tier. The other tiers require activation and have longer retrieval times. https://docs.aws.amazon.com/AmazonS3/latest/userguide/intelligent-tiering-overview.html
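The default schedule described above can be made concrete with a small helper that maps days since last access to the automatic tier an object would occupy. The function name is mine for illustration, not an AWS API:

```python
def default_tier(days_since_access: int) -> str:
    """Return the S3 Intelligent-Tiering automatic tier for an object,
    per the default schedule (illustrative helper, not an AWS API)."""
    if days_since_access < 30:
        return "Frequent Access"
    if days_since_access < 90:
        return "Infrequent Access"
    return "Archive Instant Access"  # still millisecond retrieval
```

All three of these automatic tiers retrieve in milliseconds, which is why D satisfies the latency requirement while C (and any opt-in archive tier) does not.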

helpaws (Option: C)

Amazon S3 Glacier Instant Retrieval is an archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds

rralucard_ (Option: D)

Option D, using S3 Intelligent-Tiering with the default access tier, will meet the requirements best. It provides a hands-off approach to storage cost optimization while ensuring that data is available for analytics workloads within the required timeframe.

andrologin (Option: D)

Based on these docs, https://docs.aws.amazon.com/AmazonS3/latest/userguide/intelligent-tiering-overview.html, D is appropriate as it allows for instant retrieval.

rpwags (Option: D)

Staying with "D"... The Amazon S3 Glacier Deep Archive storage class is designed for long-term data archiving where data retrieval times are flexible. It does not offer millisecond retrieval times. Instead, data retrieval from S3 Glacier Deep Archive typically takes 12 hours or more. For millisecond retrieval times, you would use the S3 Standard, S3 Standard-IA, or S3 One Zone-IA storage classes, which are designed for frequent or infrequent access with low latency.

raghumvj (Option: D)

I am confused with C or D

chris_spencer (Option: C)

C is correct. "Amazon S3 Glacier Instant Retrieval is an archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds." https://aws.amazon.com/s3/storage-classes/glacier/instant-retrieval/

tgv

But C doesn't say anything about Instant Retrieval.

Christina666 (Option: D)

Least operational overhead: D.

kj07

A few remarks: data must be retrieved in milliseconds, which rules out the options with Glacier (B and C). For D, how can you switch to S3 Intelligent-Tiering if the current class is Standard? I guess you need a lifecycle policy, which leaves only A as an option. Thoughts?
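On the question of moving existing Standard objects: a single lifecycle rule can transition them into Intelligent-Tiering, which is still low operational overhead. Below is a sketch of the configuration dict you would pass to boto3's `put_bucket_lifecycle_configuration`; the rule ID is hypothetical and the dict is only constructed here, not applied to any bucket.

```python
# Sketch: lifecycle configuration that transitions existing objects from
# S3 Standard into Intelligent-Tiering. In a real environment this dict
# would be passed to s3.put_bucket_lifecycle_configuration(
#     Bucket=..., LifecycleConfiguration=lifecycle_config).
lifecycle_config = {
    "Rules": [
        {
            "ID": "standard-to-intelligent-tiering",  # hypothetical rule ID
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix = apply to all objects
            "Transitions": [
                {
                    "Days": 0,  # transition as soon as the rule takes effect
                    "StorageClass": "INTELLIGENT_TIERING",
                }
            ],
        }
    ]
}
```

The one-time rule is set-and-forget, so choosing D does not actually push the answer toward A: after the transition, Intelligent-Tiering handles all further cost optimization automatically.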

damaldon

D. is correct

Felix_G

Option C. Use S3 Intelligent-Tiering. Activate the Deep Archive Access tier. By using S3 Intelligent-Tiering and activating the Deep Archive Access tier, the company can optimize S3 storage costs with minimal operational overhead. S3 Intelligent-Tiering automatically moves objects between four access tiers, including the Deep Archive Access tier, based on changing access patterns and cost optimization. This eliminates the need for manual lifecycle policies and constant refinement, as the storage class is adjusted automatically based on data access patterns, resulting in cost savings while ensuring quick access to all data when needed.