
SAA-C03 Exam - Question 52


A company wants to migrate its on-premises application to AWS. The application produces output files that vary in size from tens of gigabytes to hundreds of terabytes. The application data must be stored in a standard file system structure. The company wants a solution that scales automatically, is highly available, and requires minimal operational overhead.

Which solution will meet these requirements?

Correct Answer: C

The application data must be stored in a standard file system structure that scales automatically, is highly available, and requires minimum operational overhead. Amazon Elastic File System (Amazon EFS) fits these requirements perfectly, as it provides a fully managed, scalable, and highly available file system. By combining this with Amazon EC2 instances in a Multi-AZ Auto Scaling group, the application can achieve high availability and scalability with minimal operational overhead.
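For illustration only, here is a minimal boto3 sketch of the EFS side of such a setup: one file system with Elastic throughput plus a mount target per Availability Zone, so EC2 instances in a Multi-AZ Auto Scaling group can all share it. The region, subnet IDs, and security group ID are hypothetical placeholders, not values from the question.

```python
# Illustrative sketch only (not from the question): create an EFS file system
# and one mount target per Availability Zone so EC2 instances in a Multi-AZ
# Auto Scaling group can mount the same shared file system.
# Region, subnet IDs, and security group ID are hypothetical placeholders.
import time
import boto3

efs = boto3.client("efs", region_name="us-east-1")

# Elastic throughput + general purpose mode: the file system grows and shrinks
# automatically as files are added and removed, with no capacity to manage.
fs = efs.create_file_system(
    CreationToken="app-output-files-token",
    PerformanceMode="generalPurpose",
    ThroughputMode="elastic",
    Encrypted=True,
    Tags=[{"Key": "Name", "Value": "app-output-files"}],
)
fs_id = fs["FileSystemId"]

# Wait until the file system is available before adding mount targets.
while efs.describe_file_systems(FileSystemId=fs_id)["FileSystems"][0]["LifeCycleState"] != "available":
    time.sleep(5)

# One mount target per AZ; instances then mount it like any NFS file system.
for subnet_id in ["subnet-aaa11111", "subnet-bbb22222", "subnet-ccc33333"]:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )
```

Instances in the Auto Scaling group would then mount the file system (for example with the amazon-efs-utils mount helper) and read and write the output files through a standard POSIX path.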

Discussion

17 comments
ArielSchivo (Option: C)
Oct 18, 2022

EFS is a standard file system; it scales automatically and is highly available.

masetromain
Oct 12, 2022

I have absolutely no idea... The output files vary in size from tens of gigabytes to hundreds of terabytes. Limit size for a single object/file: S3, 5 TiB (https://aws.amazon.com/fr/blogs/aws/amazon-s3-object-size-limit/); EBS, 64 TiB per volume (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_constraints.html); EFS, 47.9 TiB per file (https://docs.aws.amazon.com/efs/latest/ug/limits.html).

JayBee65
Dec 5, 2022

S3 and EBS are block storage but you are looking to store files, so EFS is the correct option.

Ello2023
Jan 13, 2023

S3 is object storage.

OmegaLambda7XL9
Nov 18, 2023

A lil correction: S3 is object storage, not block storage.

RBSK
Dec 12, 2022

None of them supports hundreds of TB per file. A bit confusing/misleading.

Help2023
Feb 17, 2023

The answer to that: the 5 TiB S3 limit is per object, but a bucket can hold any number of objects, so total capacity is effectively unlimited (https://aws.amazon.com/fr/blogs/aws/amazon-s3-object-size-limit/). The 64 TiB EBS limit is per volume (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_constraints.html). The 47.9 TiB EFS limit is per file, and the question says "files", plural (https://docs.aws.amazon.com/efs/latest/ug/limits.html).

cookieMr (Option: C)
Jun 21, 2023

EFS provides a scalable, fully managed file system that can be mounted on multiple EC2 instances at once. It lets you store and access files using a standard file system structure, which matches the company's requirement, and it scales automatically with the size of your data.
A suggests ECS for container orchestration and S3 for storage. ECS doesn't offer a native file system storage solution, and S3 is an object storage service, so it is not suitable for a standard file system structure.
B suggests EKS for container orchestration and EBS for storage. As with A, EBS is block storage and is not designed for shared file system access; EKS manages containers but doesn't address the file storage requirement.
D suggests EC2 with EBS for storage. While EBS provides block storage for EC2, it doesn't inherently offer a scalable file system like EFS; you would need to provision and manage EBS volumes manually, which adds operational overhead.

Mikado211 (Option: C)
Nov 30, 2023

Technically A could work: ECS is often recommended by AWS for minimal operational overhead, and S3 is durable and highly scalable, BUT it is not a "traditional" file system structure. In an S3 bucket there is no real file hierarchy, only objects and prefixes that simulate one (see the sketch below). B is wrong because EKS requires more management. EFS is recommended over EBS for minimal operational overhead, so C (EC2 + EFS) is preferred over D (EC2 + EBS).
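To make that point concrete, here is a small boto3 sketch (the bucket name is a hypothetical placeholder): S3 keys are flat, and "folders" only appear because listing with a Delimiter groups keys by prefix.

```python
# Illustrative sketch only: S3 has a flat keyspace; "directories" are simulated
# by key prefixes. The bucket name is a hypothetical placeholder.
import boto3

s3 = boto3.client("s3")

# This does not create a "reports/2024/" directory tree; it stores a single
# object whose key simply contains slashes.
s3.put_object(Bucket="example-output-bucket",
              Key="reports/2024/output.dat",
              Body=b"example bytes")

# Listing with a Delimiter groups keys under CommonPrefixes, which clients
# render as folders even though no folder object exists.
resp = s3.list_objects_v2(Bucket="example-output-bucket",
                          Prefix="reports/",
                          Delimiter="/")
for cp in resp.get("CommonPrefixes", []):
    print("pseudo-folder:", cp["Prefix"])
```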

leosmal
Nov 24, 2023

The key is Multi-AZ; EBS does not support it.

pentium75 (Option: C)
Dec 25, 2023

"File system structure" = EFS, which also meets all the other requirements.

NitiATOS (Option: C)
Jan 27, 2023

I will go with C. If the app is deployed across multiple AZs, the compute instances differ but the storage needs to be shared. EFS is the easiest way to configure shared storage compared to a shared EBS volume, so C suits best.

harirkmusa
Feb 12, 2023

"Standard file system structure" is the KEYWORD here. S3 and EBS are not file-based storage; EFS is. So the answer is C.

joshnort (Option: C)
Apr 30, 2023

Keywords: file system structure, scales automatically, highly available, and minimal operational overhead

Bmarodi (Option: C)
Jun 5, 2023

Option C meets the requirements.

miki111
Jul 19, 2023

Option C is the correct answer

TariqKipkemei (Option: C)
Aug 7, 2023

Standard file system structure, scales automatically, requires minimum operational overhead = Amazon Elastic File System (Amazon EFS)

wantu (Option: C)
Nov 28, 2023

Keywords: auto scaling and files.

awsgeek75 (Option: C)
Jan 14, 2024

Standard file system that is highly available: EFS. Autoscaling, highly available compute: EC2, ECS, or EKS can all work.
A: Not suitable because S3 is object (BLOB) storage, not a file system.
B: EKS is OK, but EBS is not HA.
D: EBS is not HA.
So by elimination, C is the best option.

sidharthwader
Feb 26, 2024

C is the only option that provides a standard file system with high availability. EBS is scoped to a single Availability Zone, while EFS is scoped to a Region.
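As a rough illustration of that scope difference, a boto3 sketch (the file-system ID and volume ID are hypothetical placeholders): one EFS file system exposes mount targets in several Availability Zones, while an EBS volume reports exactly one AZ.

```python
# Illustrative sketch only: compare the availability scope of EFS and EBS.
# The file-system ID and volume ID are hypothetical placeholders.
import boto3

efs = boto3.client("efs")
ec2 = boto3.client("ec2")

# An EFS file system is a regional resource: it can have a mount target in
# every Availability Zone of the VPC.
targets = efs.describe_mount_targets(FileSystemId="fs-0123456789abcdef0")["MountTargets"]
print("EFS mount target AZs:", sorted({t["AvailabilityZoneName"] for t in targets}))

# An EBS volume lives in exactly one Availability Zone and can only be
# attached to instances in that AZ.
volume = ec2.describe_volumes(VolumeIds=["vol-0123456789abcdef0"])["Volumes"][0]
print("EBS volume AZ:", volume["AvailabilityZone"])
```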

HectorCosta (Option: C)
May 2, 2024

Key words: standard file system and scales automatically. S3 is an object store, so it fails the "standard file system" requirement and we can discard A. EBS does not scale automatically, failing the "scales automatically" requirement, so we can discard B and D.

bishtr3
Jul 17, 2024

C: EFS, as it is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files. Multiple compute services, including Amazon EC2, Amazon ECS, and AWS Lambda, can access an Amazon EFS file system at the same time, providing a common data source for workloads.