Exam SAA-C03
Question 52

A company wants to migrate its on-premises application to AWS. The application produces output files that vary in size from tens of gigabytes to hundreds of terabytes. The application data must be stored in a standard file system structure. The company wants a solution that scales automatically, is highly available, and requires minimum operational overhead.

Which solution will meet these requirements?

    Correct Answer: C

    The application data must be stored in a standard file system structure that scales automatically, is highly available, and requires minimum operational overhead. Amazon Elastic File System (Amazon EFS) fits these requirements perfectly, as it provides a fully managed, scalable, and highly available file system. By combining this with Amazon EC2 instances in a Multi-AZ Auto Scaling group, the application can achieve high availability and scalability with minimal operational overhead.
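
As a minimal illustration of the chosen design (not part of the exam material), the sketch below uses boto3 to create an EFS file system and a mount target in each private subnet, one per Availability Zone, so instances in a Multi-AZ Auto Scaling group can all mount the same file system. The subnet and security group IDs are hypothetical placeholders.

```python
# Sketch: provision a fully managed, Multi-AZ EFS file system with boto3.
# All resource IDs below are made-up placeholders.
import time
import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(
    PerformanceMode="generalPurpose",
    ThroughputMode="elastic",   # throughput scales automatically with the workload
    Encrypted=True,
    Tags=[{"Key": "Name", "Value": "app-output-fs"}],
)
fs_id = fs["FileSystemId"]

# Wait until the file system is available before adding mount targets.
while efs.describe_file_systems(FileSystemId=fs_id)["FileSystems"][0]["LifeCycleState"] != "available":
    time.sleep(5)

# One mount target per Availability Zone keeps the file system reachable
# even if a single AZ becomes unavailable.
for subnet_id in ["subnet-aaa111", "subnet-bbb222", "subnet-ccc333"]:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],  # must allow inbound NFS (TCP 2049)
    )
```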

Discussion
ArielSchivo (Option: C)

EFS is a standard file system, it scales automatically and is highly available.

masetromain

I have absolutely no idea... Output files that vary in size from tens of gigabytes to hundreds of terabytes. Limit size for a single object/file: S3 5 TiB https://aws.amazon.com/fr/blogs/aws/amazon-s3-object-size-limit/ EBS 64 TiB https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_constraints.html EFS 47.9 TiB https://docs.aws.amazon.com/efs/latest/ug/limits.html

JayBee65

S3 and EBS are block storage but you are looking to store files, so EFS is the correct option.

Ello2023

S3 is object storage.

OmegaLambda7XL9

A lil correction: S3 is object storage, not block storage.

RBSK

None meets 100s of TB / file. Bit confusing / misleading

Help2023

The answer to that: the S3 limit of 5 TiB is per object, but a bucket can hold any number of objects, so total capacity is effectively unlimited https://aws.amazon.com/fr/blogs/aws/amazon-s3-object-size-limit/ The EBS 64 TiB limit is per volume https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_constraints.html The EFS 47.9 TiB limit is per file, and the question says "files" (plural) https://docs.aws.amazon.com/efs/latest/ug/limits.html

cookieMr (Option: C)

EFS provides a scalable, fully managed file system that can be mounted on multiple EC2 instances at the same time. It lets you store and access files using a standard file system structure, which matches the company's requirement, and it scales automatically with the size of your data.

A suggests ECS for container orchestration and S3 for storage. ECS doesn't offer a native file system storage solution, and S3 is an object storage service, so it is not a good fit for a standard file system structure.

B suggests EKS for container orchestration and EBS for storage. Similar to A, EBS is block storage and not optimized for shared file system access, and while EKS can manage containers, it doesn't specifically address the file storage requirement.

D suggests EC2 with EBS for storage. EBS can provide block storage for EC2, but it doesn't inherently offer a scalable file system solution like EFS; you would need to provision and manage EBS volumes manually, which introduces operational overhead.
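
To make the "mounted on multiple EC2 instances" part concrete, here is a hedged sketch of the compute side of option C: a launch template whose user data mounts the shared EFS file system at boot, so every instance the Auto Scaling group launches sees the same file system tree. The file system ID, AMI ID, and mount path are made-up placeholders.

```python
# Sketch: a launch template that mounts a shared EFS file system at boot.
import base64
import boto3

ec2 = boto3.client("ec2")

user_data = """#!/bin/bash
# amazon-efs-utils provides the 'efs' mount helper (TLS-encrypted NFS)
yum install -y amazon-efs-utils
mkdir -p /mnt/app-output
mount -t efs -o tls fs-0123456789abcdef0:/ /mnt/app-output
"""

ec2.create_launch_template(
    LaunchTemplateName="app-efs-template",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",   # placeholder AMI
        "InstanceType": "m5.large",
        # User data in a launch template must be base64-encoded
        "UserData": base64.b64encode(user_data.encode()).decode(),
    },
)
```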

Mikado211 (Option: C)

Technically A could work: ECS is often recommended by AWS when minimum operational overhead is required, and S3 is durable and highly scalable, BUT it is not a "traditional" file system structure. In an S3 bucket there is no real directory structure, only objects and key prefixes that simulate one. B is wrong because of EKS, which requires more management. EFS is recommended for minimum operational overhead instead of EBS, so C (EC2 + EFS) is preferred here over D (EC2 + EBS).
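
A quick illustration of this point about S3 only simulating folders: "directories" are just key prefixes, which the API reports under CommonPrefixes when you list with a delimiter. The bucket and key names below are hypothetical.

```python
# Sketch: S3 has no real directories, only keys and prefixes.
import boto3

s3 = boto3.client("s3")

resp = s3.list_objects_v2(
    Bucket="example-output-bucket",
    Prefix="reports/2024/",
    Delimiter="/",
)

# Objects that sit directly under the prefix
for obj in resp.get("Contents", []):
    print("object:", obj["Key"])

# "Subfolders" are nothing more than longer prefixes grouped by the delimiter
for cp in resp.get("CommonPrefixes", []):
    print("prefix:", cp["Prefix"])
```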

pentium75 (Option: C)

"File system structure" = EFS, which also meets all the other requirements.

leosmal

The key is Multi-AZ; EBS does not support it.

bishtr3

C: EFS, as it is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files. Multiple compute instances, including Amazon EC2, Amazon ECS, and AWS Lambda, can access an Amazon EFS file system at the same time, providing a common data source for workloads.
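
As a hedged sketch of that "other compute can share the same file system" point, a Lambda function can mount EFS through an access point. All ARNs, subnet and security group IDs, and the deployment package below are placeholders.

```python
# Sketch: a Lambda function that mounts the shared EFS file system.
import boto3

lam = boto3.client("lambda")

lam.create_function(
    FunctionName="process-output-files",
    Runtime="python3.12",
    Role="arn:aws:iam::123456789012:role/lambda-efs-role",  # placeholder role
    Handler="app.handler",
    Code={"ZipFile": open("app.zip", "rb").read()},          # placeholder package
    # The function must run inside the VPC that hosts the EFS mount targets
    VpcConfig={
        "SubnetIds": ["subnet-aaa111"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
    # Mount the file system (via an EFS access point) at /mnt/app-output
    FileSystemConfigs=[{
        "Arn": "arn:aws:elasticfilesystem:us-east-1:123456789012:access-point/fsap-0123456789abcdef0",
        "LocalMountPath": "/mnt/app-output",
    }],
)
```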

HectorCosta (Option: C)

Key words: standard file system and scales automatically. S3 is an object store, so it fails the "standard file system" requirement; we can discard A. EBS does not scale automatically, failing the "scales automatically" requirement, so we can discard B and D.
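
A small sketch of the "scales automatically" contrast, under the assumption that a file system and a volume already exist (the IDs are placeholders): EFS simply reports its current size, which grows and shrinks with your data, while an EBS volume has a fixed provisioned size you must modify yourself.

```python
# Sketch: EFS size is metered automatically; EBS size is provisioned.
import boto3

efs = boto3.client("efs")
ec2 = boto3.client("ec2")

fs = efs.describe_file_systems(FileSystemId="fs-0123456789abcdef0")["FileSystems"][0]
print("EFS current size (bytes):", fs["SizeInBytes"]["Value"])   # no capacity provisioning

vol = ec2.describe_volumes(VolumeIds=["vol-0123456789abcdef0"])["Volumes"][0]
print("EBS provisioned size (GiB):", vol["Size"])                # fixed until you resize it
```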

sidharthwader

C is the only option that provides a standard file system and high availability. EBS is scoped to a single Availability Zone, whereas EFS has Regional scope.

awsgeek75 (Option: C)

Standard file system that is highly available: EFS. Auto scaling, highly available compute: EC2, ECS, or EKS can work.
A: Not suitable because S3 is object (blob) storage, not a file system.
B: EKS is OK but EBS is not HA.
D: EBS is not HA.
So by elimination, C is the best option.

wantu (Option: C)

Keywords: auto scaling and files.

TariqKipkemei (Option: C)

Standard file system structure, scales automatically, requires minimum operational overhead = Amazon Elastic File System (Amazon EFS)

miki111

Option C is the correct answer

Bmarodi (Option: C)

Option C meets the requirements.

joshnort (Option: C)

Keywords: file system structure, scales automatically, highly available, and minimal operational overhead

harirkmusa

Standard file system structure is the KEYWORD here; S3 and EBS are not file-based storage, EFS is. So the automatic answer is C.

NitiATOS (Option: C)

I will go with C. If the app is deployed across multiple AZs, the compute instances are separate, but the storage needs to be common. EFS is the easiest way to configure shared storage compared to shared EBS, hence C suits best.
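
Closing the loop on option C with one more hedged sketch (names, template, and subnet IDs are the same hypothetical placeholders used above): an Auto Scaling group spread across subnets in several AZs, launching instances from the EFS-mounting launch template so each instance shares the same file system.

```python
# Sketch: a Multi-AZ Auto Scaling group using the EFS-mounting launch template.
import boto3

asg = boto3.client("autoscaling")

asg.create_auto_scaling_group(
    AutoScalingGroupName="app-efs-asg",
    LaunchTemplate={"LaunchTemplateName": "app-efs-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    # Subnets in different AZs make the compute layer Multi-AZ;
    # EFS mount targets in the same AZs keep storage access local to each AZ.
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222,subnet-ccc333",
)
```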