Exam SAP-C02
Question 447

A company is deploying a new web-based application and needs a storage solution for the Linux application servers. The company wants to create a single location for updates to application data for all instances. The active dataset will be up to 100 GB in size. A solutions architect has determined that peak operations will occur for 3 hours daily and will require a total of 225 MiBps of read throughput.

The solutions architect must design a Multi-AZ solution that makes a copy of the data available in another AWS Region for disaster recovery (DR). The DR copy must have an RPO of less than 1 hour.

Which solution will meet these requirements?

    Correct Answer: A

The correct solution is to deploy a new Amazon Elastic File System (Amazon EFS) Multi-AZ file system. EFS supports cross-Region replication, meeting the requirement for a Multi-AZ solution with a disaster recovery copy in another Region; replication runs continuously, so the RPO of less than 1 hour is met. For throughput, the key detail is that 100 GB is only the active dataset, not the whole file system. With 75 MiBps of provisioned throughput, the file system accrues burst credits during the 21 off-peak hours each day, and even a 100 GB file system can burst to 300 MiBps of read throughput, which covers the 3-hour daily peak of 225 MiBps.
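
Below is a minimal boto3 sketch of what option A describes, assuming an illustrative Region pair and a hypothetical creation token (the option itself only fixes the 75 MiBps of provisioned throughput and the DR-Region replica):

```python
import time
import boto3

efs = boto3.client("efs", region_name="us-east-1")

# Omitting AvailabilityZoneName creates a Regional file system, which stores
# data redundantly across multiple Availability Zones (the Multi-AZ part).
fs = efs.create_file_system(
    CreationToken="webapp-shared-data",  # hypothetical token
    PerformanceMode="generalPurpose",
    ThroughputMode="provisioned",
    ProvisionedThroughputInMibps=75.0,
    Encrypted=True,
)
fs_id = fs["FileSystemId"]

# The file system must be available before replication can be configured.
while (
    efs.describe_file_systems(FileSystemId=fs_id)["FileSystems"][0]["LifeCycleState"]
    != "available"
):
    time.sleep(5)

# Cross-Region replication keeps a read-only copy in the DR Region,
# which is what satisfies the < 1 hour RPO requirement.
efs.create_replication_configuration(
    SourceFileSystemId=fs_id,
    Destinations=[{"Region": "us-west-2"}],
)
```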

Discussion
e4bc18e

So practically everyone here is wrong, because it is A. Here is why. B is wrong because, for one, there is no such thing as a bursting mode for Lustre (that is an EFS thing), and also Backup will not work for the RPO. C is obviously wrong because gp3 can't be shared via Multi-Attach. D is wrong because DataSync tasks cannot be scheduled more frequently than hourly, so you don't meet the RPO. All of those are easily wrong because they contain bad information. They fooled everyone on A because all the question says is that the 'active dataset' is 100 GB, not the entire file system. EFS accumulates bursting credits: for every 100 GB of file system size you can burst up to 300 MiBps for up to 72 minutes per day. So you provision 75 MiBps, because that averages out over time and you aren't being overcharged for the provisioned size.
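
The arithmetic behind that argument, as a quick sketch (simplified credit model; the constants come from the question):

```python
# Back-of-the-envelope check of the burst-credit argument (simplified:
# real EFS accounting also meters reads at one-third the rate of writes).
PEAK_READ_MIBPS = 225    # required read throughput at peak
PEAK_HOURS = 3           # peak window per day
PROVISIONED_MIBPS = 75   # throughput provisioned in option A

# Total data read during the daily peak, in MiB.
peak_mib = PEAK_READ_MIBPS * PEAK_HOURS * 3600

# The same work spread over the whole day needs far less than 75 MiBps,
# so credits accrued off-peak fund the 300 MiBps read bursts at peak.
avg_mibps = peak_mib / (24 * 3600)
print(f"average over 24h: {avg_mibps:.1f} MiBps")  # ~28.1 MiBps
print(avg_mibps < PROVISIONED_MIBPS)               # True
```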

pangchn (Option: D)

D. A sneaky question, since my first impression was to go for A, but it is wrong due to the 75 MiBps provisioned throughput mode. What's the calculation here? One Region has 3 AZs, so 75 x 3 = 225? EFS is not provisioned that way. Even then, 225 is the total throughput, whereas the question asks for 225 of read throughput alone, implying the total would be more like 225 + XXX. Anyway, A is wrong. https://docs.aws.amazon.com/efs/latest/ug/performance.html C is wrong since EBS Multi-Attach doesn't support gp3: https://docs.aws.amazon.com/ebs/latest/userguide/ebs-volumes-multi.html
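
On the gp3 point, a hedged boto3 sketch (AZ, size, and IOPS are illustrative assumptions): Multi-Attach only works with Provisioned IOPS volumes, and even then only within a single AZ, so C can't be a Multi-AZ solution either way.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# This would be rejected by the API: gp3 does not support Multi-Attach.
# ec2.create_volume(AvailabilityZone="us-east-1a", Size=100,
#                   VolumeType="gp3", MultiAttachEnabled=True)

# Multi-Attach requires an io1/io2 volume, and the attached instances
# must all live in the same Availability Zone as the volume.
vol = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="io2",
    Iops=3000,
    MultiAttachEnabled=True,
)
print(vol["VolumeId"])
```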

pangchn

B is wrong because an hourly AWS Backup job won't meet the RPO requirement (less than 1 hour). "The backup frequency determines how often AWS Backup creates a snapshot backup. Using the console, you can choose a frequency of every hour, 12 hours, daily, weekly, or monthly. You can also create a cron expression that creates snapshot backups as frequently as hourly. Using the AWS Backup CLI, you can schedule snapshot backups as frequently as hourly." https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html
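
A sketch of that limit with hypothetical plan and vault names: the tightest schedule AWS Backup accepts is hourly, so a snapshot-based RPO can approach a full hour in the worst case, which fails the "less than 1 hour" requirement.

```python
import boto3

backup = boto3.client("backup", region_name="us-east-1")

backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "hourly-dr-plan",  # hypothetical name
        "Rules": [
            {
                "RuleName": "hourly",
                "TargetBackupVaultName": "dr-vault",  # hypothetical vault
                # cron(0 * ? * * *) = top of every hour; AWS Backup offers
                # nothing more frequent than an hourly schedule.
                "ScheduleExpression": "cron(0 * ? * * *)",
            }
        ],
    }
)
```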

Helpnosense (Option: A)

A. EFS supports cross-Region replication. e4bc18e already pointed out why D is wrong.

trungtd (Option: A)

Big thanks to e4bc18e.

Zas1 (Option: A)

A. Solution written up by e4bc18e.

VerRi (Option: A)

D involves managing separate file systems that do not natively offer a "single location" experience across regions without additional configuration and replication mechanisms.

vip2 (Option: A)

A scheduled task runs at a frequency that you specify, with a minimum interval of 1 hour. https://docs.aws.amazon.com/datasync/latest/userguide/task-scheduling.html
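
A sketch of that constraint with placeholder location ARNs: rate(1 hour) is the smallest interval a DataSync schedule accepts, so option D's 10-minute replication cadence isn't expressible.

```python
import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-src",
    DestinationLocationArn="arn:aws:datasync:us-west-2:111122223333:location/loc-dst",
    # rate(1 hour) is the minimum; rate(10 minutes) would be rejected,
    # so the DR copy can lag by up to an hour and the RPO is not met.
    Schedule={"ScheduleExpression": "rate(1 hour)"},
)
```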

titi_r (Option: D)

D is correct. "You can use DataSync to transfer files between two FSx for OpenZFS file systems, and also move data to a file system in a different AWS Region or AWS account. You can also use DataSync with FSx for OpenZFS file systems for other tasks. For example, you can perform one-time data migrations, periodically ingest data for distributed workloads, and schedule replication for data protection and recovery." https://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/migrate-files-to-fsx-datasync.html

e4bc18e

This is wrong: a DataSync task cannot be scheduled more frequently than once an hour, so the under-1-hour RPO is not met.

titi_r

@e4bc18e, it seems you are right. Indeed, DataSync scheduling only goes as granular as 1 hour. Also found this: "If the file system's baseline throughput exceeds the Provisioned throughput amount, then it automatically uses the Bursting throughput..." For 1 TiB of metered data in Standard storage, it can burst to 300 MiBps of read throughput for 12 hours per day. https://docs.aws.amazon.com/efs/latest/ug/performance.html#throughput-modes
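
One way to sanity-check the burst-credit argument on a real workload is to watch the file system's BurstCreditBalance metric in CloudWatch (the file system ID and time window below are illustrative):

```python
from datetime import datetime, timedelta, timezone
import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")

resp = cw.get_metric_statistics(
    Namespace="AWS/EFS",
    MetricName="BurstCreditBalance",
    Dimensions=[{"Name": "FileSystemId", "Value": "fs-0123456789abcdef0"}],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
    EndTime=datetime.now(timezone.utc),
    Period=3600,
    Statistics=["Minimum"],
)

# If the minimum balance never approaches zero across the 3-hour peak,
# the burst-credit sizing holds for this workload.
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Minimum"])
```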

ovladan (Option: B)

https://docs.aws.amazon.com/fsx/latest/LustreGuide/performance.html#fsx-aggregate-perf

titi_r

“B” is wrong because with AWS Backup you can do a backup as frequently as every hour, but the RPO must be less than 1 hour. https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html#create-backup-plan-console

adelynllllllllll

D: EFS throughput is related to the size of the file system, but the question says the active dataset will only be up to 100 GB; at that size, the throughput would be lower than requested. So D.

Dgix (Option: D)

D is the answer. A would also have worked.

CMMC (Option: D)

Amazon FSx for OpenZFS is a fully managed file system service that supports native replication between regions, making it well-suited for DR scenarios with a low RPO requirement. Using AWS DataSync for replication every 10 minutes ensures that the DR copy stays up to date with minimal data loss. This solution provides the required read throughput, data replication, and DR capabilities with less operational overhead.

e4bc18e

Wrong. DataSync tasks cannot be scheduled more frequently than hourly, so you cannot schedule a DataSync task to run every 10 minutes. Apparently everyone is forgetting about burst credits for EFS. The question only says the "active dataset" is 100 GB, not the entire file system. For every 100 GB of provisioned EFS space you can burst to 300 MiBps for up to 72 minutes per day.