Question 6 of 208

A company has many applications. Different teams in the company developed the applications by using multiple languages and frameworks. The applications run on premises and on different servers with different operating systems. Each team has its own release protocol and process. The company wants to reduce the complexity of the release and maintenance of these applications.

The company is migrating its technology stacks, including these applications, to AWS. The company wants centralized control of source code, a consistent and automatic delivery pipeline, and as few maintenance tasks as possible on the underlying infrastructure.

What should a DevOps engineer do to meet these requirements?

    Correct Answer: D

    To meet the company's requirements of centralized source control, a consistent and automatic delivery pipeline, and minimal maintenance of the underlying infrastructure, the optimal solution is to create one AWS CodeCommit repository for each application. Using AWS CodeBuild to build one Docker image per application and storing the images in Amazon Elastic Container Registry (Amazon ECR) keeps builds consistent across the teams' different languages and frameworks. Deploying the applications to Amazon Elastic Container Service (Amazon ECS) on AWS Fargate further reduces the maintenance burden: Fargate provides serverless compute for containers, so there are no servers to provision, patch, or manage.
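    As a sketch of what each application's build could look like, the following `buildspec.yml` builds a Docker image and pushes it to ECR. The account ID, Region, and repository name are placeholders, not values from the question:

```yaml
version: 0.2

phases:
  pre_build:
    commands:
      # Authenticate Docker to the ECR registry (account/Region are placeholders)
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com
  build:
    commands:
      # Build and tag the application image
      - docker build -t my-app:latest .
      - docker tag my-app:latest 111122223333.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
  post_build:
    commands:
      # Push the image to ECR for ECS on Fargate to deploy
      - docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
```

    Keeping one such buildspec per repository is what makes the pipeline consistent regardless of the application's language or framework.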

Question 7 of 208

A DevOps engineer is developing an application for a company. The application needs to persist files to Amazon S3. The application needs to upload files with different security classifications that the company defines. These classifications include confidential, private, and public. Files that have a confidential classification must not be viewable by anyone other than the user who uploaded them. The application uses the IAM role of the user to call the S3 API operations.

The DevOps engineer has modified the application to add a DataClassification tag with the value of confidential and an Owner tag with the uploading user's ID to each confidential object that is uploaded to Amazon S3.

Which set of additional steps must the DevOps engineer take to meet the company's requirements?

    Correct Answer: A

    To ensure that files with a confidential classification are viewable only by the user who uploaded them, the DevOps engineer should modify the S3 bucket's ACL to grant `bucket-owner-read` access to the uploading user's IAM role. In addition, an IAM policy that allows `s3:GetObject` on the S3 bucket only when `aws:ResourceTag/DataClassification` equals `confidential` and `s3:ExistingObjectTag/Owner` equals `${aws:userid}` restricts read access to the uploading user, which is exactly what the requirement demands.
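    A minimal sketch of such a policy statement, using the condition keys named in the answer (the bucket name is a placeholder, not a value from the question):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ConfidentialObjectsOwnerOnly",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/DataClassification": "confidential",
          "s3:ExistingObjectTag/Owner": "${aws:userid}"
        }
      }
    }
  ]
}
```

    The `${aws:userid}` policy variable resolves at request time, so a single policy covers every uploading user without per-user statements.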

Question 8 of 208

A company has developed an AWS Lambda function that handles orders received through an API. The company is using AWS CodeDeploy to deploy the Lambda function as the final stage of a CI/CD pipeline.

A DevOps Engineer has noticed there are intermittent failures of the ordering API for a few seconds after deployment. After some investigation, the DevOps Engineer believes the failures are due to database changes not having fully propagated before the Lambda function begins executing.

How should the DevOps Engineer overcome this?

    Correct Answer: A

    To ensure the Lambda function doesn't start handling orders before the necessary database changes have fully propagated, you should add a BeforeAllowTraffic hook to the AppSpec file. This hook allows you to test and confirm that all required database changes are completed before the new version of the Lambda function starts handling traffic, preventing intermittent failures during deployment.
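    A minimal AppSpec sketch for a Lambda deployment with this hook might look like the following. The function name, alias, version numbers, and the name of the validation Lambda function are all placeholders, not values from the question:

```yaml
version: 0.0
Resources:
  - OrderFunction:
      Type: AWS::Lambda::Function
      Properties:
        Name: order-handler        # placeholder function name
        Alias: live
        CurrentVersion: "1"
        TargetVersion: "2"
Hooks:
  # CodeDeploy invokes this Lambda function before shifting traffic to the
  # new version; it should verify the database changes have propagated.
  - BeforeAllowTraffic: "ValidateDatabaseReady"
```

    If the validation function reports failure to CodeDeploy, the deployment stops and traffic never reaches the new version, which prevents the intermittent post-deployment errors.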

Question 9 of 208

A software company wants to automate the build process for a project where the code is stored in GitHub. When the repository is updated, source code should be compiled, tested, and pushed to Amazon S3.

Which combination of steps would address these requirements? (Choose three.)

    Correct Answer: A, B, C

    To automate the build process for a project whose code is stored in GitHub, with the source compiled, tested, and pushed to Amazon S3, three steps are needed together. Adding a buildspec.yml file to the source code defines the build, test, and artifact instructions. Configuring a GitHub webhook to trigger a build on every push to the repository ensures builds start automatically. Creating an AWS CodeBuild project with GitHub as the source repository performs the build itself, following the instructions in the buildspec.yml. Together these steps form a continuous integration and delivery pipeline that meets the project's requirements.
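    A sketch of what that buildspec.yml could contain, assuming a Maven-based Java project (the build tool and artifact path are illustrative placeholders; the S3 destination is set on the CodeBuild project's artifacts configuration, not in the buildspec):

```yaml
version: 0.2

phases:
  build:
    commands:
      # Compile the source and run the test suite
      - mvn test
      - mvn package

artifacts:
  files:
    # Artifacts listed here are uploaded to the S3 location
    # configured on the CodeBuild project
    - target/*.jar
```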

Question 10 of 208

An online retail company based in the United States plans to expand its operations to Europe and Asia in the next six months. Its product currently runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. All data is stored in an Amazon Aurora database instance.

When the product is deployed in multiple regions, the company wants a single product catalog across all regions, but for compliance purposes, its customer information and purchases must be kept in each region.

How should the company meet these requirements with the LEAST amount of application changes?

    Correct Answer: C

    To meet the company's requirements with the least amount of application changes, the company should use Aurora cross-Region read replicas for the product catalog and additional local Aurora instances in each Region for the customer information and purchases. This approach leverages the existing Aurora setup, so the application's data access code stays largely unchanged. Switching to different database technologies such as Amazon DynamoDB or Amazon Redshift would require significant changes to the application's schema and CRUD operations, which contradicts the requirement. Aurora read replicas provide a single product catalog across all Regions, while local Aurora instances in each Region keep customer information and purchases localized for compliance.
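    As an illustration, a cross-Region Aurora read replica for the catalog can be created by calling `create-db-cluster` in the target Region with `--replication-source-identifier` pointing at the source cluster, then adding a reader instance. This is a sketch only; the Regions, identifiers, ARN, engine, and instance class are placeholders, not values from the question:

```shell
# Create the replica cluster in the target Region, replicating from
# the primary catalog cluster in us-east-1 (ARN is a placeholder)
aws rds create-db-cluster \
    --region eu-west-1 \
    --db-cluster-identifier catalog-replica-eu \
    --engine aurora-mysql \
    --replication-source-identifier arn:aws:rds:us-east-1:111122223333:cluster:catalog-primary

# Add a reader instance to the replica cluster so it can serve queries
aws rds create-db-instance \
    --region eu-west-1 \
    --db-instance-identifier catalog-replica-eu-1 \
    --db-cluster-identifier catalog-replica-eu \
    --db-instance-class db.r5.large \
    --engine aurora-mysql
```

    Customer and purchase data would instead live in independent Aurora clusters created locally in each Region, with no cross-Region replication.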