Which is correct?
Given the requirement to keep the current design while minimizing costs, using Auto Scaling worker instances running on Spot Instances, triggered by SQS queue depth, is optimal for processing messages efficiently and cost-effectively. Once the data is processed, archiving it to Glacier further reduces costs, since Glacier offers very low storage pricing for infrequently accessed data. This setup leverages AWS services effectively to minimize operational costs while meeting the manager's requirements.
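The queue-depth-driven scale-out can be pictured as a sizing function. This is a minimal sketch with assumed numbers (`MESSAGES_PER_WORKER`, the min/max bounds); a real deployment would instead attach an Auto Scaling policy to the SQS `ApproximateNumberOfMessagesVisible` CloudWatch metric rather than hand-roll this logic:

```python
# Illustrative sizing only -- the constants below are assumptions,
# not values from any AWS documentation.
MESSAGES_PER_WORKER = 100   # assumed target backlog per worker
MIN_WORKERS, MAX_WORKERS = 1, 20

def desired_worker_count(queue_depth: int) -> int:
    """Return how many Spot worker instances to run for a given backlog."""
    wanted = -(-queue_depth // MESSAGES_PER_WORKER)  # ceiling division
    return max(MIN_WORKERS, min(MAX_WORKERS, wanted))
```

An empty queue still keeps one worker alive to drain stragglers, and the upper bound caps Spot spend.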
Which approach provides a cost-effective, scalable mitigation to this kind of attack?
The most effective and scalable solution is to add a new WAF tier: an additional ELB in front of an Auto Scaling group of EC2 instances running a WAF, so that traffic is filtered before it reaches the web tier. This approach scales dynamically with traffic load and supports sophisticated filtering rules that can adapt to new threats. Although the phrase 'host-based WAF' is somewhat imprecise, in this context it refers to running a third-party WAF product on those EC2 instances. This method effectively mitigates the risk of unauthorized access by providing a robust, scalable defense at the perimeter of the infrastructure.
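As a rough illustration of one kind of rule such a WAF tier might enforce, here is a minimal per-source rate limiter. This is a sketch, not any vendor's implementation: production WAFs use sliding windows and state shared across instances, whereas this counter is in-memory and reset per window:

```python
from collections import defaultdict

class RateLimiter:
    """Toy fixed-window rate limit per source IP (illustration only)."""

    def __init__(self, limit_per_window: int):
        self.limit = limit_per_window
        self.counts = defaultdict(int)

    def allow(self, source_ip: str) -> bool:
        # Count this request and admit it only while under the limit.
        self.counts[source_ip] += 1
        return self.counts[source_ip] <= self.limit

    def reset_window(self):
        # Called at each window boundary to start counting afresh.
        self.counts.clear()
```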
✑ Provide the ability for real-time analytics of the inbound biometric data
✑ Ensure processing of the biometric data is highly durable, elastic, and parallel
✑ The results of the analytic processing should be persisted for data mining
Which architecture outlined below will meet the initial requirements for the collection platform?
Utilizing Amazon Kinesis to collect the inbound sensor data allows for real-time analytics, since it processes streaming data with low latency and durably stores records across shards. Kinesis client applications can then analyze the data in parallel, one consumer per shard, making the processing elastic. Saving the results to a Redshift cluster using EMR (Elastic MapReduce) satisfies the requirement to persist results for data mining, as Redshift is optimized for complex queries and large-scale data analysis.
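Kinesis achieves this parallelism by hashing each record's partition key into one shard's hash-key range. The sketch below mimics that mapping under the simplifying assumption of evenly split ranges; in a real stream the shard ranges come from the `DescribeStream` API and need not be uniform:

```python
import hashlib

def shard_for_key(partition_key: str, num_shards: int) -> int:
    """Map a partition key to a shard index the way Kinesis does:
    take the 128-bit MD5 of the key and find which range it falls in.
    Assumes evenly split ranges for illustration."""
    h = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    range_size = (2 ** 128) // num_shards
    return min(h // range_size, num_shards - 1)
```

Records with the same partition key always land on the same shard, which is what lets each consumer process its shard's records in order while shards are read in parallel.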
The application must have a highly available architecture.
Which alternatives should you consider? (Choose two.)
To design Internet connectivity for a VPC with highly available web servers, two effective options are: placing the web servers behind an Elastic Load Balancer (ELB) with a Route 53 CNAME record pointing at the ELB's DNS name, or assigning Elastic IPs (EIPs) to the web servers and using Route 53 health checks with DNS failover. The ELB distributes traffic evenly and keeps the application reachable if some servers fail, while the EIP approach lets Route 53 return only the addresses of healthy servers. Both approaches meet the requirement for high availability and Internet-facing connectivity.
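The EIP-plus-health-check option can be pictured as a record-selection function. The record shape below is purely illustrative (Route 53 evaluates health checks server-side); the one real behavior it mirrors is that Route 53 fails open, answering with all records when every health check is failing rather than returning nothing:

```python
def answer_records(records: list[dict]) -> list[str]:
    """Return the IPs a failover-aware DNS answer would contain.
    Each record is an illustrative dict: {"ip": str, "healthy": bool}."""
    healthy = [r["ip"] for r in records if r["healthy"]]
    # Fail open: if every record is unhealthy, answer with all of them,
    # mirroring Route 53's behavior when no record passes its check.
    return healthy or [r["ip"] for r in records]
```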
Elastic Beanstalk due to its tight integration with your developer tools and RDS due to its ease of management. Your QA team lead points out that you need to roll a sanitized set of production data into your environment on a nightly basis. Similarly, other software teams in your org want access to that same restored data via their EC2 instances in your VPC.
The optimal setup for persistence and security that meets the above requirements would be the following.
The optimal setup is to create your RDS instance separately and pass its DNS name to your application's DB connection string as an environment variable. By creating a security group for client machines and adding it as an allowed source in the inbound rules of the RDS instance's own security group, you ensure both persistence and security of the database. This approach decouples the database from the Elastic Beanstalk environment, so the data survives if the environment is terminated, and it lets you manage access selectively: any instance placed in the client security group gains DB access, and no one else does.
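That security-group relationship boils down to a single ingress rule whose source is the client group rather than a CIDR block. A minimal sketch, with placeholder group ID and an assumed MySQL port; with boto3, a dict of this shape would go inside the `IpPermissions` argument of `authorize_security_group_ingress` on the RDS instance's security group:

```python
def db_ingress_rule(client_sg_id: str, db_port: int = 3306) -> dict:
    """Build an ingress rule whose source is a security group, not a CIDR.
    client_sg_id is a placeholder; db_port assumes MySQL."""
    return {
        "IpProtocol": "tcp",
        "FromPort": db_port,
        "ToPort": db_port,
        # Referencing the client SG means group membership, not IP
        # addresses, controls access -- new client instances launched
        # into that group are covered automatically.
        "UserIdGroupPairs": [{"GroupId": client_sg_id}],
    }
```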