SAA-C03 Exam Questions

SAA-C03 Exam - Question 33


A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.

What should a solutions architect recommend to meet these requirements?

Correct Answer: C

For a scalable, near-real-time solution that shares millions of financial transactions, Amazon Kinesis Data Streams can ingest the transaction stream. AWS Lambda can then process the records to remove sensitive data before they are stored in Amazon DynamoDB, which offers low-latency retrieval. This setup provides near-real-time streaming and processing, meets the scalability requirement, and lets the other internal applications consume the data directly from Kinesis Data Streams.
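As a rough sketch of what the Lambda consumer in this pipeline might look like (the field names and table name are hypothetical; this assumes the standard Kinesis event shape that Lambda receives, and the DynamoDB write is shown only as a comment):

```python
import base64
import json

# Fields treated as sensitive in this hypothetical transaction schema
SENSITIVE_FIELDS = {"card_number", "cvv", "ssn"}

def scrub(transaction: dict) -> dict:
    """Return a copy of the transaction with sensitive fields removed."""
    return {k: v for k, v in transaction.items() if k not in SENSITIVE_FIELDS}

def handler(event, context):
    """Lambda handler for a Kinesis Data Streams trigger.

    Each record's data arrives base64-encoded; after scrubbing, the
    cleaned items would be written to DynamoDB (write call omitted here).
    """
    cleaned = []
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        cleaned.append(scrub(payload))
    # In a real deployment, batch-write `cleaned` to a DynamoDB table, e.g.:
    # boto3.resource("dynamodb").Table("transactions").put_item(Item=item)
    return cleaned
```

Other internal applications would register as additional consumers of the same stream, which is what makes option C's fan-out work.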

Discussion

17 comments
ArielSchivo (Option: C)
Oct 17, 2022

I would go for C. The tricky phrase is "near-real-time solution", which points to Firehose, but Firehose can't send data to DynamoDB, so that leaves C as the best option. Kinesis Data Firehose currently supports Amazon S3, Amazon Redshift, Amazon OpenSearch Service, Splunk, Datadog, New Relic, Dynatrace, Sumo Logic, LogicMonitor, MongoDB, and HTTP endpoints as destinations. https://aws.amazon.com/kinesis/data-firehose/faqs/#:~:text=Kinesis%20Data%20Firehose%20currently%20supports,HTTP%20End%20Point%20as%20destinations.

Lonojack
Jan 25, 2023

This was a really tough one, but you have the best explanation on here, with a reference to back it up. Thanks. I'm going with answer C!

lizzard812
Feb 3, 2023

Sorry, but I still can't see how Kinesis Data Streams is "scalable", since you have to provision the number of shards in advance?

habibi03336
Feb 22, 2023

"easily stream data at any scale" This is a description of Kinesis Data Stream. I think you can configure its quantity but still not provision and manage scalability by yourself.

SaraSundaram
Mar 19, 2023

There are many questions involving Firehose and Data Streams; you need to know both in detail to answer them. Thanks for the explanation.

diabloexodia
Jul 13, 2023

Data Streams is used if you want real-time results, while with Firehose you generally use the data at a later point in time by storing it somewhere. Hence, if you see "real time", the answer is most probably Kinesis Data Streams.

JesseeS
Oct 19, 2022

The answer is C, because Firehose does not support DynamoDB as a destination, and another key word is "share": Kinesis Data Streams is the correct choice. Pay attention to key words; AWS likes to trip you up to make sure you know the services.

TariqKipkemei (Option: C)
Aug 3, 2023

Scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications = Amazon Kinesis Data Streams. Remove sensitive data from transactions = AWS Lambda. Store transaction data in a document database for low-latency retrieval = Amazon DynamoDB.

Pics00094 (Option: C)
Feb 18, 2024

Need to know: 1) Lambda integration; 2) the difference between real time (Kinesis Data Streams) and near real time (Kinesis Data Firehose); 3) Firehose can't target DynamoDB.

AWSStudyBuddy
Oct 19, 2023

C is the best solution for the following reasons: 1. Real-time Data Stream: To share millions of financial transactions with other apps, you need to be able to ingest data in real-time, which is made possible by Amazon Kinesis Data Streams. 2. Data Transformation: You can cleanse and eliminate sensitive data from transactions before storing them in Amazon DynamoDB by utilizing AWS Lambda with Kinesis Data Streams. This takes care of the requirement to handle sensitive data with care. 3. Scalability: DynamoDB and Amazon Kinesis are both extremely scalable technologies that can manage enormous data volumes and adjust to the workload. 4. Low-Latency retrieval: Applications requiring real-time data can benefit from low-latency retrieval, which is ensured by storing the processed data in DynamoDB.

AWSStudyBuddy
Oct 19, 2023

Choices A, B, and D are limited in certain ways: • Option A (DynamoDB with Streams) does not provide real-time data streaming; additional components would need to be implemented in order to share data in real time. • Option B (Kinesis Data Firehose) lacks the real-time processing capabilities of Kinesis Data Streams and is primarily used for data delivery to destinations such as S3. • Option D (batch processing with S3) is not the best choice for near-real-time use cases. It adds latency and batch-processing overhead, which is incompatible with the need for real-time data sharing. Using the advantages of Lambda, DynamoDB, and Kinesis Data Streams, Option C offers a scalable, real-time, and effective solution for the given use case.

awsgeek75 (Option: C)
Jan 13, 2024

A: DynamoDB Streams are change logs, not fit for real-time sharing. B: S3 is not a document database; it's blob storage. D: S3 and files are not a database. C: Kinesis + Lambda + DynamoDB is a high-performance, low-latency, scalable solution.

JulianWaksmann
Feb 5, 2024

I think C is bad too, because it isn't near real time.

vi24
Mar 8, 2024

I chose B. "Near real time" is very specific to Kinesis Firehose, which is a better option anyway. The rest of the answer makes sense too. C is wrong: it says sensitive data is removed by Lambda and the transaction data is then stored in DynamoDB, while it goes on to say other applications access the transaction data from the Kinesis data stream!

sohailn
Aug 9, 2023

Kinesis Data Firehose optionally supports Lambda for transformation.

Ak9kumar
Sep 24, 2023

I picked B. We need to understand how Kinesis Data Firehose works to answer this question right.

spw7
Oct 27, 2023

Firehose cannot send data to DynamoDB.

wabosi (Option: C)
Nov 11, 2023

Correct answer is C. As some have already commented, "near-real-time" could make you think about Firehose, but its destinations are third-party partner services, Amazon S3, Amazon Redshift, Amazon OpenSearch, and HTTP endpoints, so DynamoDB can't be used in this scenario.

djgodzilla (Option: C)
Dec 14, 2023

Kinesis Data Streams stores data for later processing by applications, a key difference from Firehose, which delivers data directly to AWS services.

bujuman (Option: C)
Dec 21, 2023

Data Streams can handle near-real-time delivery, and with Lambda the data can be stored in DynamoDB.

A_jaa (Option: C)
Jan 13, 2024

Answer: C

the_mellie (Option: C)
May 24, 2024

With multiple consumers and on-the-fly modification, it seems like the most logical choice.

Lin878 (Option: C)
Jul 7, 2024

Q: What is a destination in Firehose? A destination is the data store where your data will be delivered. Firehose currently supports Amazon S3, Amazon Redshift, Amazon OpenSearch Service, Splunk, Datadog, New Relic, Dynatrace, Sumo Logic, LogicMonitor, MongoDB, and HTTP endpoints as destinations. https://aws.amazon.com/firehose/faqs/