Exam DOP-C02
Question 195

A company is developing an application that will generate log events. The log events consist of five distinct metrics every one tenth of a second and produce a large amount of data.

The company needs to configure the application to write the logs to Amazon Timestream. The company will configure a daily query against the Timestream table.

Which combination of steps will meet these requirements with the FASTEST query performance? (Choose three.)

A. Use batch writes to write multiple log events in a single write operation.
B. Use single writes to write each log event in its own write operation.
C. Treat each log as a single-measure record.
D. Treat each log as a multi-measure record.
E. Configure the memory store retention period to be longer than the magnetic store retention period.
F. Configure the memory store retention period to be shorter than the magnetic store retention period.

    Correct Answer: A, D, F

    To achieve the fastest query performance when writing log events to Amazon Timestream, the following steps should be taken:

    - Batch writes significantly reduce overhead by minimizing the number of individual write operations, thus increasing overall write throughput.
    - Treating each log as a multi-measure record allows efficient data retrieval and aggregation during queries, as it reduces the number of records to be processed.
    - Configuring the memory store retention period to be shorter than the magnetic store retention period ensures that the storage optimized for quick data access (the memory store) is used efficiently, without unnecessary older data occupying it, thereby maintaining speed for queries over recent data.
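Combining A and D, the write path can be sketched with the boto3 `WriteRecords` API, which accepts at most 100 records per call. The database, table, dimension, and metric names below are assumptions for illustration, not values from the question:

```python
from itertools import islice

def multi_measure_record(ts_ms, host, metrics):
    """Pack all five metrics into ONE Timestream record (MeasureValueType MULTI)
    instead of five single-measure records."""
    return {
        "Time": str(ts_ms),
        "TimeUnit": "MILLISECONDS",
        "Dimensions": [{"Name": "host", "Value": host}],
        "MeasureName": "app_metrics",
        "MeasureValueType": "MULTI",
        "MeasureValues": [
            {"Name": name, "Value": str(value), "Type": "DOUBLE"}
            for name, value in metrics.items()
        ],
    }

def batches(records, size=100):
    """WriteRecords accepts at most 100 records per call, so chunk accordingly."""
    it = iter(records)
    while chunk := list(islice(it, size)):
        yield chunk

if __name__ == "__main__":
    import boto3  # database/table names here are hypothetical
    client = boto3.client("timestream-write")
    records = [
        multi_measure_record(1700000000000 + i * 100, "app-01",
                             {"m1": 0.1, "m2": 0.2, "m3": 0.3, "m4": 0.4, "m5": 0.5})
        for i in range(250)  # 10 events/second: batch instead of 250 single writes
    ]
    for chunk in batches(records):
        client.write_records(DatabaseName="appdb", TableName="app_logs", Records=chunk)
```

Each tenth-of-a-second event becomes one record carrying all five measures, and 250 events go out in three `WriteRecords` calls rather than 250 single writes.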

Discussion
Diego1414 | Options: ADF

ADF: batch writes, treat each log as a multi-measure record, and the memory store retention period should be shorter. https://aws.amazon.com/blogs/database/improve-query-performance-and-reduce-cost-using-scheduled-queries-in-amazon-timestream/

Ramdi1 | Options: ACD

A. Batch writes: This significantly reduces overhead associated with individual write operations and improves overall write throughput. C. Single-measure record: For daily queries summarizing multiple metrics, treating each log as a single record helps Timestream leverage its optimized storage and query processing for single measures. D. Multi-measure record: While it seems counterintuitive, Timestream performs better with multiple measures within a single record compared to separate records for each metric. This allows for efficient data retrieval and aggregation during queries.

Ramdi1

Options B, E, and F are not recommended for optimal performance: B. Single write operations: This increases overhead and reduces write throughput, negating Timestream's scalability benefits. E. Longer memory store: While faster for recent data, it increases cost and doesn't impact daily queries focused on older, magnetic store data. F. Shorter memory store: Reduces cost but sacrifices potential performance gains for frequently accessed recent data, which might not be relevant for daily queries. By combining batch writes, single-measure records, and multi-measure records, the company can achieve the fastest query performance for their daily Timestream use case.

vortegon | Options: ADF

While E suggests configuring the memory store retention period to be longer than the magnetic store retention period, this is typically not aimed at optimizing query performance but rather at keeping data in the faster-access memory store for longer periods, which could be beneficial for workloads requiring frequent access to recent data. However, for the scenario described, focusing on efficient data ingestion methods (A and D) and understanding the role of retention periods (F) provides a balanced approach to achieving the fastest query performance for daily queries.
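The daily query the scenario describes could look like the following sketch, issued through the boto3 `timestream-query` client. The database, table, and metric column names are assumptions; with multi-measure records, one row carries all five metrics, so a single scan covers them all:

```python
def daily_query(database="appdb", table="app_logs"):
    # Hypothetical daily rollup: hourly averages of five metrics over the
    # last day, using Timestream SQL's bin() and ago() functions.
    return (
        'SELECT bin(time, 1h) AS hour, '
        'AVG(m1) AS avg_m1, AVG(m2) AS avg_m2, AVG(m3) AS avg_m3, '
        'AVG(m4) AS avg_m4, AVG(m5) AS avg_m5 '
        f'FROM "{database}"."{table}" '
        'WHERE time > ago(1d) '
        'GROUP BY bin(time, 1h) ORDER BY hour'
    )

if __name__ == "__main__":
    import boto3
    result = boto3.client("timestream-query").query(QueryString=daily_query())
```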

didek1986 | Options: ADF

ADF. A: improves write performance and efficiency. D: when you query for a specific measure in a multi-measure record, Timestream only scans the relevant measure, not the entire record, so query performance for a specific measure is not negatively impacted. A multi-measure record also reduces the number of records that need to be written and subsequently queried, which improves query performance. F: the memory store, which is optimized for write and query performance, is not filled with older data that is not frequently accessed.
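The retention split described for F can be set with the Timestream `UpdateTable` API. A minimal sketch, assuming the database/table names and using illustrative retention values (a short hot tier, a long cold tier):

```python
def retention_properties(memory_hours=24, magnetic_days=365):
    # Memory store (hot tier) keeps only recent data for fast access;
    # magnetic store (cold tier) holds the long-term history that the
    # daily query scans. Values here are illustrative assumptions.
    assert memory_hours / 24 <= magnetic_days, \
        "memory store retention must not exceed magnetic store retention"
    return {
        "MemoryStoreRetentionPeriodInHours": memory_hours,
        "MagneticStoreRetentionPeriodInDays": magnetic_days,
    }

if __name__ == "__main__":
    import boto3
    boto3.client("timestream-write").update_table(
        DatabaseName="appdb",   # hypothetical name
        TableName="app_logs",   # hypothetical name
        RetentionProperties=retention_properties(),
    )
```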

dkp | Options: ADF

ADF seems more relevant

thanhnv142 | Options: ADF

A, D, and F are correct. A: doing the job in batches optimizes cost and performance. B: should not do single writes. C: the app emits multiple metrics at the same time, so it should be a multi-measure record, not a single-measure one. D: correct. E: should not do this. F: correct.

Gomer | Options: ADE

My only hesitation is regarding how batch writes might improve query performance, other than that storing the data in a contiguous chunk could help a query later. As for multi-measure records and more memory, I defer to references: A: (YES) "When writing data to InfluxDB, write data in batches to minimize the network overhead related to every write request." D: (YES) "Multi-measure records results in lower query latency for most query types when compared to single-measure records." E: (YES) "The memory store is optimized for high throughput data writes and fast point-in-time queries." F: (NO) "The magnetic store is optimized for lower throughput late-arriving data writes, long term data storage, and fast analytical queries."

Gomer

Memory-based storage is always going to provide the "FASTEST query performance" compared to magnetic storage. If you want faster queries, provide a higher ratio of memory storage to magnetic storage.

Chelseajcole

ACE. Batch the write, write as whole and stay in memory longer