Exam DVA-C01
Question 110

A developer manages an application that interacts with Amazon RDS. After observing slow performance with read queries, the developer implements Amazon ElastiCache to update the cache immediately following the primary database update.

What will be the result of this approach to caching?

    Correct Answer: C

    Updating the cache immediately following the primary database update implements a write-through strategy. In this approach, every update to the database is also made to the cache, leading to a situation where infrequently requested data is also cached. This can result in the cache growing large and potentially becoming expensive, as it will store data that may never be read.
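The write-through behavior described above can be sketched in a few lines of Python. The class and key names are illustrative only, not a real RDS or ElastiCache API; plain dicts stand in for the database and the cache:

```python
# Minimal write-through sketch (illustrative names, not a real AWS API).
# Every database write is mirrored to the cache immediately, so even
# keys that are never read end up occupying cache memory.

class WriteThroughStore:
    def __init__(self):
        self.db = {}     # stands in for Amazon RDS
        self.cache = {}  # stands in for ElastiCache

    def write(self, key, value):
        self.db[key] = value      # 1) update the primary database
        self.cache[key] = value   # 2) update the cache immediately after

    def read(self, key):
        # Reads are served from the cache when possible.
        if key in self.cache:
            return self.cache[key]
        return self.db.get(key)

store = WriteThroughStore()
store.write("user:1", {"name": "Ana"})
store.write("user:2", {"name": "Bo"})   # may never be read, but is cached anyway
print(len(store.cache))  # 2: both keys are cached regardless of read traffic
```

This is exactly the "cache churn" downside: the cache grows with every write, including data that is never requested.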

Discussion
CHRIS12722222

C. This is write through strategy

Awsexam100

It's D. There is a cache miss penalty: each cache miss results in three trips: the initial request for data from the cache, a query of the database for the data, and writing the data to the cache. These misses can cause a noticeable delay in data getting to the application. https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html (Lazy loading)
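The three-trip miss penalty this comment describes belongs to the lazy-loading strategy, which can be contrasted with a short sketch (names are illustrative, not a real ElastiCache API):

```python
# Lazy-loading sketch (illustrative names). Data enters the cache only on
# a cache miss, which costs three trips: the cache lookup, the database
# query, and the write of the result back into the cache.

class LazyLoadingStore:
    def __init__(self):
        self.db = {"user:1": {"name": "Ana"}}  # stands in for Amazon RDS
        self.cache = {}                        # stands in for ElastiCache
        self.trips = 0

    def read(self, key):
        self.trips += 1                  # trip 1: ask the cache
        if key in self.cache:
            return self.cache[key]       # cache hit: one trip total
        value = self.db.get(key)
        self.trips += 1                  # trip 2: query the database
        self.cache[key] = value
        self.trips += 1                  # trip 3: write the data to the cache
        return value

store = LazyLoadingStore()
store.read("user:1")   # miss: 3 trips
store.read("user:1")   # hit: 1 trip
print(store.trips)     # 4
```

Note that in the question's scenario the cache is updated on every database write, not on a miss, which is why the thread's consensus lands on write-through rather than lazy loading.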

rcaliandroOption: C

C. Each update to the database is also mirrored to the cache (write-through instead of the lazy-loading cache strategy). Given that every update/write to the DB is propagated to the cache, even for really infrequently requested data, the result is a really heavy cache.

BobAWS23Option: C

Elasticache can implement both write-through and lazy loading. The key phrase was: to update the cache immediately following the primary database update. This is write-through. Look at "Cache churn." https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html

PawKam

I don't understand this comment. The "Cache churn" section it refers to clearly states "The disadvantages of write-through are as follows [...] most data is never read, which is a waste of resources.", which points to D.

PawKam

Sorry, my bad, I mixed answers. C seems to be correct. Now I understand this comment.

xdkonorek2Option: A

Imo it's A, the only obvious answer: per write you have to read the updated record from the database, because not every update has to be a full record, and in relational databases an update operation returns the number of rows updated, not whole entities, so you have to follow up with a read op. B: this behavior isn't defined in the question. C: how do we know there is infrequently accessed data at all? How do we know the TTL in the cache? We don't. D: "cache is updated only after a cache miss" wasn't defined in the question; the cache is updated only on updates, regardless of whether the cache key is missing or not.

SyreOption: A

Answer here is A. Option C is incorrect because infrequently requested data should not be written to the cache, as this can cause the cache to become bloated and inefficient. Option D is incorrect because the entryPoint parameter is used to specify a command that is run when the container starts, and is not related to passing environment variables to the container.

ics_911

Buddy you need to study more. Most of your answers were wrong in judgment and explanation.

qiaoliOption: C

the scenario is about write through, so C. D is about lazy loading, it's not mentioned

gaddour_medOption: C

It cannot be D, because the strategy used in the question updates the cache on each data update in the database, not when the cache misses.

tony554556

C is correct, your explanation is very clear. Thanks

BhagyashreeC

It is write-through caching, which updates the cache only when there is an item update/addition in the DB. This can cause infrequently requested data to be written to the cache as well (unless a TTL is defined). During a read, if the item is not present in the cache, the request results in a cache miss and goes to the DB, but with write-through this will NOT update the cache (because it's not an update/add operation), which rules out options B and D (unless lazy loading is configured, which is not the case here). Every write involves two trips: a write to the cache and a write to the database. I believe this rules out option A. Ref: https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html#Strategies.WriteThrough
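The key point in this comment, that under pure write-through a read miss does NOT populate the cache, can be shown in a small sketch (class and key names are hypothetical, not an ElastiCache API):

```python
# Sketch of a pure write-through store (illustrative names). A read miss
# falls back to the database but leaves the cache untouched; only writes
# populate the cache. This is why a miss here rules out options B and D.

class WriteThroughOnly:
    def __init__(self):
        # Row already in the database before caching was introduced,
        # so it has never passed through the write path.
        self.db = {"a": 1}
        self.cache = {}

    def write(self, key, value):
        # Every write is two trips: the database, then the cache.
        self.db[key] = value
        self.cache[key] = value

    def read(self, key):
        if key in self.cache:
            return self.cache[key]
        # Cache miss: serve from the DB; the cache stays unchanged.
        return self.db.get(key)

s = WriteThroughOnly()
assert s.read("a") == 1
print("a" in s.cache)  # False: the miss did not update the cache
```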

Ibrahim24Option: C

C: The cache will be updated with every change in the DB even though that data is not being read frequently. This is the write-through strategy. D cannot be the right answer since the cache is updated on DB changes, not on cache misses.

SD_CSOption: C

There would be a lot of writes.

BATSIE

D. When the cache cannot find the requested data, it is referred to as a cache miss. In this scenario, after the primary database is updated, the cache is immediately updated. However, if a read query is made and the requested data is not found in the cache, there will be a cache miss, which adds overhead to the initial response time. The cache will then be updated with the requested data, and subsequent read queries for the same data will be faster because the data is already in the cache.

Rpod

C. The cache will become expensive and huge.

KrokOption: C

C. This is the write-through strategy. As described in the course by Stephane Maarek on Udemy, this approach has the following con: "Cache churn – a lot of the data will never be read."

PhinxOption: C

I would go for C.

bearcandyOption: D

It would be C if it didn't include ElastiCache; this technique is called write-through. As it mentions ElastiCache, the technique is lazy loading, so the answer is D. Look at the official documentation: https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html

Phinx

ElastiCache can do both lazy loading and write-through. The catch here is "to update the cache immediately following the primary database update".

sichilam

C is correct