Exam: Certified Data Engineer Professional
Question 60

The data engineering team maintains a table of aggregate statistics through nightly batch updates. This includes total sales for the previous day alongside totals and averages for a variety of time periods, including the 7 previous days, year-to-date, and quarter-to-date. This table is named store_sales_summary and the schema is as follows:

The table daily_store_sales contains all the information needed to update store_sales_summary. The schema for this table is: store_id INT, sales_date DATE, total_sales FLOAT

If daily_store_sales is implemented as a Type 1 table and the total_sales column might be adjusted after manual data auditing, which approach is the safest to generate accurate reports in the store_sales_summary table?

    Correct Answer: A

    The safest approach to generate accurate reports in the store_sales_summary table is to implement the appropriate aggregate logic as a batch read against the daily_store_sales table and overwrite the store_sales_summary table with each update. This ensures that all data is recalculated based on the most recent and accurate information, effectively handling any adjustments made to the total_sales column after manual data auditing. This method avoids complications that can arise from incremental updates or upserts and eliminates any potential inconsistencies that could result from partial updates.
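For reference, a minimal PySpark sketch of this batch-overwrite pattern (Option A). The summary column names below are assumptions, since the store_sales_summary schema is not reproduced here:

```python
# A minimal sketch of the batch-overwrite approach; the summary column names
# are hypothetical because the real store_sales_summary schema is not shown.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

daily = spark.table("daily_store_sales")
yesterday = F.date_sub(F.current_date(), 1)

summary = (
    daily.groupBy("store_id").agg(
        # Previous-day total (column name is an assumption).
        F.sum(F.when(F.col("sales_date") == yesterday, F.col("total_sales")))
            .alias("previous_day_sales"),
        # Trailing 7-day total (column name is an assumption).
        F.sum(F.when(F.col("sales_date") >= F.date_sub(F.current_date(), 7), F.col("total_sales")))
            .alias("last_7_day_sales"),
        # Year-to-date total (column name is an assumption).
        F.sum(F.when(F.col("sales_date") >= F.trunc(F.current_date(), "year"), F.col("total_sales")))
            .alias("ytd_sales"),
    )
)

# Overwrite so every nightly run recomputes from the current Type 1 source,
# picking up any manual corrections made to total_sales.
summary.write.mode("overwrite").saveAsTable("store_sales_summary")
```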

Discussion
hammer_1234_h (Option: A)

The answer should be A. It is the safest way to generate accurate reports.

alexvno

Incorrect. Batch processing with overwrite will give partial results.

Def21

This is confusing: "overwrite the store_sales_summary table with each update" sounds like it is only doing updates, not inserting possible new stores.

Somesh512 (Option: A)

I would go with Option A. Because there is manual auditing, values can change, and since the source uses Type 1, the original data is replaced.

Luv4data (Option: A)

The answer is A. Note that the target table has columns that store quarter-to-date, previous-day sales, etc., which will result in daily updates, i.e. a large volume of records will be updated; hence it is better to overwrite than to update a large volume of records.

alexvno (Option: C)

Batch processing, so you need to update and insert: C.

Enduresoul (Option: C)

Answer C is correct. Answer E would do the job too, but the table schema and the question indicate that only one update per day is needed. Therefore a Structured Streaming job is far too expensive to achieve the outcome.

Gulenur_GS

You are absolutely right!

vikram12apr (Option: C)

Not A, because overwriting will only provide the daily data, not its history. Not B, because it will not fix the issue of incorrect sales amounts. Since this data fits batch processing, neither D nor E. C will only upsert the changes while making sure we update the records based on sales_date & store_id.
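For comparison, a minimal sketch of the MERGE/upsert approach (Option C) discussed here, using the Delta Lake Python API. It assumes store_sales_summary is a Delta table keyed by store_id and that the update DataFrame already matches the target schema; the aggregate shown is a placeholder for the real summary logic:

```python
# A sketch of the upsert approach, assuming store_sales_summary is a Delta
# table keyed by store_id and that `updates` matches its schema.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical recomputed aggregates per store (real aggregate logic omitted).
updates = (
    spark.table("daily_store_sales")
    .groupBy("store_id")
    .agg(F.sum("total_sales").alias("total_sales"))
)

(
    DeltaTable.forName(spark, "store_sales_summary").alias("t")
    .merge(updates.alias("s"), "t.store_id = s.store_id")
    .whenMatchedUpdateAll()     # update existing store rows
    .whenNotMatchedInsertAll()  # insert rows for new stores
    .execute()
)
```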

divingbell17 (Option: A)

The question is unclear on whether the aggregated table needs to support a rolling history. Note that the aggregated table does not have a date column to distinguish which date the summary is generated for, so one could assume the table is maintained only as a current snapshot. Assuming the above, A would be the safest option, as all stores and aggregates would need to be refreshed nightly.

dmov (Option: A)

A is correct because it's a static table that is written nightly through a batch job. The summary table does not maintain history, so an upsert results in extra, unnecessary records. Overwrite it nightly with updated aggregates for the required time periods.

Def21

"Safest" probably includes having Delta table. And history is maintained anyway.

sturcu (Option: E)

I would say that it is E. If the daily_store_sales table is implemented as a Type 1 table, values are overwritten and we do not keep the history. So we would need to create a streaming job from the Change Data Feed and apply those changes to the aggregated table.
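For reference, a minimal sketch of the CDF-driven approach (Option E) described here. It assumes Change Data Feed is already enabled on daily_store_sales, and the merge logic inside foreachBatch is left as a hypothetical placeholder:

```python
# A sketch of streaming the Change Data Feed of daily_store_sales, assuming
# delta.enableChangeDataFeed = true is set on that table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

changes = (
    spark.readStream.format("delta")
    .option("readChangeFeed", "true")
    .table("daily_store_sales")
)

def upsert_summary(batch_df, batch_id):
    # Placeholder: recompute/merge the affected store aggregates here.
    pass

(
    changes.writeStream
    .foreachBatch(upsert_summary)
    .option("checkpointLocation", "/tmp/checkpoints/store_sales_summary")  # example path
    .trigger(availableNow=True)  # process available changes, then stop
    .start()
)
```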

sturcu

Manual data auditing implies we do not know when a change is made, hence we do not know when to schedule the "batch update" for the aggregated table.

Ati1362 (Option: C)

I will go with C: upsert.

MDWPartners (Option: C)

A is not correct because the table is daily. If you overwrite, you delete all history. You need to insert/update to keep history.

ThoBustos (Option: A)

Not sure if that's right, but I would go for A. What do you think? Type 1: data is overwritten. Type 2: history is maintained, new data is inserted as new rows. Type 3: stores two versions per record, a previous and a current value.
A. Batch + overwrite -> matches Type 1 requirements. YES
B. Batch + append new rows -> would be for Type 2. NO
C. Batch + upsert -> data is not being overwritten (which is required for Type 1). NO
D. readStream + upsert -> data is not being overwritten (which is required for Type 1). NO
E. Change Data Feed to update -> problem is manual edits + not overwriting (required for Type 1). NO
I have doubts around "which approach is the safest". Maybe because, due to some manual changes, it is hard to track changes or do an upsert, overwriting is safer to make sure the stats are right.

Rinscy (Option: E)

E, definitely, because it says that the total_sales column may be changed by manual auditing rather than by a job, so streaming with CDF is the only option here!

spaceexplorer (Option: E)

It should be E, as Structured Streaming has built-in fault tolerance.

Rinscy (Option: A)

It says Type 1, so A is the correct answer!

Syd (Option: A)

Correct answer: A. In Type 1, data is overwritten. https://streamsets.com/blog/slowly-changing-dimensions-vs-change-data-capture/#:~:text=In%20Type%201%2C%20any%20new,change%20to%20maintain%20a%20history.