Exam: Certified Data Engineer Associate
Question 77

A data engineer and data analyst are working together on a data pipeline. The data engineer is working on the raw, bronze, and silver layers of the pipeline using Python, and the data analyst is working on the gold layer of the pipeline using SQL. The raw source of the pipeline is a streaming input. They now want to migrate their pipeline to use Delta Live Tables.

Which of the following changes will need to be made to the pipeline when migrating to Delta Live Tables?

A. None of these changes will need to be made
B. The pipeline will need to stop using the medallion-based multi-hop architecture
C. The pipeline will need to be written entirely in SQL
D. The pipeline will need to use a batch source in place of a streaming source
E. The pipeline will need to be written entirely in Python

    Correct Answer: A

    Delta Live Tables supports both Python and SQL, so there is no need to rewrite the pipeline entirely in one language. The medallion-based multi-hop architecture is also supported in Delta Live Tables, so there is no need to abandon it. And Delta Live Tables can handle streaming sources, so there is no requirement to switch to batch sources. Therefore, none of the listed changes needs to be made when migrating the pipeline to Delta Live Tables.
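
For illustration, a minimal sketch of what the engineer's side of such a pipeline can look like in Python, assuming hypothetical table names and a hypothetical source path; the analyst's gold layer would sit in a separate SQL notebook attached to the same pipeline:

```python
# Minimal sketch; table names and the source path are hypothetical.
# `spark` is provided by the DLT runtime in a pipeline notebook.
import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Raw streaming ingest (bronze)")
def bronze_events():
    # Auto Loader keeps the source streaming; no switch to batch is needed
    return (
        spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/raw/events")  # hypothetical path
    )

@dlt.table(comment="Cleaned events (silver)")
def silver_events():
    # dlt.read_stream() wires the dependency on the bronze table
    return dlt.read_stream("bronze_events").where(col("event_type").isNotNull())
```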

Discussion
AndreFR (Option: A)

B - DLT supports the medallion architecture (see the example in https://docs.databricks.com/en/delta-live-tables/transform.html#combine-streaming-tables-and-materialized-views-in-a-single-pipeline).
C - DLT can mix Python and SQL using multiple notebooks (according to https://docs.databricks.com/en/delta-live-tables/tutorial-python.html, you cannot mix languages within a Delta Live Tables source code file, but you can use multiple notebooks or files with different languages in a pipeline).
D - DLT manages streaming sources using streaming tables (e.g. https://docs.databricks.com/en/delta-live-tables/load.html#load-data-from-a-message-bus).
E - DLT supports Python and SQL (https://docs.databricks.com/en/delta-live-tables/tutorial-python.html and https://docs.databricks.com/en/delta-live-tables/tutorial-sql.html).
The correct answer is A, by elimination.
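
To illustrate point D, a minimal sketch of a DLT streaming table reading from a message bus, assuming a hypothetical Kafka broker and topic, so the streaming source can be kept as-is after the migration:

```python
# Sketch only; broker address and topic name are hypothetical.
import dlt

@dlt.table(comment="Streaming ingest from Kafka (bronze)")
def kafka_bronze():
    return (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
        .option("subscribe", "events")                      # hypothetical topic
        .load()
    )
```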

meow_akk (Option: A)

I think it's A.

hussamAlHunaiti (Option: D)

I had the exam today and options A and B didn't exist; the correct answer was D.

Arunava05

Cleared the exam today. Options A and B were not available in the exam; there was a different option that was correct.

Huroye (Option: A)

The correct answer is A. DLT needs a notebook where you specify the processing.

vigaro (Option: D)

"None" is never the solution.

jaromarg (Option: D)

D: Delta Live Tables is primarily designed to work with batch processing rather than streaming. This means that when migrating a pipeline to Delta Live Tables, any streaming sources used in the original pipeline will need to be replaced with batch sources. In the scenario described, where the raw source of the pipeline is a streaming input, the data engineer and data analyst will need to modify their pipeline to read data from a batch source instead. This could involve changing the way data is ingested and processed to align with batch processing paradigms rather than streaming.

Additionally, Delta Live Tables enables the integration of both SQL and Python code within a pipeline, so there's no strict requirement to write the pipeline entirely in SQL or Python. Both the data engineer's Python code for the raw, bronze, and silver layers and the data analyst's SQL code for the gold layer can still be used within the Delta Live Tables environment.

Overall, the key change needed when migrating to Delta Live Tables in this scenario is transitioning from a streaming input source to a batch source to align with the batch processing nature of Delta Live Tables.

jaromarg

Yes, it must be A. Language support: DLT allows the use of both SQL and Python, so you can integrate the existing Python and SQL code within the DLT framework.
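
To illustrate, a minimal sketch of the analyst's gold layer expressed in Python against a hypothetical silver_events table; the SQL equivalent, which could live in a separate notebook of the same pipeline, is shown in the comment:

```python
# Sketch only; table and column names are hypothetical. The analyst could
# keep this layer in SQL in a separate notebook of the same pipeline, e.g.:
#
#   CREATE OR REFRESH LIVE TABLE gold_daily_counts AS
#   SELECT date, count(*) AS events FROM LIVE.silver_events GROUP BY date;
import dlt
from pyspark.sql.functions import count

@dlt.table(comment="Daily event counts (gold)")
def gold_daily_counts():
    # dlt.read() performs a complete (batch-style) read of the silver table
    return dlt.read("silver_events").groupBy("date").agg(count("*").alias("events"))
```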

nedlo (Option: A)

It should be A. The medallion architecture can be used in a DLT pipeline (https://www.databricks.com/glossary/medallion-architecture): "Databricks provides tools like Delta Live Tables (DLT) that allow users to instantly build data pipelines with Bronze, Silver and Gold tables from just a few lines of code."

hsks (Option: A)

Answer should be A.

kbaba101 (Option: A)

In my opinion, this should be A. Assuming they were working in the same notebook and weren't declaring the STREAMING or LIVE keywords during development, they would probably need to do so before adding the code to the DLT workflow, and that change is not among the options (a sketch of this is below).
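
A minimal sketch of that point, with hypothetical table and checkpoint names, contrasting the hand-managed stream write with the declarative DLT version:

```python
# Sketch only; names are hypothetical.
import dlt

# Before migration (plain structured streaming; self-managed sink):
#
#   (spark.readStream.table("bronze_events")
#         .writeStream
#         .option("checkpointLocation", "/chk/silver")  # hypothetical
#         .toTable("silver_events"))

# After migration (declarative; the streaming semantics come from using
# dlt.read_stream instead of a direct table read, and DLT owns the
# checkpoint and the target table):
@dlt.table(name="silver_events")
def silver_events():
    return dlt.read_stream("bronze_events")
```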

benni_ale (Option: A)

A is correct

kz_data (Option: A)

I think the answer is A

mokrani (Option: A)

Option A: they would have to adapt their notebooks' code to declare the DLT pipeline. However, that change is not proposed in the answers, so I think it might be A.