Exam: Certified Data Engineer Associate
Question 70

A data engineer has configured a Structured Streaming job to read from a table, manipulate the data, and then perform a streaming write into a new table.

The code block used by the data engineer is below:

    [code block not shown]

If the data engineer only wants the query to process all of the available data in as many batches as required, which of the following lines of code should the data engineer use to fill in the blank?

    Correct Answer: B

    In Structured Streaming, to process all available data in as many batches as required, the data engineer should use the trigger method with availableNow set to True. The call trigger(availableNow=True) processes everything available in the source table when the query starts, in as many micro-batches as required, and then terminates. This is useful when existing data needs to be processed without waiting for new data to arrive.
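    For illustration, here is a minimal sketch of what the completed job could look like once the blank is filled in. The table names, transformation, and checkpoint path are assumptions, since the question's code block is not reproduced here:

        from pyspark.sql import functions as F

        (spark.readStream
            .table("source_table")                                    # hypothetical source table
            .withColumn("total", F.col("price") * F.col("quantity"))  # example transformation
            .writeStream
            .option("checkpointLocation", "/tmp/checkpoints/new_table")  # placeholder path
            .trigger(availableNow=True)  # process all available data in as many batches as required, then stop
            .toTable("new_table"))       # hypothetical target table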

Discussion
kbaba101 (Option: B)

B. From the PySpark docs: availableNow (bool, optional): if set to True, sets a trigger that processes all available data in multiple batches and then terminates the query. Only one trigger can be set.

meow_akk (Option: B)

Sorry, the answer is B. For batch-style processing we use availableNow: https://stackoverflow.com/questions/71061809/trigger-availablenow-for-delta-source-streaming-queries-in-pyspark-databricks

fifirifi (Option: B)

Correct answer: B. Explanation: in Structured Streaming, if a data engineer wants to process all of the available data in as many batches as required, without an explicit trigger interval, they can use trigger(availableNow=True). The availableNow option specifies that the query should process all of the data that is available at the moment and not wait for more data to arrive.
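For contrast, a quick sketch of the trigger modes mentioned in this thread (illustrative only; assume df is a streaming DataFrame and the writes are otherwise configured):

    # Answer B: process everything currently available in multiple batches, then terminate
    df.writeStream.trigger(availableNow=True)

    # The premise of the Option D comment below: fixed-interval micro-batches; the query keeps running
    df.writeStream.trigger(processingTime="5 seconds")

    # Older single-batch variant, superseded by availableNow in recent Spark releases
    df.writeStream.trigger(once=True)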

55f31c8 (Option: B)

https://spark.apache.org/docs/latest/api/python/reference/pyspark.ss/api/pyspark.sql.streaming.DataStreamWriter.trigger.html

benni_ale (Option: B)

B is ok.

AndreFR (Option: B)

It's the only answer with correct syntax.

meow_akk (Option: D)

Correct answer is D:

    %python
    (spark.readStream.format("delta").load("<delta_table_path>")
        .writeStream
        .format("delta")
        .trigger(processingTime='5 seconds')  # added line of code that defines the trigger processing time
        .outputMode("append")
        .option("checkpointLocation", "<checkpoint_path>")
        .options(**writeConfig)
        .start())

https://kb.databricks.com/streaming/optimize-streaming-transactions-with-trigger