Which of the following statements describes streaming with Spark as a model deployment strategy?
Streaming with Spark as a model deployment strategy processes data in small, incremental batches (micro-batches) as it arrives, updating results continuously in near real-time. Each round of processing is initiated by a trigger, which can be a time interval or another condition. Therefore, the correct description is the inference of incrementally processed records as soon as a trigger is hit.
The correct answer is D: the inference of incrementally processed records as soon as a trigger is hit. In this context, a “trigger” refers to the condition that initiates the processing of the next set of data. This could be a time interval (e.g., process new data every second), a data size (e.g., process every 1,000 records), or another custom condition.
Spark Structured Streaming enables continuous processing of streaming data: the data is processed incrementally and results are updated in near real-time. Processing is typically kicked off at regular intervals, known as triggers.
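The trigger-driven micro-batch pattern described above can be sketched in plain Python, with no Spark dependency. This is a simulation, not Spark's actual API: the hypothetical `score` function stands in for model inference, and a count-based trigger stands in for Spark's time-interval trigger, which fires on wall-clock intervals rather than record counts.

```python
def score(record):
    # Stand-in for model inference; a real deployment would call a trained model.
    return record * 2


def run_stream(records, trigger_every=3):
    """Accumulate arriving records and run inference on each micro-batch
    once `trigger_every` records have arrived (a count-based stand-in for
    Spark's time-interval trigger)."""
    batch, results = [], []
    for record in records:
        batch.append(record)
        if len(batch) == trigger_every:  # trigger fires
            # Inference runs only on the newly arrived (incremental) records.
            results.append([score(r) for r in batch])
            batch = []
    if batch:  # score any final partial micro-batch
        results.append([score(r) for r in batch])
    return results


print(run_stream([1, 2, 3, 4, 5, 6, 7]))
# [[2, 4, 6], [8, 10, 12], [14]]
```

The key point the simulation illustrates: records are not scored one at a time or all at once, but in small increments, each initiated by the trigger firing.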
Incrementally processed records with a Spark job: a Spark job can initiate a one-off batch of processing, but in streaming scenarios continuous inference is driven by triggers, which is why this option does not describe the streaming strategy.