Which of the following will cause a Spark job to fail?
A failed driver node will cause a Spark job to fail: the driver orchestrates the entire execution, scheduling tasks on executors and tracking their progress, so the job cannot continue without it. By contrast, a worker-node failure (option D) is handled by Spark's fault-tolerance mechanisms, which retry the lost tasks on other healthy nodes. The remaining options (A, B, and C) describe conditions that degrade performance or call for resource adjustments, but they do not by themselves cause a Spark job to fail.
The correct answer is E.
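The worker-failure tolerance described above is configurable. A minimal sketch of the relevant settings in `spark-defaults.conf` (values here are illustrative; `spark.task.maxFailures` defaults to 4):

```properties
# spark-defaults.conf (illustrative values)

# A task may fail up to this many times before Spark aborts the whole
# job; retries can be scheduled on other healthy executors, which is
# why a single worker-node failure does not kill the application.
spark.task.maxFailures        4

# In standalone cluster mode only: ask the master to restart the driver
# if it exits with a non-zero status (equivalent to spark-submit
# --supervise). Without some form of driver recovery, a driver failure
# ends the application.
spark.driver.supervise        true
```

Note that driver recovery depends on the cluster manager (for example, standalone mode uses `--supervise`), whereas task retry on executor failure is built into Spark's scheduler.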