The most cost-effective method would likely be option B, leveraging the Kubernetes Cluster Autoscaler to automatically add and remove nodes as they're needed.
Here's why:
Burst processing workloads, like the one described, benefit from the elasticity of a cloud-based Kubernetes cluster. The Cluster Autoscaler adds nodes when pods can't be scheduled because the existing nodes lack capacity (e.g., Monday mornings when the batch jobs kick off), and removes nodes once they sit underutilized after the jobs complete. Because you only pay for nodes while they exist, this avoids keeping a peak-sized cluster provisioned all week and keeps costs down.
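As a rough sketch of how this plays out in practice, the snippet below uses the official Kubernetes Python client to submit a batch Job whose resource requests exceed the cluster's spare capacity (the job name, image, parallelism, and resource sizes here are made up for illustration). The resulting Pending pods are what the Cluster Autoscaler reacts to: it provisions nodes to run them, then scales those nodes back down once the Job finishes and they go idle.

```python
from kubernetes import client, config

# Illustrative sketch only: hypothetical job name, image, and resource sizes.
# Pods that can't fit on existing nodes stay Pending, which is the signal the
# Cluster Autoscaler uses to add nodes; idle nodes are removed after the run.

config.load_kube_config()  # use config.load_incluster_config() when running in-cluster

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="monday-batch-run"),
    spec=client.V1JobSpec(
        parallelism=10,                   # fan the work out across pods; drives the scale-up
        completions=10,
        ttl_seconds_after_finished=300,   # clean up finished pods so nodes can drain and scale down
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="batch-worker",
                        image="example.com/batch-worker:latest",  # hypothetical image
                        resources=client.V1ResourceRequirements(
                            requests={"cpu": "4", "memory": "8Gi"},
                        ),
                    )
                ],
            )
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```

On most managed offerings the node group's minimum size can even be set to zero, so for a purely weekly workload like this you'd pay for worker nodes only during the Monday run itself.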