Spark driver pod getting killed with 'OOMKilled' status


In brief, the executor memory consists of three parts:

  • Reserved memory (300MB)
  • User memory ((total - 300MB) * 0.4), used for user data-processing logic.
  • Spark memory ((total - 300MB) * 0.6, where 0.6 is spark.memory.fraction), used for caching and shuffle.
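The split above can be sketched as a small calculation (a sketch only; the function name is hypothetical, and 0.6 is the default spark.memory.fraction):

```python
def executor_memory_split(heap_mb, memory_fraction=0.6, reserved_mb=300):
    """Approximate how Spark divides the executor heap (illustrative only)."""
    usable = heap_mb - reserved_mb          # everything above reserved memory
    spark = usable * memory_fraction        # cache + shuffle (spark.memory.fraction)
    user = usable * (1 - memory_fraction)   # user data-processing logic
    return {"reserved": reserved_mb, "user": user, "spark": spark}

# e.g. a 4 GiB executor heap
split = executor_memory_split(4096)
# reserved = 300, user = 1518.4, spark = 2277.6
```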

Besides this, the pod also gets max(executor memory * 0.1, 384MB) of extra memory for non-JVM usage in K8s (0.1 is the default spark.kubernetes.memoryOverheadFactor).
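To illustrate the overhead rule, the total pod memory can be estimated like this (a sketch; the function name is hypothetical, 0.1 and 384MB are the defaults just mentioned):

```python
def pod_memory_mb(executor_mb, overhead_factor=0.1, min_overhead_mb=384):
    """Estimate total pod memory: executor memory plus non-JVM overhead."""
    overhead = max(executor_mb * overhead_factor, min_overhead_mb)
    return executor_mb + overhead

# A 2 GiB executor hits the 384MB floor; a 4 GiB executor gets 10%.
small = pod_memory_mb(2048)   # 2048 + 384  = 2432
large = pod_memory_mb(4096)   # 4096 + 409.6 = 4505.6
```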

Raising the executor's pod memory limit by increasing the memory overhead in K8s should fix the OOM.

You can also decrease spark.memory.fraction to allocate more RAM to user memory.
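For example, lowering spark.memory.fraction from the default 0.6 to 0.5 shifts part of the heap from Spark memory to user memory (the numbers below are illustrative, using the formula from the list above):

```python
def user_memory_mb(heap_mb, memory_fraction, reserved_mb=300):
    """User memory = (heap - reserved) * (1 - spark.memory.fraction)."""
    return (heap_mb - reserved_mb) * (1 - memory_fraction)

default_user = user_memory_mb(4096, 0.6)  # 1518.4 MB
lowered_user = user_memory_mb(4096, 0.5)  # 1898.0 MB
```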