
OpenJDK Client VM - Cannot allocate memory


Make sure you have swap space on your machine:

ubuntu@VM-ubuntu:~$ free -m
             total       used       free     shared    buffers     cached
Mem:           994        928         65          0          1         48
-/+ buffers/cache:        878        115
Swap:         4095       1086       3009

Notice the Swap line.

I just encountered this problem on an Elastic Compute instance; it turned out that swap space is not mounted by default.
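If the instance has no swap, a minimal sketch for adding a swap file looks like the following (the path and the 4 GB size are assumptions, not from the original answer; adjust them to your instance):

sudo fallocate -l 4G /swapfile    # allocate a 4 GB file to back the swap area
sudo chmod 600 /swapfile          # restrict permissions as swapon requires
sudo mkswap /swapfile             # format the file as swap space
sudo swapon /swapfile             # enable it immediately
free -m                           # the Swap line should now show the new capacity

To keep the swap file across reboots, add an entry for it to /etc/fstab.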


You can try to increase the JVM's memory allocation by passing these runtime parameters.

For example:

java -Xms1024M -Xmx2048M -jar application.jar
  • -Xmx sets the maximum heap size
  • -Xms sets the initial (minimum) heap size
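If the error comes from Hadoop processes rather than a standalone jar, the heap is usually set through hadoop-env.sh instead of the java command line. A hedged sketch, with example values only:

export HADOOP_HEAPSIZE=1024            # heap for Hadoop daemons, in MB
export HADOOP_CLIENT_OPTS="-Xmx512m"   # heap for client commands such as 'hadoop fs'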


The JVM parameters you are using may exceed the memory allowed for a YARN container.

Check whether the properties:

  • yarn.nodemanager.resource.memory-mb
  • yarn.scheduler.minimum-allocation-mb
  • yarn.scheduler.maximum-allocation-mb

in yarn-site.xml match the desired values.
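As an illustration only, a yarn-site.xml fragment setting these properties might look like this (the values are placeholders; size them to the memory actually available on each node):

<!-- Example values only; tune to the physical memory on each NodeManager -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>256</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
</property>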

For more detail on memory configuration, see:

HortonWorks memory reference

Similar problem

Note: This is for Hadoop 2.x; if you are running Hadoop 1.x, check the task attributes instead.