
Container is running beyond virtual memory limits


I got almost the same error while running a Spark application on a YARN cluster:

"Container [pid=791,containerID=container_1499942756442_0001_02_000001] is running beyond virtual memory limits. Current usage: 135.4 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container."

I resolved it by disabling the virtual memory check in yarn-site.xml. The physical memory usage (135.4 MB of 1 GB) was nowhere near its limit; only the virtual memory check was tripping.

<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>

This one setting was enough in my case.
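
If you would rather keep the check enabled, a less drastic option should be to raise the virtual-to-physical ratio instead: the 2.1 GB limit in the error message is just 1 GB of physical memory times the default ratio of 2.1. A sketch for yarn-site.xml (the value 4 is only an example I have not tuned, pick what fits your workload):

<!-- Keep the vmem check, but allow more virtual memory per MB of
     physical memory (the default ratio is 2.1). Value is illustrative. -->
<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
</property>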


I referred to the site below: http://crazyadmins.com/tag/tuning-yarn-to-get-maximum-performance/

There I learned that I could also change the memory allocation for MapReduce jobs.

I changed mapred-site.xml as follows:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>2000</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>2000</value>
    </property>
    <!-- java.opts takes JVM flags, so the heap size must be written as -Xmx... -->
    <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx1600m</value>
    </property>
    <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx1600m</value>
    </property>
</configuration>
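
Note that java.opts sets the JVM heap, so it should stay comfortably below the container size in memory.mb (roughly 80%, hence 1600 MB heap inside a 2000 MB container) to leave room for non-heap usage. These per-task values also have to fit within what the NodeManager and scheduler allow. A hedged sketch of the related yarn-site.xml limits (the values here are only illustrative for a small node, adjust them to your hardware):

<!-- Illustrative values only: total memory YARN may hand out on this node,
     and the largest/smallest single container the scheduler will grant. -->
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>
</property>
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>4096</value>
</property>
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
</property>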