Prevent R from using virtual memory on unix/linux?
When you run `system("ulimit")`, that command executes in a child process, and the parent R process does not inherit the limit from the child. (This is analogous to doing `system("cd dir")` or `system("export ENV_VAR=foo")`: the change evaporates when the child exits.)
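A quick shell demonstration of the same effect, where the subshell stands in for the child process that `system()` spawns (the 1048576 KiB limit is just an arbitrary example value):

```shell
# Lower the virtual-memory limit inside a subshell (a child process),
# then check the limit in the parent: the parent is unchanged.
child_limit=$( (ulimit -v 1048576; ulimit -v) )
parent_limit=$(ulimit -v)
echo "child:  $child_limit"    # 1048576
echo "parent: $parent_limit"   # whatever it was before (often "unlimited")
```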
Setting the limit in the shell from which you launch R is the correct approach. The limit appears not to work in the parallel case most likely because it is a per-process limit, not a system-wide one, so each worker process gets its own full allowance.
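In contrast, a limit set in the launching shell is inherited by every process started from it, including an R session (the 4 GiB cap below is just an illustrative value):

```shell
# Cap virtual memory for this shell and everything it launches (KiB units).
ulimit -v 4194304
# Any child now inherits the limit -- e.g. an R session started here:
#   R --no-save
# A child shell reports the inherited value:
sh -c 'ulimit -v'    # prints 4194304
```

Note that once lowered, the limit cannot be raised again in the same shell without root privileges.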
On Linux you can configure strict(er) overcommit accounting, which tries to prevent the kernel from handing out an `mmap` request that cannot be backed by physical memory (plus swap).
This is done by tuning the sysctl parameters `vm.overcommit_memory` and `vm.overcommit_ratio`. (Google these, or see the kernel's overcommit-accounting documentation.)
This can be an effective way to prevent thrashing situations. But the tradeoff is that you lose the benefit that overcommit provides when things are well-behaved (cramming more/larger processes into memory).
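For example, switching to strict accounting (mode 2) with a commit limit of swap plus 80% of RAM might look like this; the particular ratio is just an illustration, and the commands require root:

```shell
# Strict overcommit: the kernel refuses to commit more address space than
# swap + (overcommit_ratio% of RAM). Requires root.
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=80

# To persist across reboots, add to /etc/sysctl.conf:
#   vm.overcommit_memory = 2
#   vm.overcommit_ratio = 80
```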