Provide CPU time and memory to subprocess
You can't limit the resource utilization of an arbitrary `docker exec` process.
Docker uses a client/server model, so when you run `docker exec` it's just making a request to the Docker daemon. If you try to use setrlimit(2) to limit the subprocess's memory, it only limits the `docker exec` client process itself; that process makes a request to the Docker daemon, which in turn launches a new process inside the container's namespaces. None of these processes are parent and child of each other, so none of them beyond the original `docker exec` inherit the resource limits.
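To see why this matters, note that rlimits only propagate across a direct fork/exec chain. Here's a minimal sketch (pure Python, no Docker involved) showing that a limit installed with `preexec_fn` constrains only the immediate child process; a process you merely send a request to, like the Docker daemon, is never a descendant and is untouched:

```python
import resource
import subprocess
import sys

def limit_memory():
    # Runs in the child after fork(), before exec() (Linux/Unix only).
    # Cap the child's address space at 512 MiB.
    cap = 512 * 1024 * 1024
    resource.setrlimit(resource.RLIMIT_AS, (cap, cap))

# The child inherits the limit because it is a direct fork/exec descendant.
# It tries to allocate 1 GiB, which the rlimit refuses.
proc = subprocess.run(
    [sys.executable, '-c', 'bytearray(1024 ** 3)'],
    preexec_fn=limit_memory,
    capture_output=True,
)
print(proc.returncode)  # non-zero: the allocation failed under the limit
```

A `docker exec` subprocess limited this way would still "succeed": the limit binds the Docker CLI, not the process the daemon starts in the container on its behalf.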
If instead you launch a new container, you can use Docker's resource limits on the new container. These don't limit the absolute amount of CPU time, but you probably want to limit the runtime of the launched process in any case.
You should generally avoid using the `subprocess` module to invoke `docker` commands. Constructing shell commands and parsing their output is error-prone, and if your code isn't careful, a shell-injection attack against the `docker` command can be used to root the host. Use something like the Docker SDK for Python instead.
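The injection risk can be demonstrated without Docker at all. This sketch uses `echo` as a harmless stand-in for the `docker` binary; the untrusted value is hypothetical:

```python
import subprocess

# Hypothetical untrusted input, e.g. a value a user supplied.
user_input = "hello; echo INJECTED"

# Dangerous: interpolating into a shell string lets the ';' start a
# second command, which the shell happily runs.
unsafe = subprocess.run(
    f"echo {user_input}", shell=True, capture_output=True, text=True
)

# Safer: an argument list is passed to the program verbatim, with no
# shell parsing; the whole string is a single argument.
safe = subprocess.run(
    ["echo", user_input], capture_output=True, text=True
)

print(unsafe.stdout)  # two lines: the injected command ran
print(safe.stdout)    # one line: the literal input string
```

Swap `echo` for `docker`, and the injected command runs with whatever privileges your process has over the Docker socket, which is effectively root on the host. The Docker SDK for Python avoids the problem entirely because it speaks the daemon's API directly and never involves a shell.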
So, to launch a new container with a fixed memory limit and a bounded execution time, you could do something like:
```python
import docker
import requests

client = docker.from_env()

container = client.containers.run(
    image='some/image:tag',
    command=['the', 'command', 'to', 'run'],
    detach=True,
    mem_limit=10485760  # 10 MiB
)
try:
    container.wait(timeout=30)  # seconds
except requests.exceptions.ReadTimeout:
    # container ran over its time allocation
    container.kill()
    container.wait()
print(container.logs())
container.remove()
```