Launch a shell command from within a Python script, wait for it to terminate, and return to the script
Use subprocess. The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes.
http://docs.python.org/library/subprocess.html
Usage:

```python
import subprocess

process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
process.wait()
print(process.returncode)
```
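A concrete, runnable version of that pattern, using communicate() instead of wait() so the stdout pipe is drained while waiting ('echo hello' is just a stand-in for your own command):

```python
import subprocess

# Spawn the command, wait for it to finish, and read its stdout.
# communicate() both waits and drains the pipe, avoiding deadlock.
process = subprocess.Popen('echo hello', shell=True, stdout=subprocess.PIPE)
out, _ = process.communicate()
print(process.returncode)  # 0 on success
```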
You can use subprocess.Popen. There are a few ways to do it:
```python
import subprocess

cmd = ['/run/myscript', '--arg', 'value']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
for line in p.stdout:
    print(line)
p.wait()
print(p.returncode)
```
Or, if you don't care what the external program actually does:
```python
cmd = ['/run/myscript', '--arg', 'value']
subprocess.Popen(cmd).wait()
```
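If all you need is the exit status, subprocess.call wraps this Popen-plus-wait pattern in a single function. A small sketch, using the standard 'true' utility as a stand-in command:

```python
import subprocess

# subprocess.call spawns the process, waits for it to exit, and
# returns the exit code -- the same as Popen(cmd).wait() in one step.
rc = subprocess.call(['true'])  # 'true' always exits with status 0
print(rc)
```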
The subprocess module has come a long way since 2008. In particular, check_call and check_output make simple subprocess tasks even easier. The check_* family of functions is nice in that they raise an exception if something goes wrong.
```python
import os
import subprocess

files = os.listdir('.')
for f in files:
    subprocess.check_call(['myscript', f])
```
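To illustrate that exception behaviour, here is a small sketch; 'false', which always exits non-zero, stands in for a failing script:

```python
import subprocess

try:
    subprocess.check_call(['false'])  # exits with a non-zero status
except subprocess.CalledProcessError as e:
    # check_call raises CalledProcessError when the command fails,
    # and the exception carries the exit code.
    print('command failed with code', e.returncode)
```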
Any output generated by myscript will display as though your process produced the output (technically, myscript and your Python script share the same stdout). There are a couple of ways to avoid this.
```python
check_call(['myscript', f], stdout=subprocess.PIPE)
```
The stdout will be suppressed (beware if myscript produces more than 4k of output). stderr will still be shown unless you add the option stderr=subprocess.PIPE.

```python
check_output(['myscript', f])
```
check_output returns the stdout as a string, so it isn't shown. stderr is still shown unless you add the option stderr=subprocess.STDOUT.
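A minimal sketch of capturing output this way, with 'echo' standing in for myscript:

```python
import subprocess

# check_output waits for the command and returns its stdout as bytes;
# with stderr=subprocess.STDOUT, error output is merged into the result
# instead of appearing on the terminal.
out = subprocess.check_output(['echo', 'hello'], stderr=subprocess.STDOUT)
print(out)
```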