How to write to stdout AND to log file simultaneously with Popen?
You can use a pipe to read the data from the program's stdout and write it to all the places you want:
    import sys
    import subprocess

    logfile = open('logfile', 'w')
    proc = subprocess.Popen(['cat', 'file'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    for line in proc.stdout:
        sys.stdout.write(line)
        logfile.write(line)
    proc.wait()
UPDATE
In Python 3, the universal_newlines parameter controls how pipes are used. If False, pipe reads return bytes objects and may need to be decoded (e.g., line.decode('utf-8')) to get a string. If True, Python does the decoding for you:
Changed in version 3.3: When universal_newlines is True, the class uses the encoding locale.getpreferredencoding(False) instead of locale.getpreferredencoding(). See the io.TextIOWrapper class for more information on this change.
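For example, with universal_newlines=True the loop from above can write str lines directly without any decode() call. This is only a sketch; the python -c child here is a stand-in for whatever command you actually run:

```python
import sys
import subprocess

# universal_newlines=True makes proc.stdout yield decoded str lines,
# so no line.decode() call is needed.
proc = subprocess.Popen([sys.executable, '-c', 'print("hello")'],
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                        universal_newlines=True)
with proc.stdout, open('logfile', 'w') as logfile:
    for line in proc.stdout:
        sys.stdout.write(line)  # line is already a str
        logfile.write(line)
proc.wait()
```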
To emulate:

    subprocess.call("command 2>&1 | tee -a logfile", shell=True)

without invoking the tee command:
    #!/usr/bin/env python2
    from subprocess import Popen, PIPE, STDOUT

    p = Popen("command", stdout=PIPE, stderr=STDOUT, bufsize=1)
    with p.stdout, open('logfile', 'ab') as file:
        for line in iter(p.stdout.readline, b''):
            print line,  # NOTE: the comma prevents duplicate newlines (softspace hack)
            file.write(line)
    p.wait()
To fix possible buffering issues (if the output is delayed), see links in Python: read streaming input from subprocess.communicate().
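If the child happens to be a Python program, one sketch of such a fix is to launch it with -u so its stdout is unbuffered (for other programs, tools like stdbuf -oL serve the same purpose); the inline child below is a placeholder, not part of the answer's code:

```python
import sys
from subprocess import Popen, PIPE, STDOUT

# -u forces the child's stdout to be unbuffered, so readline() in the
# parent returns as soon as each line is printed, not when the child exits.
p = Popen([sys.executable, '-u', '-c', 'print("one"); print("two")'],
          stdout=PIPE, stderr=STDOUT)
with p.stdout:
    lines = list(iter(p.stdout.readline, b''))
p.wait()
```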
Here's the Python 3 version:
    #!/usr/bin/env python3
    import sys
    from subprocess import Popen, PIPE, STDOUT

    with Popen("command", stdout=PIPE, stderr=STDOUT, bufsize=1) as p, \
         open('logfile', 'ab') as file:
        for line in p.stdout:  # b'\n'-separated lines
            sys.stdout.buffer.write(line)  # pass bytes as is
            file.write(line)
Write to terminal byte by byte for interactive applications
This method writes any bytes it gets to stdout immediately, which more closely simulates the behavior of tee, especially for interactive applications.
main.py
    #!/usr/bin/env python3
    import os
    import subprocess
    import sys

    with subprocess.Popen(sys.argv[1:], stdout=subprocess.PIPE, stderr=subprocess.STDOUT) as proc, \
            open('logfile.txt', 'bw') as logfile:
        while True:
            byte = proc.stdout.read(1)
            if byte:
                sys.stdout.buffer.write(byte)
                sys.stdout.flush()
                logfile.write(byte)
                # logfile.flush()
            else:
                break
    exit_status = proc.returncode
sleep.py
    #!/usr/bin/env python3
    import sys
    import time

    for i in range(10):
        print(i)
        sys.stdout.flush()
        time.sleep(1)
First we can do a non-interactive sanity check:
./main.py ./sleep.py
And we see it counting to stdout in real time.
Next, for an interactive test, you can run:
./main.py bash
Then the characters you type appear immediately on the terminal as you type them, which is very important for interactive applications. This is what happens when you run:
bash | tee logfile.txt
Also, if you want the output to show up in the output file immediately, then you can also add a:
logfile.flush()
but tee does not do this, and I'm afraid it would kill performance. You can test this out easily with:
tail -f logfile.txt
Related question: live output from subprocess command
Tested on Ubuntu 18.04, Python 3.6.7.