Capture stdout and stderr into different variables



Ok, it got a bit ugly, but here is a solution:

unset t_std t_err
eval "$( (echo std; echo err >&2) \
        2> >(readarray -t t_err; typeset -p t_err) \
         > >(readarray -t t_std; typeset -p t_std) )"

where (echo std; echo err >&2) needs to be replaced by the actual command. Stdout is saved into the array $t_std line by line, omitting the newlines (the -t), and stderr into $t_err.
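For illustration, here is a self-contained run of the same recipe; the inner (printf ...; echo ... >&2) is a made-up stand-in for an arbitrary command:

```shell
#!/usr/bin/env bash
# Same technique as above, with a two-line stdout and a one-line stderr.
unset t_std t_err
eval "$( (printf 'a\nb\n'; echo oops >&2) \
        2> >(readarray -t t_err; typeset -p t_err) \
         > >(readarray -t t_std; typeset -p t_std) )"
echo "lines=${#t_std[@]} first=${t_std[0]} err=${t_err[0]}"
```

The command substitution reads until both process substitutions close their write ends, so both typeset -p declarations reliably reach the eval.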

If you don't like arrays you can do

unset t_std t_err
eval "$( (echo std; echo err >&2 ) \
        2> >(t_err=$(cat); typeset -p t_err) \
         > >(t_std=$(cat); typeset -p t_std) )"

which pretty much mimics the behavior of var=$(cmd), except for the value of $?, which brings us to the last modification:

unset t_std t_err t_ret
eval "$( (echo std; echo err >&2; exit 2 ) \
        2> >(t_err=$(cat); typeset -p t_err) \
         > >(t_std=$(cat); typeset -p t_std); t_ret=$?; typeset -p t_ret )"

Here $? is preserved into $t_ret.

Tested on Debian wheezy using GNU bash, version 4.2.37(1)-release (i486-pc-linux-gnu).
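A quick self-check of the last variant; the inner subshell is a made-up stand-in for a command that fails with status 2:

```shell
#!/usr/bin/env bash
# t_ret variant from above, with a hypothetical failing command.
unset t_std t_err t_ret
eval "$( (echo out; echo err >&2; exit 2) \
        2> >(t_err=$(cat); typeset -p t_err) \
         > >(t_std=$(cat); typeset -p t_std); t_ret=$?; typeset -p t_ret )"
echo "std=$t_std err=$t_err ret=$t_ret"
```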


This is for catching stdout and stderr in different variables. If you only want to catch stderr, leaving stdout as-is, there is a better and shorter solution.

To sum everything up for the benefit of the reader, here is an

Easy Reusable bash Solution

This version does use subshells and runs without tempfiles. (For a tempfile version which runs without subshells, see my other answer.)

: catch STDOUT STDERR cmd args..
catch()
{
eval "$({
__2="$(
  { __1="$("${@:3}")"; } 2>&1;
  ret=$?;
  printf '%q=%q\n' "$1" "$__1" >&2;
  exit $ret
  )";
ret="$?";
printf '%s=%q\n' "$2" "$__2" >&2;
printf '( exit %q )' "$ret" >&2;
} 2>&1 )";
}

Example use:

dummy()
{
echo "$3" >&2
echo "$2" >&1
return "$1"
}

catch stdout stderr dummy 3 $'\ndiffcult\n data \n\n\n' $'\nother\n difficult \n  data  \n\n'

printf 'ret=%q\n' "$?"
printf 'stdout=%q\n' "$stdout"
printf 'stderr=%q\n' "$stderr"

this prints

ret=3
stdout=$'\ndiffcult\n data '
stderr=$'\nother\n difficult \n  data  '

So it can be used without thinking about it too deeply. Just put catch VAR1 VAR2 in front of any command args.. and you are done.

So if cmd args..; then becomes if catch VAR1 VAR2 cmd args..; then. Really nothing complex.
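As a self-contained demonstration of this pattern, here is catch() (reformatted but logically identical to the definition above) driving an if; probe is a made-up command for the demo:

```shell
#!/usr/bin/env bash
# catch() as defined above.
catch()
{
eval "$({
__2="$(
  { __1="$("${@:3}")"; } 2>&1;
  ret=$?;
  printf '%q=%q\n' "$1" "$__1" >&2;
  exit $ret
  )";
ret="$?";
printf '%s=%q\n' "$2" "$__2" >&2;
printf '( exit %q )' "$ret" >&2;
} 2>&1 )";
}

# probe is a hypothetical command: one line to stdout, one to stderr.
probe() { echo "value=42"; echo "careful" >&2; return 0; }

# probe's exit status drives the if, while its streams land in out/err.
if catch out err probe; then
  echo "ok out=$out err=$err"
else
  echo "probe failed: $err"
fi
```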

Addendum: Use in "strict mode"

catch works identically for me in strict mode. The only caveat is that the example above returns error code 3, which, in strict mode, triggers the ERR trap. Hence, if you run some command under set -e which is expected to return arbitrary error codes (not only 0), you need to catch the return code into some variable with && ret=$? || ret=$?, as shown below:

dummy()
{
echo "$3" >&2
echo "$2" >&1
return "$1"
}

catch stdout stderr dummy 3 $'\ndifficult\n data \n\n\n' $'\nother\n difficult \n  data  \n\n' && ret=$? || ret=$?

printf 'ret=%q\n' "$ret"
printf 'stdout=%q\n' "$stdout"
printf 'stderr=%q\n' "$stderr"
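The && ret=$? || ret=$? idiom also works in isolation, without catch. A minimal sketch (fail3 is a made-up command):

```shell
#!/usr/bin/env bash
set -eu
# Under set -e a bare failing command would abort the script; this form
# captures the status in ret and keeps the script running.
fail3() { return 3; }
fail3 && ret=$? || ret=$?
echo "ret=$ret"
echo "still running"
```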

Discussion

Q: How does it work?

It just wraps ideas from the other answers here into a function, such that it can easily be reused.

catch() basically uses eval to set the two variables. This is similar to https://stackoverflow.com/a/18086548

Consider a call of catch out err dummy 1 2a 3b:

  • let's skip the eval "$({ and the __2="$( for now. I will come back to these later.

  • { __1="$("${@:3}")"; } 2>&1; executes dummy 1 2a 3b and stores its stdout in __1 for later use. So __1 becomes 2a. It also redirects stderr of dummy to stdout, such that the outer capture (which reads stdout) can gather dummy's stderr

  • ret=$?; catches the exit code, which is 1

  • printf '%q=%q\n' "$1" "$__1" >&2; then outputs out=2a to stderr. stderr is used here, as the current stdout already has taken over the role of stderr of the dummy command.

  • exit $ret then forwards the exit code (1) to the next stage.

Now to the outer __2="$( ... )":

  • This catches stdout of the above, which is the stderr of the dummy call, into variable __2. (We could re-use __1 here, but I used __2 to make it less confusing.) So __2 becomes 3b

  • ret="$?"; catches the (returned) return code 1 (from dummy) again

  • printf '%s=%q\n' "$2" "$__2" >&2; then outputs err=3b to stderr. stderr is used again, as it already was used to output the other variable out=2a.

  • printf '( exit %q )' "$ret" >&2; then outputs the code to set the proper return value. I did not find a better way, as assigning it to a variable needs a variable name, which then cannot be used as first or second argument to catch.

Please note that, as an optimization, we could have written those 2 printf as a single one, printf '%s=%q\n( exit %q )' "$2" "$__2" "$ret", as well.

So what do we have so far?

We have following written to stderr:

out=2a
err=3b
( exit 1 )

where out is from $1, 2a is from stdout of dummy, err is from $2, 3b is from stderr of dummy, and the 1 is from the return code from dummy.
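This intermediate stream can be reproduced in isolation; the sketch below runs the two stages but prints the generated text instead of eval'ing it:

```shell
#!/usr/bin/env bash
# dummy here mimics the dummy 1 2a 3b call from the walkthrough.
dummy() { echo 2a; echo 3b >&2; return 1; }

# Rebuild the text that the outer eval would see.
stream="$({
  __2="$(
    { __1="$(dummy)"; } 2>&1
    ret=$?
    printf '%q=%q\n' out "$__1" >&2
    exit $ret
    )"
  ret="$?"
  printf '%s=%q\n' err "$__2" >&2
  printf '( exit %q )' "$ret" >&2
} 2>&1)"
echo "$stream"
```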

Please note that %q in the format of printf takes care of the quoting, such that the shell sees proper (single) arguments when it comes to eval. 2a and 3b are so simple that they are copied literally.
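The quoting can be checked directly: %q output survives an eval round-trip even for hostile strings (the test value below is made up):

```shell
#!/usr/bin/env bash
# A value containing a newline, spaces, quotes and a command substitution
# attempt; %q must quote it so eval reproduces it byte for byte.
v=$'multi\nline; $(date) "quoted"'
eval "w=$(printf '%q' "$v")"
[ "$w" = "$v" ] && echo roundtrip-ok
```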

Now to the outer eval "$({ ... } 2>&1 )";:

This executes all of the above, which outputs the 2 variables and the exit, captures it (hence the 2>&1), and parses it into the current shell using eval.

This way the 2 variables get set and the return code as well.

Q: It uses eval which is evil. So is it safe?

  • As long as printf %q has no bugs, it should be safe. But you always have to be very careful; just think about ShellShock.

Q: Bugs?

  • No obvious bugs are known, except the following:

    • Catching big output needs big memory and CPU, as everything goes into variables and needs to be back-parsed by the shell. So use it wisely.

    • As usual, $(echo $'\n\n\n\n') swallows all trailing linefeeds, not only the last one. This is a POSIX requirement. If you need to get the LFs unharmed, just add some trailing character to the output and remove it afterwards, as in the following recipe (look at the trailing x, which allows reading a softlink that points to a file whose name ends in $'\n'):

          target="$(readlink -e "$file")x"
          target="${target%x}"
    • Shell variables cannot carry the NUL byte ($'\0'). Such bytes are simply ignored if they happen to occur in stdout or stderr.

  • The given command runs in a sub-subshell. So it has no access to $PPID, nor can it alter shell variables. You can catch a shell function, even builtins, but those will not be able to alter shell variables (as everything running within $( .. ) cannot do this). So if you need to run a function in the current shell and catch its stderr/stdout, you need to do this the usual way with tempfiles. (There are ways to do this such that interrupting the shell normally does not leave debris behind, but this is complex and deserves its own answer.)
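Both stream-related caveats are easy to demonstrate (the data values below are made up for the demo):

```shell
#!/usr/bin/env bash
# Caveat 1: command substitution strips ALL trailing linefeeds.
s=$'data\n\n\n'
plain="$(printf '%s' "$s")"               # loses the 3 trailing LFs
kept="$(printf '%sx' "$s")"; kept="${kept%x}"  # trailing-x trick keeps them
echo "plain=${#plain} kept=${#kept}"

# Caveat 2: NUL bytes are silently dropped from variables
# (newer bash prints a warning to stderr, hence the redirect).
{ n="$(printf 'a\0b')"; } 2>/dev/null
echo "n=$n"
```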

Q: Bash version?

  • I think you need Bash 4 or above (due to printf %q)

Q: This still looks so awkward.

  • Right. Another answer here shows how it can be done much more cleanly in ksh. However, I am not used to ksh, so I leave it to others to create a similarly easy-to-reuse recipe for ksh.

Q: Why not use ksh then?

  • Because this is a bash solution.

Q: The script can be improved

  • Of course you can squeeze out some bytes and create a smaller or even more incomprehensible solution. Just go for it ;)

Q: There is a typo. : catch STDOUT STDERR cmd args.. shall read # catch STDOUT STDERR cmd args..

  • Actually this is intended. : shows up in bash -x, while comments are silently swallowed. So you can see where the parser is if you happen to have a typo in the function definition. It's an old debugging trick. But beware: you can easily create some neat side effects within the arguments of :.
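The difference is easy to see with xtrace; the marker text below is made up:

```shell
#!/usr/bin/env bash
# A comment leaves no trace, while the : line shows up in bash -x output.
trace="$(bash -x -c '# invisible comment
: visible marker' 2>&1)"
echo "$trace"
```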

Edit: Added a couple more ; to make it easier to create a single-liner out of catch(). And added a section on how it works.


Technically, named pipes aren't temporary files, and nobody here has mentioned them. They store nothing in the filesystem, and you can delete them as soon as you connect them (so you won't ever see them):

#!/bin/bash -e

foo () {
    echo stdout1
    echo stderr1 >&2
    sleep 1
    echo stdout2
    echo stderr2 >&2
}

rm -f stdout stderr
mkfifo stdout stderr
foo >stdout 2>stderr &             # blocks until reader is connected
exec {fdout}<stdout {fderr}<stderr # unblocks `foo &`
rm stdout stderr                   # filesystem objects are no longer needed
stdout=$(cat <&$fdout)
stderr=$(cat <&$fderr)
echo $stdout
echo $stderr
exec {fdout}<&- {fderr}<&- # free file descriptors, optional

You can have multiple background processes this way and asynchronously collect their stdouts and stderrs at a convenient time, etc.

If you need this for one process only, you may just as well use hardcoded fd numbers like 3 and 4 instead of the {fdout}/{fderr} syntax (which finds a free fd for you).
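A hypothetical single-process variant with hardcoded fds 3 and 4 (assuming those descriptors are otherwise unused; the fifo directory and demo command are made up):

```shell
#!/usr/bin/env bash
dir="$(mktemp -d)"
mkfifo "$dir/out" "$dir/err"
{ echo hello; echo oops >&2; } >"$dir/out" 2>"$dir/err" &
exec 3<"$dir/out" 4<"$dir/err"  # opening the readers unblocks the writer
rm -r "$dir"                    # the filesystem objects can go immediately
stdout=$(cat <&3)
stderr=$(cat <&4)
exec 3<&- 4<&-
echo "stdout=$stdout stderr=$stderr"
```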