Checking Bash exit status of several commands efficiently
You can write a function that launches and tests the command for you. Assume `command1` and `command2` are environment variables that have been set to a command.
```bash
function mytest {
    "$@"
    local status=$?
    if (( status != 0 )); then
        echo "error with $1" >&2
    fi
    return $status
}

mytest "$command1"
mytest "$command2"
```
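For illustration, here is how the function behaves when called directly (the commands here are just examples, not from the question):

```bash
#!/bin/bash
# Definition repeated from above so the example is self-contained.
mytest() {
    "$@"
    local status=$?
    if (( status != 0 )); then
        echo "error with $1" >&2
    fi
    return $status
}

mytest false   # prints "error with false" to stderr; returns 1
mytest true    # prints nothing; returns 0
```

Note that because `"$command1"` is quoted, the variable must hold a single command name; a command with arguments would need unquoted expansion or an array.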
What do you mean by "drop out and echo the error"? If you mean you want the script to terminate as soon as any command fails, then just do
```sh
set -e # DON'T do this. See commentary below.
```
at the start of the script (but note the warning below). Do not bother echoing the error message: let the failing command handle that. In other words, if you do:
```sh
#!/bin/sh
set -e  # Use caution. eg, don't do this
command1
command2
command3
```
and command2 fails, while printing an error message to stderr, then it seems that you have achieved what you want. (Unless I misinterpret what you want!)
As a corollary, any command that you write must behave well: it must report errors to stderr instead of stdout (the sample code in the question prints errors to stdout) and it must exit with a non-zero status when it fails.
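As a sketch of such a well-behaved command (a hypothetical function for illustration, not from the question):

```sh
#!/bin/sh
# copy_config: hypothetical example of a well-behaved command.
# Errors go to stderr (not stdout), and failure is signaled with a
# non-zero exit status, so `set -e` or `|| exit` callers react correctly.
copy_config() {
    if [ ! -r "$1" ]; then
        echo "copy_config: cannot read '$1'" >&2  # error to stderr
        return 1                                  # non-zero on failure
    fi
    cat "$1"
}
```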
However, I no longer consider this to be a good practice. `set -e` has changed its semantics with different versions of bash, and although it works fine for a simple script, there are so many edge cases that it is essentially unusable. (Consider things like: `set -e; foo() { false; echo should not print; }; foo && echo ok`. The semantics here are somewhat reasonable, but if you refactor code into a function that relied on the option setting to terminate early, you can easily get bitten.) IMO it is better to write:
```sh
#!/bin/sh
command1 || exit
command2 || exit
command3 || exit
```
or
```sh
#!/bin/sh
command1 && command2 && command3
```
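The refactoring pitfall mentioned above can be seen directly: `set -e` is suppressed inside a function that is called as part of an `&&` list, so the function runs to completion even though `false` fails (a minimal sketch):

```bash
#!/bin/bash
set -e
foo() {
    false                     # under a plain `foo` call, set -e would stop here
    echo "should not print"   # ...but in `foo && ...` the option is suppressed
}
foo && echo ok
# Output:
#   should not print
#   ok
```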
I have a set of scripting functions that I use extensively on my Red Hat system. They use the system functions from `/etc/init.d/functions` to print green `[  OK  ]` and red `[FAILED]` status indicators.
You can optionally set the `$LOG_STEPS` variable to a log file name if you want to log which commands fail.
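For example, failures are appended to the file named by `$LOG_STEPS`. Here is a simplified sketch of the logging branch inside `try()`; the helper name and log path are illustrations, not part of the library below:

```bash
#!/bin/bash
# Simplified sketch of try()'s logging behavior (hypothetical helper).
LOG_STEPS=/tmp/steps.$$.log

log_step_failure() {
    # Append the failed command and its exit code to $LOG_STEPS.
    local code=$1; shift
    echo "Command \`$*' failed with exit code $code." >> "$LOG_STEPS"
}

false || log_step_failure $? false
cat "$LOG_STEPS"   # -> Command `false' failed with exit code 1.
rm -f "$LOG_STEPS"
```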
Usage
```bash
step "Installing XFS filesystem tools:"
try rpm -i xfsprogs-*.rpm
next

step "Configuring udev:"
try cp *.rules /etc/udev/rules.d
try udevtrigger
next

step "Adding rc.postsysinit hook:"
try cp rc.postsysinit /etc/rc.d/
try ln -s rc.d/rc.postsysinit /etc/rc.postsysinit
try echo $'\nexec /etc/rc.postsysinit' >> /etc/rc.sysinit
next
```
Output
```
Installing XFS filesystem tools:        [  OK  ]
Configuring udev:                       [FAILED]
Adding rc.postsysinit hook:             [  OK  ]
```
Code
```bash
#!/bin/bash

. /etc/init.d/functions

# Use step(), try(), and next() to perform a series of commands and print
# [ OK ] or [FAILED] at the end. The step as a whole fails if any individual
# command fails.
#
# Example:
#     step "Remounting / and /boot as read-write:"
#     try mount -o remount,rw /
#     try mount -o remount,rw /boot
#     next

step() {
    echo -n "$@"

    STEP_OK=0
    [[ -w /tmp ]] && echo $STEP_OK > /tmp/step.$$
}

try() {
    # Check for `-b' argument to run command in the background.
    local BG=

    [[ $1 == -b ]] && { BG=1; shift; }
    [[ $1 == -- ]] && { shift; }

    # Run the command.
    if [[ -z $BG ]]; then
        "$@"
    else
        "$@" &
    fi

    # Check if command failed and update $STEP_OK if so.
    local EXIT_CODE=$?

    if [[ $EXIT_CODE -ne 0 ]]; then
        STEP_OK=$EXIT_CODE
        [[ -w /tmp ]] && echo $STEP_OK > /tmp/step.$$

        if [[ -n $LOG_STEPS ]]; then
            local FILE=$(readlink -m "${BASH_SOURCE[1]}")
            local LINE=${BASH_LINENO[0]}

            echo "$FILE: line $LINE: Command \`$*' failed with exit code $EXIT_CODE." >> "$LOG_STEPS"
        fi
    fi

    return $EXIT_CODE
}

next() {
    [[ -f /tmp/step.$$ ]] && { STEP_OK=$(< /tmp/step.$$); rm -f /tmp/step.$$; }
    [[ $STEP_OK -eq 0 ]] && echo_success || echo_failure
    echo

    return $STEP_OK
}
```