Why can't I use job control in a bash script?



What he meant is that job control is turned off by default in non-interactive mode (i.e., in a script).

From the bash man page:

    JOB CONTROL
           Job control refers to the ability to selectively stop (suspend)
           the execution of processes and continue (resume) their execution
           at a later point.  A user typically employs this facility via an
           interactive interface supplied jointly by the system's terminal
           driver and bash.

and

    set [--abefhkmnptuvxBCHP] [-o option] [arg ...]
       ...
       -m      Monitor mode.  Job control is enabled.  This option is on
               by default for interactive shells on systems that support
               it (see JOB CONTROL above).  Background processes run in a
               separate process group and a line containing their exit
               status is printed upon their completion.

When he said it "is stupid" he meant that not only:

  1. is job control meant mostly for facilitating interactive control (whereas a script can work directly with the PIDs), but also
  2. to quote his original answer, it "... relies on the fact that you didn't start any other jobs previously in the script, which is a bad assumption to make." Which is quite correct.

UPDATE

In answer to your comment: yes, nobody will stop you from using job control in your bash script -- there is no hard case for forcefully disabling set -m (i.e., yes, job control from a script will work if you want it to). Remember that in the end, especially in scripting, there is always more than one way to skin a cat, but some ways are more portable, more reliable, make it simpler to handle error cases, parse the output, etc.
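As a minimal sketch of that idea (the sleep command is just a stand-in for real work), a script can simply enable monitor mode itself:

```shell
#!/bin/bash
set -m              # enable job control (monitor mode) in this script

sleep 2 &           # with set -m, the job gets its own process group
jobs                # the job table works as in an interactive shell
kill -STOP %1       # job-spec syntax (%1) is now available
kill -CONT %1       # resume the stopped job
wait %1             # wait for job 1 specifically
```

Whether this is a good idea for your script is a separate question; it merely shows that set -m makes the job-control machinery available.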

Your particular circumstances may or may not warrant a way different from what lhunath (and other users) deem "best practices".


Job control with bg and fg is useful only in interactive shells. But & in conjunction with wait is useful in scripts too.

On multiprocessor systems, spawning background jobs can greatly improve a script's performance, e.g. in build scripts where you want to start at least one compiler per CPU, or when processing images with ImageMagick tools in parallel, etc.

The following example runs up to 8 parallel gcc's to compile all source files in an array:

    #!/bin/bash
    # ...
    for ((i = 0, end = ${#sourcefiles[@]}; i < end; )); do
        for ((cpu_num = 0; cpu_num < 8; cpu_num++, i++)); do
            if ((i < end)); then
                gcc "${sourcefiles[i]}" &
            fi
        done
        wait
    done

There is nothing "stupid" about this. But you'll need the wait command, which waits for all background jobs to finish before the script continues. The PID of the last background job is stored in the $! variable, so you may also wait ${!}. Note also the nice command, which lets you lower the priority of such background jobs.
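For instance, capturing $! lets you wait on one specific job and pick up its exit status (long_task here is a made-up stand-in for a real command):

```shell
#!/bin/bash
long_task() { sleep 1; return 3; }   # hypothetical workload

long_task &                          # run it in the background
pid=$!                               # PID of the last background job
wait "$pid"                          # block until that job finishes
echo "exit status: $?"               # prints: exit status: 3
```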

Sometimes such code is useful in makefiles:

    buildall:
    	for cpp_file in *.cpp; do gcc -c $$cpp_file & done; wait

This gives much finer control than make -j.

Note that & is a line terminator like ; (write command& not command&;).

Hope this helps.


Job control is useful only when you are running an interactive shell, i.e., when you know that stdin and stdout are connected to a terminal device (/dev/pts/* on Linux). Then it makes sense to have something in the foreground, something else in the background, etc.

Scripts, on the other hand, don't have such a guarantee. Scripts can be made executable and run without any terminal attached. It doesn't make sense to have foreground or background processes in this case.

You can, however, run commands in the background non-interactively (appending "&" to the command line) and capture their PIDs with $!. Then you can use kill to terminate or suspend them (simulating Ctrl-C or Ctrl-Z on the terminal, as if the shell were interactive). You can also use wait (instead of fg) to wait for a background process to finish.
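A sketch of that idea (sleep stands in for a real long-running command; note that a background child of a non-interactive shell ignores SIGINT, so SIGTERM is used here to terminate it):

```shell
#!/bin/bash
sleep 30 &          # some long-running background command
pid=$!              # remember its PID

kill -STOP "$pid"   # suspend it, as Ctrl-Z would via SIGTSTP
kill -CONT "$pid"   # resume it, like bg/fg would
kill -TERM "$pid"   # terminate it (SIGINT would be ignored, see above)
wait "$pid"         # reap it; $? becomes 128 + 15 = 143
```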