Parallelizing a while loop with arrays read from a file in bash



From https://www.gnu.org/software/parallel/man.html#EXAMPLE:-Use-a-table-as-input:

"""
Content of table_file.tsv:

foo<TAB>bar
baz <TAB> quux

To run:

cmd -o bar -i foo
cmd -o quux -i baz

you can run:

parallel -a table_file.tsv --colsep '\t' cmd -o {2} -i {1}

"""

So in your case it will be:

cat fileinput | parallel --colsep '\t' myprogram {1} {2} {1}_vs_{2}.result


I like @chepner's hack. And it seems not so tricky to accomplish similar behaviour while limiting the number of parallel executions:

while IFS=$'\t' read -r f1 f2; do
    myprogram "$f1" "$f2" "${f1}_vs_${f2}.result" &
    # At most as many jobs as there are CPU cores
    [ $( jobs | wc -l ) -ge $( nproc ) ] && wait
done < fileinput
wait

It limits the number of concurrent jobs to the number of CPU cores present on the system. You can easily vary that by replacing $( nproc ) with the desired amount.

Meanwhile, you should understand that this is not a fair distribution: it does not start a new job as soon as one finishes. Instead, it starts the maximum number of jobs and then waits for all of them to finish before starting more. So overall throughput may be somewhat lower than with parallel, especially if the run time of your program varies over a wide range. If each invocation takes roughly the same time, the total time should also be roughly equivalent.
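With bash 4.3 or newer, `wait -n` (which waits for any single background job to finish) avoids the batching behaviour described above: a replacement job can be started as soon as one slot frees up. Below is a minimal self-contained sketch of that idea; `myprogram` and the generated `fileinput` are hypothetical stand-ins for your own program and data.

```shell
#!/usr/bin/env bash
# Requires bash >= 4.3 for `wait -n`.
# Hypothetical stand-in for the real myprogram: records its arguments.
myprogram() { printf '%s vs %s\n' "$1" "$2" > "$3"; }

# Small demo input table (tab-separated), standing in for `fileinput`.
printf 'a\tb\nc\td\ne\tf\n' > fileinput

max_jobs=$(nproc)
while IFS=$'\t' read -r f1 f2; do
    # When the pool is full, wait for ONE job to exit, not for all of them.
    while [ "$(jobs -rp | wc -l)" -ge "$max_jobs" ]; do
        wait -n
    done
    myprogram "$f1" "$f2" "${f1}_vs_${f2}.result" &
done < fileinput
wait    # wait for the jobs still running
```

This keeps the pool full at all times, so throughput should match parallel more closely even when job run times vary widely.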


parallel isn't strictly necessary here; just start all the processes in the background, then wait for them to complete. The array is also unnecessary, as you can give read more than one variable to populate:

while IFS=$'\t' read -r f1 f2; do
    myprogram "$f1" "$f2" "${f1}_vs_${f2}.result" &
done < fileinput
wait

This does start a single job for every item in your list, whereas parallel can limit the number of jobs running at once. You can accomplish the same in bash, but it's tricky.
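To see the fire-everything-then-wait pattern end to end, here is a self-contained sketch; `myprogram` is replaced by a hypothetical stand-in that just writes its first two arguments into the result file named by the third, and a tiny demo `fileinput` is generated inline.

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for myprogram from the question.
myprogram() { printf '%s vs %s\n' "$1" "$2" > "$3"; }

# Demo input standing in for `fileinput`: two tab-separated columns.
printf 'foo\tbar\nbaz\tquux\n' > fileinput

# Start one background job per input line, then wait for all of them.
while IFS=$'\t' read -r f1 f2; do
    myprogram "$f1" "$f2" "${f1}_vs_${f2}.result" &
done < fileinput
wait    # block until every background job has finished
```

After `wait` returns, every result file is guaranteed to be complete, which is what makes this pattern safe to follow with further processing of the output.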