
Efficient method for parallel processing in Bash/Shell?


You are looking for GNU Parallel:

cat Input.txt | parallel -j 100 python status_check.py > out_file.txt

GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to. It can often replace a for loop.
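As a minimal sketch of that replacement, using the file names from the command above, this sequential loop:

while read -r url; do
    python status_check.py "$url"
done < Input.txt > out_file.txt

becomes a single invocation in which up to 100 checks run at once (-j sets the number of simultaneous jobs, and each input line is appended to the command as its argument):

parallel -j 100 python status_check.py < Input.txt > out_file.txt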

If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU:

[Figure: Simple scheduling]

GNU Parallel instead spawns a new process when one finishes - keeping the CPUs active and thus saving time:

[Figure: GNU Parallel scheduling]
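You can watch this scheduling behaviour with a toy example, where sleep stands in for a real job: with -j 4, four sleeps start immediately, and each one that finishes is replaced at once by the next job in the queue, so one slow job never idles the other slots.

parallel -j 4 'sleep {}; echo {} done' ::: 8 1 1 1 2 2 2 3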

Installation

If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:

$ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
   fetch -o - http://pi.dk/3 ) > install.sh
$ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
12345678 883c667e 01eed62f 975ad28b 6d50e22a
$ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
cc21b4c9 43fd03e9 3ae1ae49 e28573c0
$ sha512sum install.sh | grep da012ec113b49a54e705f86d51e784eb
ced224fd f79945d9 d250b42a 42067bb0 099da012 ec113b49
a54e705f 86d51e78 4ebced22 4fdff3f5 2ca588d6 4e75f603
361bd543 fd631f59 22f87ceb 2ab03414 96df84a3 5
$ bash install.sh

For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README

Learn more

See more examples: http://www.gnu.org/software/parallel/man.html

Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html

Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel


Put an ampersand after your $1 and each command will run "concurrently".
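A minimal sketch of that approach, assuming the same input file as in the answer above: every job is launched at once (there is no limit on concurrency, and output ordering is not guaranteed), and wait blocks until all background jobs have finished.

while read -r url; do
    python status_check.py "$url" &   # & backgrounds each check
done < Input.txt > out_file.txt
wait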

Bash is probably not the right tool for this. Each fork is very expensive resource-wise. You'd be better off using Ruby or Python, reading the input into an array and then processing it inside the interpreter's VM.


Why not alter your Python script to read the URLs itself and then distribute the processing?

It seems a bit pointless to have a Bash for-loop when you could just do that in Python.

There are a number of Python modules for handling parallel processing listed here.