
Parallel Iterating IP Addresses in Bash


Small Scale - iterate

For a smaller IP address span, it is probably best to iterate directly:

for ip in 192.168.1.{1..10}; do ...

As stated in this similar question.
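
For instance, a minimal sketch that reports which hosts in a small range answer a single ping (ping -c1 -W1 is the Linux syntax; the ping is just a placeholder for whatever probe you actually need):

for ip in 192.168.1.{1..10}; do
    if ping -c1 -W1 "${ip}" > /dev/null 2>&1; then
        echo "${ip} is up"
    fi
done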


Big Scale - parallel!

Given that your problem deals with a huge IP address span, you should probably consider a different approach.

This begs for the use of GNU parallel.

Parallel iterating over a big span of IP addresses in bash with GNU parallel requires splitting the logic into several files (for the parallel command to use).

ip2int

#!/bin/bash
set -e

# Convert a dotted-quad IP address (e.g. 10.0.0.1) to its integer value.
function ip_to_int()
{
  local IP="$1"
  local A=$(echo $IP | cut -d. -f1)
  local B=$(echo $IP | cut -d. -f2)
  local C=$(echo $IP | cut -d. -f3)
  local D=$(echo $IP | cut -d. -f4)
  local INT

  INT=$(expr 256 "*" 256 "*" 256 "*" $A)
  INT=$(expr 256 "*" 256 "*" $B + $INT)
  INT=$(expr 256 "*" $C + $INT)
  INT=$(expr $D + $INT)

  echo $INT
}

# Convert an integer back to its dotted-quad IP address.
function int_to_ip()
{
  local INT="$1"

  local D=$(expr $INT % 256)
  local C=$(expr '(' $INT - $D ')' / 256 % 256)
  local B=$(expr '(' $INT - $C - $D ')' / 65536 % 256)
  local A=$(expr '(' $INT - $B - $C - $D ')' / 16777216 % 256)

  echo "$A.$B.$C.$D"
}
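
A quick sanity check of the two conversions (10 * 256^3 + 1 = 167772161, so the round trip should print the values shown):

source ip2int
ip_to_int 10.0.0.1     # prints 167772161
int_to_ip 167772161    # prints 10.0.0.1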



scan_ip

#!/bin/bash
set -e
source ip2int

if [[ $# -ne 1 ]]; then
    echo "Usage: $(basename "$0") ip_address_number"
    exit 1
fi

CONNECT_TIMEOUT=2 # in seconds
IP_ADDRESS="$(int_to_ip ${1})"

set +e
data=$(curl --head -vs -m ${CONNECT_TIMEOUT} https://${IP_ADDRESS}:443 2>&1)
exit_code="$?"
data=$(echo -e "${data}" | grep "Server: ")  # wasn't sure what you are looking for in your servers
set -e

if [[ ${exit_code} -eq 0 ]]; then
    if [[ -n "${data}" ]]; then
        echo "${IP_ADDRESS} - ${data}"
    else
        echo "${IP_ADDRESS} - Got empty data for server!"
    fi
else
    echo "${IP_ADDRESS} - no server."
fi
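
To probe a single address by hand (assuming scan_ip is executable and sits next to ip2int):

source ip2int
./scan_ip "$(ip_to_int 10.0.0.1)"
# prints "10.0.0.1 - Server: ...", "10.0.0.1 - Got empty data for server!"
# or "10.0.0.1 - no server.", depending on what answers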



scan_range

#!/bin/bash
set -e
source ip2int

START_ADDRESS="10.0.0.0"
NUM_OF_ADDRESSES="16777216" # 256 * 256 * 256

start_address_num=$(ip_to_int ${START_ADDRESS})
end_address_num=$(( start_address_num + NUM_OF_ADDRESSES - 1 ))  # -1: seq is inclusive on both ends

seq ${start_address_num} ${end_address_num} | parallel -P0 ./scan_ip

# This parallel call does the same as:
#
# for ip_num in $(seq ${start_address_num} ${end_address_num}); do
#     ./scan_ip ${ip_num}
# done
#
# only a LOT faster!
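
If you also want progress feedback while the scan runs, GNU parallel's --bar option can be added to the same pipeline (a sketch; results.txt is just an example filename):

seq ${start_address_num} ${end_address_num} | parallel --bar -P0 ./scan_ip > results.txt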


Improvement over the iterative approach:

The run time of the naive for loop, estimated at roughly 200 days for 256*256*256 = 16,777,216 addresses (at about one probe per second that works out to ~194 days), was improved to under a day according to @skrskrskr.


Shorter:

mycurl() {
    curl --head https://${1}:443 | grep -iE "(Server\:\ Target)" > ${1}_info.txt;
}
export -f mycurl

parallel -j0 --tag mycurl {1}.{2}.{3}.{4} ::: {10..10} ::: {0..255} ::: {0..255} ::: {0..255}
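
Note that export -f is what makes the mycurl function visible to the child shells GNU parallel spawns; without it the workers would fail with a command-not-found error. The four ::: sources make parallel generate the full Cartesian product of the octet ranges, i.e. all of 10.0.0.0/8 here ({10..10} is just the single value 10).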

Slightly different, using --tag (which prefixes each line of output with the arguments that produced it) instead of many _info.txt files:

parallel -j0 --tag curl --head https://{1}.{2}.{3}.{4}:443 ::: {10..10} ::: {0..255} ::: {0..255} ::: {0..255} | grep -iE "(Server\:\ Target)" > info.txt

Fan out to run more than 500 in parallel:

parallel echo {1}.{2}.{3}.{4} ::: {10..10} ::: {0..255} ::: {0..255} ::: {0..255} | \
  parallel -j100 --pipe -N1000 --load 100% --delay 1 parallel -j250 --tag -I ,,,, curl --head https://,,,,:443 | \
  grep -iE "(Server\:\ Target)" > info.txt

This will spawn up to 100*250 = 25,000 jobs, but it will try to find the optimal number of jobs at which no CPU sits idle. On my 8-core system that is 7500. Make sure you have enough RAM to run the potential maximum (25,000 jobs in this case).