Remove duplicate file names from a listing, disregarding directory
There's an expressive solution using pure Bash built-ins.
Using an associative array as a set, you check whether each filename is already a key; if it is, you skip that line and continue the loop.
    # Use an associative array as a set whose keys are the filenames seen so far.
    declare -A fileSet
    while IFS= read -r fullPath; do
        fileName="${fullPath##*/}"              # basename
        if [ -z "${fileSet[$fileName]}" ]; then # Not in the set yet.
            printf '%s\n' "$fullPath" >> "$FILEOUTPUT"
            fileSet[$fileName]=1
        fi
    done < "$FILEINPUT"
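As a quick sanity check, here is a self-contained run of the same loop on a small sample listing (the file names `listing.txt` and `out.txt` are made up for the demo):

```shell
# Hypothetical sample input: three paths, two of which share a basename.
printf '%s\n' /a/one.txt /b/two.txt /c/one.txt > listing.txt

declare -A fileSet
while IFS= read -r fullPath; do
    fileName="${fullPath##*/}"              # basename
    if [ -z "${fileSet[$fileName]}" ]; then # First time we see this name.
        printf '%s\n' "$fullPath" >> out.txt
        fileSet[$fileName]=1
    fi
done < listing.txt

cat out.txt
# /a/one.txt
# /b/two.txt
```

Note that the duplicate kept is the first occurrence; `/c/one.txt` is dropped because `one.txt` was already in the set.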
With awk, you could do:
    awk -F/ '{ if (!($NF in path1)) { print; path1[$NF] = $0 } }' filename
The filename is $NF, the last /-separated field. We build an array of filenames (path1) with their associated paths. For each record/line in the file, this array is checked for an existing entry under that filename. If there is an entry, the record is ignored, which suppresses the duplicate; otherwise the path is printed and recorded.
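The same logic can be compressed into the classic one-line awk dedup idiom: `!seen[$NF]++` is true only the first time a given basename appears, so only first occurrences are printed (the array name `seen` is arbitrary):

```shell
# Print each path only the first time its basename ($NF) is seen.
printf '%s\n' /a/one.txt /b/two.txt /c/one.txt |
awk -F/ '!seen[$NF]++'
# /a/one.txt
# /b/two.txt
```

The post-increment returns the old count, so the pattern evaluates to true (print) on the first sighting and false on every repeat.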