Remove duplicate file names from a listing, disregarding directory


Short awk solution:

awk -F'/' '!a[$NF]++' file
  • -F'/' - treats / as the field separator

  • !a[$NF]++ - prints a line only the first time its file name (the last field, $NF) is seen

The output:

/path/to/number1/file1.txt
/path/to/number1/file2.txt
/path/to/number1/file3.txt
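To see the deduplication in action, the one-liner can be exercised end to end like this (the sample paths are made up for illustration):

```shell
# Build a hypothetical listing with the same basename in different directories.
printf '%s\n' \
    /path/to/number1/file1.txt \
    /path/to/number2/file1.txt \
    /path/to/number1/file2.txt > listing.txt

# Keep only the first occurrence of each basename.
awk -F'/' '!a[$NF]++' listing.txt
# /path/to/number1/file1.txt
# /path/to/number1/file2.txt
```

The second line is dropped because file1.txt has already been seen, even though its directory differs.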


There's an expressive solution using pure Bash built-ins.

With an associative array used as a set, you check on each iteration whether the file name is already present as a key; if it is, you simply continue the loop.

# We will have a set which will contain existing filenames as keys.
declare -A fileSet
while IFS= read -r fullPath; do
    fileName="${fullPath##*/}" # basename
    if [ -z "${fileSet[$fileName]}" ]; then # If the file is not already in the set.
        echo "$fullPath" >> "$FILEOUTPUT"
        fileSet[$fileName]=1
    fi
done < "$FILEINPUT"
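The same loop can be wrapped in a small function that reads paths on stdin and writes unique-basename paths to stdout; the function name below is my own choice, and associative arrays require Bash 4 or later:

```shell
#!/usr/bin/env bash
# Sketch: set-based dedup of basenames, as a filter (hypothetical name).
dedup_basenames() {
    declare -A seen   # set of basenames already emitted
    local fullPath fileName
    while IFS= read -r fullPath; do
        fileName="${fullPath##*/}"            # strip everything up to the last /
        if [[ -z ${seen[$fileName]} ]]; then  # first time this basename appears
            printf '%s\n' "$fullPath"
            seen[$fileName]=1
        fi
    done
}

# Example run with made-up paths:
printf '%s\n' /a/x.txt /b/x.txt /a/y.txt | dedup_basenames
# /a/x.txt
# /a/y.txt
```

This avoids the fixed $FILEINPUT/$FILEOUTPUT variables, so the filter composes with pipes like the awk versions.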


With awk, you could do:

awk -F'/' '{ if ( path1[$NF] == "" ) { print $0; path1[$NF] = $0 } }' filename

The file name is represented by $NF, the last field when / is the separator. Inside awk we build an array path1, keyed by file name, holding the associated path. For each record/line in the file, the array is checked for an entry for that file name: if one exists, the record is ignored, which stops any duplication; otherwise the path is printed and recorded.
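Run against the same kind of listing, this variant behaves like the shorter !a[$NF]++ form (sample paths are hypothetical):

```shell
# Feed a listing with a repeated basename through the explicit-array version.
printf '%s\n' /dir1/a.txt /dir2/a.txt /dir1/b.txt |
    awk -F'/' '{ if ( path1[$NF] == "" ) { print $0; path1[$NF] = $0 } }'
# /dir1/a.txt
# /dir1/b.txt
```

One subtle difference: because this version tests for the empty string rather than counting occurrences, a line whose full text is empty would be printed every time it appears, whereas the increment form prints it only once.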