*nix: perform set union/intersection/difference of lists


Union: sort -u files...
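For a quick sketch, take two hypothetical sample files, a.txt and b.txt (reused in the examples below):

    $ printf '%s\n' apple banana > a.txt
    $ printf '%s\n' banana cherry > b.txt
    $ sort -u a.txt b.txt
    apple
    banana
    cherry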

Intersection: sort files... | uniq -d
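Continuing with the same sample files (note that this relies on each file having no internal duplicate lines; if a file may repeat a line, run it through sort -u first):

    $ sort a.txt b.txt | uniq -d
    banana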

Overall difference, i.e. the symmetric difference (elements which are in just one of the files):
sort files... | uniq -u
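With the same sample files:

    $ sort a.txt b.txt | uniq -u
    apple
    cherry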

Mathematical difference (elements that are only in one particular file, here fileX):
sort files... | uniq -u | sort - <(sort -u fileX) | uniq -d

The first two commands give us all elements that appear in just one of the files. Then we merge this list with the file we're interested in (a worked example follows the breakdown). Command breakdown for sort - <(sort -u fileX):

The - will process stdin (i.e. the list of all unique elements).

<(...) is process substitution: it runs a command and hands the outer command a path (typically a named pipe or /dev/fd entry) from which that command's output can be read.

So this gives us a mix of all unique elements plus all unique elements in fileX. The duplicates are then exactly the elements that are only in fileX.
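Putting it together with the sample files from above, the elements that are only in a.txt:

    $ sort a.txt b.txt | uniq -u | sort - <(sort -u a.txt) | uniq -d
    apple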


If you want to get the common lines between two files, you can use the comm utility.

A.txt:

    A
    B
    C

B.txt:

    A
    B
    D

and then, using comm will give you:

    $ comm <(sort A.txt) <(sort B.txt)
                    A
                    B
    C
            D

In the first column, you have what is in the first file and not in the second.

In the second column, you have what is in the second file and not in the first.

In the third column, you have what is in both files.
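comm can also suppress individual columns with the -1, -2 and -3 flags, which lets you extract just one of these sets directly:

    $ comm -12 <(sort A.txt) <(sort B.txt)    # only lines in both files
    A
    B
    $ comm -23 <(sort A.txt) <(sort B.txt)    # only lines unique to A.txt
    C
    $ comm -13 <(sort A.txt) <(sort B.txt)    # only lines unique to B.txt
    D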


If you don't mind using a bit of Perl, and if your files are small enough that their contents fit in memory, you can read each file into a hash (one key per line) and do:

    use strict;
    use warnings;

    # ...collect each file into a hash, one key per line...
    my (%from_1, %from_2);
    open my $fh1, '<', 'file1' or die $!;
    while (<$fh1>) { chomp; $from_1{$_} = 1 }
    open my $fh2, '<', 'file2' or die $!;
    while (<$fh2>) { chomp; $from_2{$_} = 1 }

    #...get common keys in an array...
    my @both_things;
    for (keys %from_1) {
        push @both_things, $_ if exists $from_2{$_};
    }

    #...put keys unique to the first file in an array...
    my @once_only;
    for (keys %from_1) {
        push @once_only, $_ unless exists $from_2{$_};
    }
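Since hash lookups take constant time on average, this runs in time roughly linear in the total input size, while the sort-based pipelines above are O(n log n); the trade-off is that both files have to fit in memory.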