Linux - join 2 CSV files
You can use the join command like this:
join -t, -1 3 -2 1 -o 1.1,1.2,1.3,1.4,2.3 <(sort -t, -k 3,3 file1) <(sort -t, -k 1,1 file2)
It first sorts the files (file1 by its 3rd field, file2 by its 1st), then joins them on the 3rd field of file1 and the 1st field of file2, and finally outputs only the fields you need.
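A minimal runnable sketch of that command, with hypothetical sample data matching the schema used in the SQLite answer below (file1 is id,user_id,message_id,rate; file2 is id,type,timestamp):

```shell
# Hypothetical sample data: file1 holds id,user_id,message_id,rate and
# file2 holds id,type,timestamp; we join on file1's 3rd = file2's 1st field.
printf '1,100,10,5\n2,200,11,3\n' > file1
printf '10,sms,1633036800\n11,email,1633036900\n' > file2

# Sort each file on its join key, then join and pick the output columns.
join -t, -1 3 -2 1 -o 1.1,1.2,1.3,1.4,2.3 \
    <(sort -t, -k 3,3 file1) <(sort -t, -k 1,1 file2)
# → 1,100,10,5,1633036800
# → 2,200,11,3,1633036900
```

Note that `<(...)` is bash process substitution, so this needs bash rather than plain sh.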
Seems to be a job for SQLite. Using the SQLite shell:
create table f1(id,user_id,message_id,rate);
create table f2(id,type,timestamp);
.separator ,
.import 'file_1.txt' f1
.import 'file_2.txt' f2
CREATE INDEX i1 ON f1(message_id ASC); -- optional
CREATE INDEX i2 ON f2(id ASC); -- optional
.output 'output.txt'
.separator ,
SELECT f1.id, f1.user_id, f1.message_id, f1.rate, f2.timestamp
FROM f1
JOIN f2 ON f2.id = f1.message_id;
.output stdout
.q
Note that if even a single line has the wrong number of commas, the import stage will fail. You can stop the rest of the script from running in that case by putting .bail on at the beginning of the script.
If you want unmatched ids you can try:
SELECT f1.* FROM f1 LEFT JOIN f2 on f2.id = f1.message_id WHERE f2.id IS NULL
which will select every row from f1 for which no corresponding row in f2 was found.
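To run the whole thing non-interactively, you can save the script to a file and feed it to the sqlite3 shell. A self-contained sketch (file names like join.sql and scratch.db are just names chosen here, and the sample data is hypothetical):

```shell
# Hypothetical input files matching the f1/f2 schemas above
printf '1,100,10,5\n' > file_1.txt
printf '10,sms,1633036800\n' > file_2.txt

# The script from the answer, saved to a file (join.sql is a name chosen here)
cat > join.sql <<'EOF'
.bail on
create table f1(id,user_id,message_id,rate);
create table f2(id,type,timestamp);
.separator ,
.import 'file_1.txt' f1
.import 'file_2.txt' f2
.output 'output.txt'
SELECT f1.id, f1.user_id, f1.message_id, f1.rate, f2.timestamp
FROM f1 JOIN f2 ON f2.id = f1.message_id;
.output stdout
EOF

# scratch.db is a throwaway database file for the temporary tables
sqlite3 scratch.db < join.sql
cat output.txt
```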
You can try this:
1. Change all lines to start with the key:
awk -F',' '{ print $3 " file1 " $1 " " $2 " " $4 }' < file1 > temp
awk -F',' '{ print $1 " file2 " $2 " " $3 }' < file2 >> temp
Now the lines look like:

message_id file1 id user_id rate
id file2 type timestamp
2. Sort temp by the first two columns, so that related lines become adjacent, with the file1 line first:

sort -k 1,1 -k 2,2 < temp > temp2
3. Run awk over temp2: on file1 lines save the fields, and on the matching file2 lines print them.
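A minimal sketch of that final awk pass, using the field layout from step 1 (the sample data here is hypothetical, created only so the script runs on its own):

```shell
# Hypothetical sample of temp2 after steps 1 and 2:
# "key file1 id user_id rate" followed by "key file2 type timestamp"
printf '%s\n' \
  '10 file1 7 100 5' \
  '10 file2 sms 1633036800' > temp2

# Step 3: on file1 lines remember the fields; on file2 lines with the
# same key, emit the joined CSV record (id,user_id,message_id,rate,timestamp).
awk '
    $2 == "file1" { key = $1; id = $3; user = $4; rate = $5; next }
    $2 == "file2" && $1 == key { print id "," user "," key "," rate "," $4 }
' temp2 > output.txt

cat output.txt   # → 7,100,10,5,1633036800
```

Because the sort in step 2 guarantees the file1 line comes first within each key, a single pass with one set of saved variables is enough.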