
Fastest way to find lines of a file from another larger file in Bash


A Perl solution.   [See Note below.]

Build a hash from the first (small) file. As you read the big file line by line, extract the field either with a regex (captures the first pattern between the | delimiters) or with split (takes the second field), and print the line if the field exists in the hash. The two likely differ a little in speed (time them). The defined check isn't needed in the regex version, while the split version uses the defined-or operator // which short-circuits.

use warnings;
use strict;

# If 'prog smallfile bigfile' is the preferred use
die "Usage: $0 smallfile bigfile\n"  if @ARGV != 2;
my ($smallfile, $bigfile) = @ARGV;

open my $fh, '<', $smallfile or die "Can't open $smallfile: $!";
my %word = map { chomp; $_ => 1 } <$fh>;

open    $fh, '<', $bigfile or die "Can't open $bigfile: $!";
while (<$fh>) {
    exists $word{ (/\|([^|]+)/)[0] } && print;

    # Or
    #exists $word{ (split /\|/)[1] // '' } && print;
}
close $fh;
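If saved as, say, match.pl (a filename of my choosing, not from the original), it would be run as

    perl match.pl smallfile bigfile > matches.txt

since the matching lines are written to STDOUT (see the Update below).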

Avoiding the if branch and using the short-circuit is faster, but only by very little. On billions of lines these tweaks add up, though still not to much. It may (or may not) be a touch faster to read the small file line by line instead of in list context as above, but this should not be noticeable.

Update   Writing to STDOUT saves two operations and I repeatedly time it to be a little faster than writing to a file; such usage is also consistent with most UNIX tools, so I changed the code to write to STDOUT. Next, the exists test is not strictly needed and dropping it spares an operation; however, I consistently get slightly better runtimes with it, and it also conveys the purpose better, so I am leaving it in. Thanks to ikegami for comments.

Note   The commented-out (split) version is about 50% faster than the regex one, by my benchmark below. Both are given because they do different things: one finds the first match between delimiters, the other takes the second field. I keep the regex as the active choice since it is the more generic one, the question being ambiguous on that point.


Some comparisons (benchmark)   [Updated for writing to STDOUT, see "Update" above]

There is an extensive analysis in the answer by Håkon Hægland, timing one run of most of the solutions. Here is another take, benchmarking the two solutions above, the OP's own answer, and the posted fgrep one, which is expected to be fast and is used in the question and in many answers.

I build the test data in the following way. A handful of lines, roughly of the length shown in the question, are made with random words for both files, so that they match in the second field. Then I pad this "seed" with lines that won't match, to mimic the ratios between sizes and matches quoted by the OP: for 14K lines in the small file there are 1.3M lines in the big file, yielding 126K matches. These samples are then written out repeatedly to build full data files of the OP's sizes, shuffle-d each time using List::Util.

All runs compared below produce 106_120 matches for the above file sizes (diff-ed as a check), so the matching frequency is close enough. They are benchmarked by invoking the complete programs via my $res = timethese(60, ...). The result of cmpthese($res) on v5.16 is

        Rate regex  cfor split fgrep
regex 1.05/s    --  -23%  -35%  -44%
cfor  1.36/s   30%    --  -16%  -28%
split 1.62/s   54%   19%    --  -14%
fgrep 1.89/s   80%   39%   17%    --

The fact that the optimized C program fgrep comes out on top isn't surprising. The lag of "regex" behind "split" may be due to the overhead of firing up the regex engine for small matches, many times over. This may vary across Perl versions, given the evolving regex engine optimizations. I include the answer of @codeforester ("cfor") since it was claimed to be fastest, and its 20% lag behind the very similar "split" is likely due to scattered small inefficiencies (see a comment below this answer).

These differences aren't dramatic, and there will certainly be variation across hardware, software, and data details. I ran this on different Perls and machines, and the notable difference is that in some cases fgrep was indeed an order of magnitude faster.

The OP's experience of very slow fgrep is surprising. Given their quoted run times, an order of magnitude slower than the above, I'd guess that an old system is to "blame."

Even though this job is completely I/O bound, there are concurrency benefits from spreading it over multiple cores, and I'd expect a good speedup, up to a factor of a few.
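For instance, GNU parallel could split the big file into chunks and run one copy of the program per core. This is a rough sketch of the idea only (not benchmarked here); it assumes the program above is saved as match.pl and reads its chunk of the big file via /dev/stdin, and the block size would need tuning:

    # each ~100M chunk of bigfile is piped to its own perl process, which reads it as /dev/stdin
    parallel -k --pipepart -a bigfile --block 100M perl match.pl smallfile /dev/stdin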


Alas, the comment referenced above got deleted (?). In short: unneeded use of a scalar (which costs), of an if branch, of defined, and of printf instead of print (slow!). These matter for efficiency on 2 billion lines.


I have tried to compare some of the methods presented here.

First I created a Perl script to generate the input files file1.txt and file2.txt. In order to compare some of the solutions, I made sure that the words from file1.txt could only appear in the second field of file2.txt. Also, to be able to use the join solution presented by @GeorgeVasiliou, I sorted file1.txt and file2.txt. Currently I generate the input files based on only 75 random words (taken from https://www.randomlists.com/random-words). Only 5 of these 75 words are used in file1.txt; the remaining 70 words are used to fill up the fields in file2.txt. It might be necessary to increase the number of words substantially to get realistic results (according to the OP, the original file1.txt contained 14000 words). In the tests below I used a file2.txt with 1000000 (1 million) lines. The script also generates the file regexp1.txt required by the grep solution of @BOC.

gen_input_files.pl:

#! /usr/bin/env perl
use feature qw(say);
use strict;
use warnings;
use Data::Printer;
use Getopt::Long;

GetOptions ("num_lines=i" => \my $nlines )
  or die("Error in command line arguments\n");

# Generated random words from site: https://www.randomlists.com/random-words
my $word_filename        = 'words.txt'; # 75 random words
my $num_match_words      = 5;
my $num_file2_lines      = $nlines || 1_000_000;
my $file2_words_per_line = 3;
my $file2_match_field_no = 2;
my $file1_filename       = 'file1.txt';
my $file2_filename       = 'file2.txt';
my $file1_regex_fn       = 'regexp1.txt';

say "generating $num_file2_lines lines..";
my ( $words1, $words2 ) = get_words( $word_filename, $num_match_words );
write_file1( $file1_filename, $words2 );
write_file2(
    $file2_filename, $words1, $words2, $num_file2_lines,
    $file2_words_per_line, $file2_match_field_no
);
write_BOC_regexp_file( $file1_regex_fn, $words2 );

sub write_BOC_regexp_file {
    my ( $fn, $words ) = @_;

    open( my $fh, '>', $fn ) or die "Could not open file '$fn': $!";
    print $fh '\\|' . (join "|", @$words) . '\\|';
    close $fh;
}

sub write_file2 {
    my ( $fn, $words1, $words2, $nlines, $words_per_line, $field_no ) = @_;

    my $nwords1 = scalar @$words1;
    my $nwords2 = scalar @$words2;
    my @lines;
    for (1..$nlines) {
        my @words_line;
        my $key;
        for (1..$words_per_line) {
            my $word;
            if ( $_ != $field_no ) {
                my $index = int (rand $nwords1);
                $word = @{ $words1 }[$index];
            }
            else {
                my $index = int (rand($nwords1 + $nwords2) );
                if ( $index < $nwords2 ) {
                    $word = @{ $words2 }[$index];
                }
                else {
                    $word =  @{ $words1 }[$index - $nwords2];
                }
                $key = $word;
            }
            push @words_line, $word;
        }
        push @lines, [$key, (join "|", @words_line)];
    }
    @lines = map { $_->[1] } sort { $a->[0] cmp $b->[0] } @lines;
    open( my $fh, '>', $fn ) or die "Could not open file '$fn': $!";
    print $fh (join "\n", @lines);
    close $fh;
}

sub write_file1 {
    my ( $fn, $words ) = @_;

    open( my $fh, '>', $fn ) or die "Could not open file '$fn': $!";
    print $fh (join "\n", sort @$words);
    close $fh;
}

sub get_words {
    my ( $fn, $N ) = @_;

    open( my $fh, '<', $fn ) or die "Could not open file '$fn': $!";
    my @words = map {chomp $_; $_} <$fh>;
    close $fh;

    my @words1 = @words[$N..$#words];
    my @words2 = @words[0..($N - 1)];
    return ( \@words1, \@words2 );
}
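A usage sketch, based on the GetOptions call above (words.txt with the 75 random words must already be in the current directory):

    # writes file1.txt, file2.txt and regexp1.txt to the current directory
    perl gen_input_files.pl --num_lines 1000000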

Next, I created a sub folder solutions with all the test cases:

$ tree solutions/
solutions/
├── BOC1
│   ├── out.txt
│   └── run.sh
├── BOC2
│   ├── out.txt
│   └── run.sh
├── codeforester
│   ├── out.txt
│   ├── run.pl
│   └── run.sh
[...]

Here the file out.txt holds the output from the grep (or equivalent) for each solution, and the script run.sh runs the solution for the given test case.
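For example, a run.sh for the BOC1 case could look roughly like this (a sketch only; the actual scripts are in the linked repository, and the relative paths to the input files are my assumption):

    #!/usr/bin/env bash
    # hypothetical run.sh for the BOC1 case
    grep -E -f ../../regexp1.txt ../../file2.txt > out.txt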

Notes on the different solutions

  • BOC1 : First solution presented by @BOC

    grep -E -f regexp1.txt file2.txt
  • BOC2 : Second solution suggested by @BOC:

    LC_ALL=C grep -E -f regexp1.txt file2.txt
  • codeforester : Accepted Perl solution by @codeforester ( see source )

  • codeforester_orig : Original solution presented by @codeforester:

    fgrep -f file1.txt file2.txt
  • dawg : Python solution using dictionary and split line proposed by @dawg ( see source )

  • gregory1 : solution using Gnu Parallel suggested by @gregory

    parallel -k --pipepart -a file2.txt --block "$block_size" fgrep -F -f file1.txt

    See note below regarding how to choose $block_size.

  • hakon1 : Perl solution provided by @HåkonHægland (see source). This solution requires compilation of the c-extension the first time the code is run. It does not require recompilation when file1.txt or file2.txt changes. Note: The time used to compile the c-extension at the initial run is not included in the run times presented below.

  • ikegami : Solution using assembled regexp and using grep -P as given by @ikegami. Note: The assembled regexp was written to a separate file regexp_ikegami.txt, so the runtime of generating the regexp is not included in the comparison below. This is the code used:

    regexp=$(< "regexp_ikegami.txt")
    grep -P "$regexp" file2.txt
  • inian1 : First solution by @Inian using match()

    awk 'FNR==NR{
        hash[$1]; next
    }
    {
       for (i in hash) if (match($0,i)) {print; break}
    }' file1.txt FS='|' file2.txt
  • inian2 : Second solution by @Inian using index()

    awk 'FNR==NR{
        hash[$1]; next
    }
    {
       for (i in hash) if (index($0,i)) {print; break}
    }' file1.txt FS='|' file2.txt
  • inian3 : Third solution by @Inian checking only $2 field:

    awk 'FNR==NR{
        hash[$1]; next
    }
    $2 in hash' file1.txt FS='|' file2.txt
  • inian4 : 4th solution by @Inian ( basically the same as codeforester_orig with LC_ALL ) :

    LC_ALL=C fgrep -f file1.txt file2.txt
  • inian5 : 5th solution by @Inian (same as inian1 but with LC_ALL ):

    LC_ALL=C awk 'FNR==NR{
        hash[$1]; next
    }
    {
       for (i in hash) if (match($0,i)) {print; break}
    }' file1.txt FS='|' file2.txt
  • inian6 : Same as inian3 but with LC_ALL=C. Thanks to @GeorgeVasiliou for suggestion.

  • jjoao : Compiled flex-generated C code as proposed by @JJoao (see source ). Note: Recompilation of the executable must be done each time file1.txt changes. The time used to compile the executable is not included in the run times presented below.

  • oliv : Python script provided by @oliv ( see source )

  • Vasiliou : Using join as suggested by @GeorgeVasiliou (the sorting this requires is sketched after this list):

    join --nocheck-order -11 -22 -t'|' -o 2.1 2.2 2.3 file1.txt file2.txt
  • Vasiliou2 : Same as Vasiliou but with LC_ALL=C.

  • zdim : Using Perl script provided by @zdim ( see source ). Note: This uses the regexp search version ( instead of split line solution ).

  • zdim2 : Same as zdim except that it uses the split function instead of regexp search for the field in file2.txt.
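Regarding the Vasiliou entries above: join requires both inputs to be sorted on their join fields. The generator script already writes them sorted, but with other data something like this would be needed first (my own sketch, not part of the original answer; for the LC_ALL=C variant the sort must also run under LC_ALL=C so the orderings agree):

    sort file1.txt -o file1.txt                  # small file: one word per line
    sort -t'|' -k2,2 file2.txt -o file2.txt      # big file: sort on field 2, the join field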

Notes

  1. I experimented a little bit with Gnu parallel (see the gregory1 solution above) to determine the optimal block size for my CPU. I have 4 cores, and currently it seems that the optimal choice is to divide the file (file2.txt) into 4 equal-sized chunks and run a single job on each of the 4 processors. More testing might be needed here. So for the first test case, where file2.txt is 20M, I set $block_size to 5M (see the gregory1 solution above), whereas for the more realistic case presented below, where file2.txt is 268M, a $block_size of 67M was used. A sketch of this calculation is given after these notes.

  2. The solutions BOC1, BOC2, codeforester_orig, inian1, inian4, inian5, and gregory1 all used loose matching, meaning that the words from file1.txt did not have to match exactly in field #2 of file2.txt; a match anywhere on the line was accepted. Since this behavior made it more difficult to compare them with the other methods, some modified methods were also introduced. The first two, called BOC1B and BOC2B, used a modified regexp1.txt file. The lines in the original regexp1.txt were of the form \|foo1|foo2|...|fooN\|, which would match the words at any field boundary. The modified file, regexp1b.txt, anchored the match to field #2 exclusively by using the form ^[^|]*\|foo1|foo2|...|fooN\| instead.

    Then the rest of the modified methods, codeforester_origB, inian1B, inian4B, inian5B, and gregory1B, used a modified file1.txt. Instead of a literal word per line, the modified file file1b.txt used one regex per line, of the form:

     ^[^|]*\|word1\|
     ^[^|]*\|word2\|
     ^[^|]*\|word3\|
     [...]

    and in addition, fgrep -f was replaced by grep -E -f for these methods.
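The chunk size from note 1 can be computed along these lines (a sketch of the calculation only; nproc and stat -c%s are GNU tools and assumed available):

    ncores=$(nproc)                                   # number of CPU cores
    size=$(stat -c%s file2.txt)                       # size of file2.txt in bytes
    block_size=$(( (size + ncores - 1) / ncores ))    # one chunk per core, rounded up
    parallel -k --pipepart -a file2.txt --block "$block_size" fgrep -F -f file1.txt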

Running the tests

Here is the script used for running all the tests. It uses the Bash time command to record the time spent on each script. Note that the time command returns three different times, called real, user, and sys. At first I used user + sys, but I realized this was incorrect when using the GNU parallel command, so the time reported below is now the real part returned by time. See this question for more information about the different times returned by time.
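The essence of the timing is captured like this (an illustrative sketch of the approach; the actual extraction happens inside run_all.pl, shown below):

    # run one case, collect the 'time -p' report from stderr, and keep only the 'real' figure
    output=$(bash -c "{ time -p bash run.sh > /dev/null; } 2>&1")
    real=$(awk '/^real/ { print $2 }' <<< "$output")
    echo "real: ${real}s"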

The first test is run with file1.txt containing 5 lines, and file2.txt containing 1000000 lines. Here are the first 52 lines of the run_all.pl script; the rest of the script is available here.

run_all.pl

#! /usr/bin/env perl
use feature qw(say);
use strict;
use warnings;
use Cwd;
use Getopt::Long;
use Data::Printer;
use FGB::Common;
use List::Util qw(max shuffle);
use Number::Bytes::Human qw(format_bytes);
use Sys::Info;

GetOptions (
    "verbose"       => \my $verbose,
    "check"         => \my $check,
    "single-case=s" => \my $case,
    "expected=i"    => \my $expected_no_lines,
) or die("Error in command line arguments\n");

my $test_dir    = 'solutions';
my $output_file = 'out.txt';
my $wc_expected = $expected_no_lines; # expected number of output lines
my $tests       = get_test_names( $test_dir, $case );
my $file2_size  = get_file2_size();
my $num_cpus    = Sys::Info->new()->device( CPU => () )->count;

chdir $test_dir;
my $cmd = 'run.sh';
my @times;
for my $case (@$tests) {
    my $savedir = getcwd();
    chdir $case;
    say "Running '$case'..";
    my $arg = get_cmd_args( $case, $file2_size, $num_cpus );
    my $output = `bash -c "{ time -p $cmd $arg; } 2>&1"`;
    my ($user, $sys, $real ) = get_run_times( $output );
    print_timings( $user, $sys, $real ) if $verbose;
    check_output_is_ok( $output_file, $wc_expected, $verbose, $check );
    print "\n" if $verbose;
    push @times, $real;
    #push @times, $user + $sys; # this is wrong when using Gnu parallel
    chdir $savedir;
}
say "Done.\n";
print_summary( $tests, \@times );

Results

Here is the output from running the tests:

$  run_all.pl --verbose
Running 'inian3'..
..finished in 0.45 seconds ( user: 0.44, sys: 0.00 )
..no of output lines: 66711

Running 'inian2'..
..finished in 0.73 seconds ( user: 0.73, sys: 0.00 )
..no of output lines: 66711

Running 'Vasiliou'..
..finished in 0.09 seconds ( user: 0.08, sys: 0.00 )
..no of output lines: 66711

Running 'codeforester_orig'..
..finished in 0.05 seconds ( user: 0.05, sys: 0.00 )
..no of output lines: 66711

Running 'codeforester'..
..finished in 0.45 seconds ( user: 0.44, sys: 0.01 )
..no of output lines: 66711
[...]

Summary

[Results obtained by @Vasiliou are shown in the middle column.]

                               |Vasiliou
My Benchmark                   |Results  |   Details
-------------------------------|---------|----------------------
inian4             : 0.04s     |0.22s    | LC_ALL fgrep -f [loose]
codeforester_orig  : 0.05s     |         | fgrep -f [loose]
Vasiliou2          : 0.06s     |0.16s    | [LC_ALL join [requires sorted files]]
BOC1               : 0.06s     |         | grep -E [loose]
BOC2               : 0.07s     |15s      | LC_ALL grep -E [loose]
BOC2B              : 0.07s     |         | LC_ALL grep -E [strict]
inian4B            : 0.08s     |         | LC_ALL grep -E -f [strict]
Vasiliou           : 0.08s     |0.23s    | [join [requires sorted files]]
gregory1B          : 0.08s     |         | [parallel + grep -E -f [strict]]
ikegami            : 0.1s      |         | grep -P
gregory1           : 0.11s     |0.5s     | [parallel + fgrep -f [loose]]
hakon1             : 0.14s     |         | [perl + c]
BOC1B              : 0.14s     |         | grep -E [strict]
jjoao              : 0.21s     |         | [compiled flex generated c code]
inian6             : 0.26s     |0.7s     | [LC_ALL awk + split+dict]
codeforester_origB : 0.28s     |         | grep -E -f [strict]
dawg               : 0.35s     |         | [python + split+dict]
inian3             : 0.44s     |1.1s     | [awk + split+dict]
zdim2              : 0.4s      |         | [perl + split+dict]
codeforester       : 0.45s     |         | [perl + split+dict]
oliv               : 0.5s      |         | [python + compiled regex + re.search()]
zdim               : 0.61s     |         | [perl + regexp+dict]
inian2             : 0.73s     |1.7s     | [awk + index($0,i)]
inian5             : 18.12s    |         | [LC_ALL awk + match($0,i) [loose]]
inian1             : 19.46s    |         | [awk + match($0,i) [loose]]
inian5B            : 42.27s    |         | [LC_ALL awk + match($0,i) [strict]]
inian1B            : 85.67s    |         | [awk + match($0,i) [strict]]

Vasiliou Results : 2 X CPU Intel 2 Duo T6570 @ 2.10GHz - 2Gb RAM-Debian Testing 64bit- kernel 4.9.0.1 - no cpu freq scaling.

A more realistic test case

I then created a more realistic case with file1.txt having 100 words and file2.txt having 10 million lines (268 MB file size). I extracted 1000 random words from the dictionary at /usr/share/dict/american-english using shuf -n1000 /usr/share/dict/american-english > words.txt, then extracted 100 of these words into file1.txt and constructed file2.txt the same way as described above for the first test case. Note that the dictionary file was UTF-8 encoded, and I stripped away all non-ASCII characters from words.txt.
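The stripping could be done along these lines (my own sketch; the exact command used is not shown in the answer):

    # keep only words made of plain ASCII letters, dropping possessives, accents, etc.
    shuf -n1000 /usr/share/dict/american-english \
        | LC_ALL=C grep -x '[a-zA-Z]*' > words.txt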

Then I ran the tests without the three slowest methods from the previous case, i.e. inian1, inian2, and inian5 were left out. Here are the new results:

gregory1           : 0.86s     | [parallel + fgrep -f [loose]]
Vasiliou2          : 0.94s     | [LC_ALL join [requires sorted files]]
inian4B            : 1.12s     | LC_ALL grep -E -f [strict]
BOC2B              : 1.13s     | LC_ALL grep -E [strict]
BOC2               : 1.15s     | LC_ALL grep -E [loose]
BOC1               : 1.18s     | grep -E [loose]
ikegami            : 1.33s     | grep -P
Vasiliou           : 1.37s     | [join [requires sorted files]]
hakon1             : 1.44s     | [perl + c]
inian4             : 2.18s     | LC_ALL fgrep -f [loose]
codeforester_orig  : 2.2s      | fgrep -f [loose]
inian6             : 2.82s     | [LC_ALL awk + split+dict]
jjoao              : 3.09s     | [compiled flex generated c code]
dawg               : 3.54s     | [python + split+dict]
zdim2              : 4.21s     | [perl + split+dict]
codeforester       : 4.67s     | [perl + split+dict]
inian3             : 5.52s     | [awk + split+dict]
zdim               : 6.55s     | [perl + regexp+dict]
gregory1B          : 45.36s    | [parallel + grep -E -f [strict]]
oliv               : 60.35s    | [python + compiled regex + re.search()]
BOC1B              : 74.71s    | grep -E [strict]
codeforester_origB : 75.52s    | grep -E -f [strict]

Note

The grep based solutions were looking for a match on the whole line, so in this case they contained some false matches: the methods codeforester_orig, BOC1, BOC2, gregory1, inian4, and oliv extracted 1,087,609 lines out of 10,000,000 lines, whereas the other methods extracted the correct 997,993 lines from file2.txt.
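A quick way to see which solutions over-match (my own sketch, assuming the out.txt files from the runs are still in place under solutions/):

    wc -l solutions/*/out.txt                 # loose methods show ~1,087,609 lines, the rest 997,993
    # the extra lines picked up by a loose method, e.g. BOC1 versus its strict variant BOC1B
    diff <(sort solutions/BOC1/out.txt) <(sort solutions/BOC1B/out.txt) | head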

Notes

  • I tested this on my Ubuntu 16.10 laptop (Intel Core i7-7500U CPU @ 2.70GHz)

  • The whole benchmark study is available here.


Did you try Awk, which could speed things up a bit:

awk 'FNR==NR{hash[$1]; next}{for (i in hash) if (match($0,i)) {print; break}}' file1.txt FS='|' file2.txt

(or) using the index() function in Awk, as suggested in the comments by Benjamin W. below:

awk 'FNR==NR{hash[$1]; next}{for (i in hash) if (index($0,i)) {print; break}}' file1.txt FS='|' file2.txt

(or) a more direct regex match, as suggested by Ed Morton in the comments:

awk 'FNR==NR{hash[$1]; next}{for (i in hash) if ($0~i) {print; break}}' file1.txt FS='|' file2.txt

is all you need. I'm guessing this will be faster, but I'm not entirely sure on files with a million+ entries. The problem here is the possibility of a match anywhere along the line. Had the match been restricted to a particular column (e.g. $2 alone), a faster approach would be

awk 'FNR==NR{hash[$1]; next}$2 in hash' file1.txt FS='|' file2.txt

Also, you could speed things up by playing with the locale set on your system. Paraphrasing this wonderful answer by Stéphane Chazelas on the subject, you can often gain a lot simply by passing the locale LC_ALL=C to the command being run.

On any GNU based system, these are the defaults for the locale:

$ locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=

With the single variable LC_ALL, you can set all the LC_* variables at once to a specified locale:

$ LC_ALL=C locale
LANG=en_US.UTF-8
LC_CTYPE="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_COLLATE="C"
LC_MONETARY="C"
LC_MESSAGES="C"
LC_PAPER="C"
LC_NAME="C"
LC_ADDRESS="C"
LC_TELEPHONE="C"
LC_MEASUREMENT="C"
LC_IDENTIFICATION="C"
LC_ALL=C

So what does this impact?

Simply put, when using the locale C, the tools default to the base Unix/Linux character set, ASCII. By default, when you grep something, your locale is internationalized and set to UTF-8, which can represent every character in the Unicode character set (currently more than 110,000 unique characters) in order to cover all of the world's writing systems. With ASCII, on the other hand, each character is encoded in a single byte, and its character set comprises no more than 128 unique characters.

It translates to this: when using grep on a file encoded in the UTF-8 character set, each character needs to be matched against any of a hundred thousand unique characters, but only against 128 in ASCII. So use your fgrep as

LC_ALL=C fgrep -F -f file1.txt file2.txt
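To see the effect on your own data, you could time the same scan under both locales (illustrative only; 'foo' is a stand-in pattern, and the gap is usually largest with -i or bracket expressions):

    time LC_ALL=en_US.UTF-8 grep -i 'foo' file2.txt > /dev/null
    time LC_ALL=C           grep -i 'foo' file2.txt > /dev/null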

The same can be adapted to the Awk solution: since it uses a regex match via the match($0,i) call, setting the C locale could speed up the string match there too.

LC_ALL=C awk 'FNR==NR{hash[$1]; next}{for (i in hash) if (match($0,i)) {print; break}}' file1.txt FS='|' file2.txt