
Why is Arrays.binarySearch not improving the performance compared to walking the array?


Your timing includes the retrieval of the data from your hard drive, which could account for the majority of your runtime. Omit the data load from your timing to get an accurate comparison of your two approaches. Imagine the load takes 18 seconds: you would be comparing 18.644 vs 18.789 (a 0.77% improvement) instead of 0.644 vs 0.789 (an 18.38% improvement).

If you combine a linear operation O(n), such as loading the data, with a binary search O(log n), you still end up with O(n). If you trust Big O notation, you should expect O(n + log n) to not be significantly different from O(2 * n), since both reduce to O(n).
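To put rough numbers on that, here is a quick sketch using the 73382-element input from the tests further down:

int n = 73382;                               // number of house locations in the test input
double log2n = Math.log(n) / Math.log(2);    // ~16.2
System.out.println("n + log n ~= " + (n + log2n));  // ~73398
System.out.println("2 * n      = " + (2 * n));      // 146764
// Both totals are dominated by the linear term, which is why Big O treats them the same.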

Also, a binary search may perform better or worse than a linear search depending on the density of houses between towers. Consider, say, 1024 homes with a tower evenly placed every 4 homes. A linear walk will step about 4 times per tower, while a binary search over the whole array will take log2(1024) = 10 steps per tower.

One more thing... your minNumOfTransmitters method is sorting the already-sorted array passed into it from test_01 and test_02. That resorting step takes longer than your searches themselves, which further obscures the timing differences between your two search algorithms.

======

I created a small timing class to give a better picture of what's happening. I removed the line of code from minNumOfTransmitters that reran the sort, and added a boolean parameter to select whether to use your binary-search version. It totals the times over 400 iterations, separating out each step. The results on my system show that the load time dwarfs the sort time, which in turn dwarfs the solve time.

  Load:  22.565s
  Sort:   4.518s
Linear:   0.012s
Binary:   0.003s

It's easy to see how optimizing that last step doesn't make much difference in overall runtime.

private static class Timing {
    public long load = 0;
    public long sort = 0;
    public long solve1 = 0;
    public long solve2 = 0;

    private String secs(long millis) {
        return String.format("%3d.%03ds", millis / 1000, millis % 1000);
    }

    public String toString() {
        return "  Load: " + secs(load) + "\n  Sort: " + secs(sort)
                + "\nLinear: " + secs(solve1) + "\nBinary: " + secs(solve2);
    }

    public void add(Timing timing) {
        load += timing.load;
        sort += timing.sort;
        solve1 += timing.solve1;
        solve2 += timing.solve2;
    }
}

static Timing test_01() throws FileNotFoundException {
    Timing timing = new Timing();

    long start = System.currentTimeMillis();
    final File file = new File("c:\\path\\to\\xnpwdiG3.txt");
    final Scanner scanner = new Scanner(file);
    int[] houseLocations = new int[73382];
    for (int counter = 0; counter < 73382; counter++) {
        houseLocations[counter] = scanner.nextInt();
    }
    timing.load += System.currentTimeMillis() - start;

    start = System.currentTimeMillis();
    final int[] uniqueHouseLocationsSorted =
            HackerlandRadioTransmitters.uniqueHouseLocationsSorted(houseLocations);
    timing.sort = System.currentTimeMillis() - start;

    start = System.currentTimeMillis();
    final int minNumOfTransmitters =
            HackerlandRadioTransmitters.minNumOfTransmitters(uniqueHouseLocationsSorted, 73381, false);
    timing.solve1 = System.currentTimeMillis() - start;

    start = System.currentTimeMillis();
    final int minNumOfTransmittersBin =
            HackerlandRadioTransmitters.minNumOfTransmitters(uniqueHouseLocationsSorted, 73381, true);
    timing.solve2 = System.currentTimeMillis() - start;

    return timing;
}
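The driver that accumulates those 400 runs isn't shown above; a minimal sketch of what it could look like, assuming a plain main method that just sums each run's Timing, is:

public static void main(String[] args) throws FileNotFoundException {
    Timing total = new Timing();
    for (int i = 0; i < 400; i++) {
        total.add(test_01());   // each run re-loads, re-sorts and solves both ways
    }
    System.out.println(total);  // prints the Load/Sort/Linear/Binary totals shown above
}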


In your time measurement you include operations that are much slower than the array search, namely filesystem I/O and array sorting. I/O in general (reading/writing the filesystem, network communication) is orders of magnitude slower than operations that only touch the CPU and RAM.

Let's rewrite your test in a way that does not read the file on every loop iteration:

static void test_02() throws FileNotFoundException {
    final File file = new File("input.txt");
    final Scanner scanner = new Scanner(file);
    int[] houseLocations = new int[73382];
    for (int counter = 0; counter < 73382; counter++) {
        houseLocations[counter] = scanner.nextInt();
    }
    scanner.close();

    final int rounds = 400;
    final int[] uniqueHouseLocationsSorted = uniqueHouseLocationsSorted(houseLocations);
    final int transmitterRange = 73381;

    final long start = System.currentTimeMillis();
    for (int i = 0; i < rounds; i++) {
        final int minNumOfTransmitters = minNumOfTransmitters(uniqueHouseLocationsSorted, transmitterRange);
    }
    final long end = System.currentTimeMillis();
    System.out.println("Took: " + (end - start) + " milliseconds..");
}

Notice that in this version of the test the file is read only once, and the time measurement starts after that. With the above, I get Took: 1700 milliseconds.. (give or take a few millis) for both the iterative version and the binary search, so we still can't see that binary search is faster. That's because almost all of that time goes into sorting the array 400 times.

Now let's remove the line that sorts the input array from the minNumOfTransmitters method. We sort the array (once) anyway at the beginning of the test.

Now we can see that things are much faster. After removing the line houseLocations = uniqueHouseLocationsSorted(houseLocations) from minNumOfTransmitters I get: Took: 68 milliseconds.. for the iterative version. Clearly, since this duration is already very small, we will not see a significant difference with the binary search version.

So let's increase the number of loop rounds to 100000.
Now I get Took: 2121 milliseconds.. for the iterative version and Took: 36 milliseconds.. for the binary search version.

Because we have now isolated the measurement to the array searches, instead of including operations that are much slower, we can see the big performance advantage of binary search.

If you want to see how many times binary search enters its while loop, you can implement it yourself and add a counter:

private static int binarySearch0(int[] a, int fromIndex, int toIndex, int key) {
    int low = fromIndex;
    int high = toIndex - 1;
    int loop = 0;
    while (low <= high) {
        loop++;
        int mid = (low + high) >>> 1;
        int midVal = a[mid];
        if (midVal < key) {
            low = mid + 1;
        } else if (midVal > key) {
            high = mid - 1;
        } else {
            return mid; // key found
        }
    }
    System.out.println("binary search looped " + loop + " times");
    return -(low + 1);  // key not found.
}

The method is copied from the Arrays class in the JDK; I just added the loop counter and the println. When the length of the array to search is 73382, the loop runs only 16 times. That is exactly what we expect: log2(73382) ≈ 16.
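To try it out, you can call the copied method directly on the sorted array. A hypothetical call, assuming you are inside a test method where houseLocations has already been read and using -1 as an example key that is not in the input (so the not-found branch with the println is reached), could look like:

int[] sorted = uniqueHouseLocationsSorted(houseLocations);
int index = binarySearch0(sorted, 0, sorted.length, -1);
// A negative return value encodes the insertion point as -(insertionPoint + 1).
System.out.println("result: " + index);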


I agree with the other answers that the main issue with your tests is that they measure the wrong things: I/O and sorting. But I don't think the suggested tests are good. My suggestion is the following:

static void test_02() throws FileNotFoundException {
    final File file = new File("43620487.txt");
    final Scanner scanner = new Scanner(file);
    int[] houseLocations = new int[73382];
    for (int counter = 0; counter < 73382; counter++) {
        houseLocations[counter] = scanner.nextInt();
    }
    final int[] uniqueHouseLocationsSorted = uniqueHouseLocationsSorted(houseLocations);
    final Random random = new Random(0); // fixed seed to have the same sequences in all tests
    long sum = 0;

    // warm up
    for (int i = 0; i < 100; i++) {
        final int transmitterRange = random.nextInt(70000) + 1;
        final int minNumOfTransmitters = minNumOfTransmitters(uniqueHouseLocationsSorted, transmitterRange);
        sum += minNumOfTransmitters;
    }

    // actual measure
    final long start = System.currentTimeMillis();
    for (int i = 0; i < 4000; i++) {
        final int transmitterRange = random.nextInt(70000) + 1;
        final int minNumOfTransmitters = minNumOfTransmitters(uniqueHouseLocationsSorted, transmitterRange);
        sum += minNumOfTransmitters;
    }
    final long end = System.currentTimeMillis();
    System.out.println("Took: " + (end - start) + " milliseconds. Sum = " + sum);
}

Note also that I removed all System.out.println calls from findNextTowerIndex and nextHouseNotCoveredIndex, and the uniqueHouseLocationsSorted call from minNumOfTransmitters, as they affect performance testing as well.

So what I think is important here:

  1. Move all I/O and sorting out of the measurement loop
  2. Perform some warm up outside of measurement
  3. Use the same random sequence for all measurements
  4. Don't discard the result of the calculation, so the JIT can't optimize the call away altogether

With such a test I see about a 10x difference on my machine: around 80ms vs around 8ms.

And if you really want to do performance tests in Java, you should consider using JMH, the Java Microbenchmark Harness.
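For reference, a minimal sketch of what a JMH benchmark for this problem might look like, assuming the HackerlandRadioTransmitters helpers shown earlier (including the boolean flag that selects linear vs. binary search) are on the classpath and the input file path is adjusted to your system:

import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class TransmittersBenchmark {

    private int[] uniqueHouseLocationsSorted;
    private final int transmitterRange = 73381;

    @Setup
    public void loadInput() throws FileNotFoundException {
        // All I/O and sorting happens once, outside the measured methods.
        final Scanner scanner = new Scanner(new File("input.txt"));
        int[] houseLocations = new int[73382];
        for (int i = 0; i < 73382; i++) {
            houseLocations[i] = scanner.nextInt();
        }
        scanner.close();
        uniqueHouseLocationsSorted = HackerlandRadioTransmitters.uniqueHouseLocationsSorted(houseLocations);
    }

    @Benchmark
    public int linearSearch() {
        // Returning the result lets JMH consume it, so the JIT cannot remove the call.
        return HackerlandRadioTransmitters.minNumOfTransmitters(uniqueHouseLocationsSorted, transmitterRange, false);
    }

    @Benchmark
    public int binarySearch() {
        return HackerlandRadioTransmitters.minNumOfTransmitters(uniqueHouseLocationsSorted, transmitterRange, true);
    }
}

JMH takes care of warm-up iterations, forking and result consumption for you, which covers points 2 and 4 from the list above.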