Swift Beta performance: sorting arrays


tl;dr: Swift 1.0 is now as fast as C in this benchmark using the default release optimization level [-O].


Here is an in-place quicksort in Swift Beta:

func quicksort_swift(inout a:CInt[], start:Int, end:Int) {
    if (end - start < 2){
        return
    }
    var p = a[start + (end - start)/2]
    var l = start
    var r = end - 1
    while (l <= r){
        if (a[l] < p){
            l += 1
            continue
        }
        if (a[r] > p){
            r -= 1
            continue
        }
        var t = a[l]
        a[l] = a[r]
        a[r] = t
        l += 1
        r -= 1
    }
    quicksort_swift(&a, start, r + 1)
    quicksort_swift(&a, r + 1, end)
}

And the same in C:

void quicksort_c(int *a, int n) {
    if (n < 2)
        return;
    int p = a[n / 2];
    int *l = a;
    int *r = a + n - 1;
    while (l <= r) {
        if (*l < p) {
            l++;
            continue;
        }
        if (*r > p) {
            r--;
            continue;
        }
        int t = *l;
        *l++ = *r;
        *r-- = t;
    }
    quicksort_c(a, r - a + 1);
    quicksort_c(l, a + n - l);
}

Both work:

var a_swift:CInt[] = [0, 5, 2, 8, 1234, -1, 2]
var a_c:CInt[] = [0, 5, 2, 8, 1234, -1, 2]

quicksort_swift(&a_swift, 0, a_swift.count)
quicksort_c(&a_c, CInt(a_c.count))

// [-1, 0, 2, 2, 5, 8, 1234]
// [-1, 0, 2, 2, 5, 8, 1234]

Both are called in the same program as written.

var x_swift = CInt[](count: n, repeatedValue: 0)
var x_c = CInt[](count: n, repeatedValue: 0)
for var i = 0; i < n; ++i {
    x_swift[i] = CInt(random())
    x_c[i] = CInt(random())
}

let swift_start:UInt64 = mach_absolute_time();
quicksort_swift(&x_swift, 0, x_swift.count)
let swift_stop:UInt64 = mach_absolute_time();

let c_start:UInt64 = mach_absolute_time();
quicksort_c(&x_c, CInt(x_c.count))
let c_stop:UInt64 = mach_absolute_time();

This converts the absolute times to seconds:

static const uint64_t NANOS_PER_USEC = 1000ULL;
static const uint64_t NANOS_PER_MSEC = 1000ULL * NANOS_PER_USEC;
static const uint64_t NANOS_PER_SEC = 1000ULL * NANOS_PER_MSEC;

mach_timebase_info_data_t timebase_info;

uint64_t abs_to_nanos(uint64_t abs) {
    if (timebase_info.denom == 0) {
        (void)mach_timebase_info(&timebase_info);
    }
    return abs * timebase_info.numer / timebase_info.denom;
}

double abs_to_seconds(uint64_t abs) {
    return abs_to_nanos(abs) / (double)NANOS_PER_SEC;
}
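
For completeness, the same conversion can also be done directly from Swift; here is a minimal sketch in current Swift syntax, not the code used for the measurements above:

import Darwin

// Minimal sketch: convert mach_absolute_time() ticks to seconds from Swift.
func machAbsoluteToSeconds(_ ticks: UInt64) -> Double {
    var timebase = mach_timebase_info_data_t()
    _ = mach_timebase_info(&timebase)
    let nanos = ticks * UInt64(timebase.numer) / UInt64(timebase.denom)
    return Double(nanos) / 1_000_000_000
}

let start = mach_absolute_time()
// ... work being timed ...
let stop = mach_absolute_time()
print("\(machAbsoluteToSeconds(stop - start))s")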

Here is a summary of the compiler's optimization levels:

[-Onone] no optimizations, the default for debug.
[-O]     perform optimizations, the default for release.
[-Ofast] perform optimizations and disable runtime overflow checks and runtime type checks.
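
These levels are passed straight to swiftc, for example (placeholder file name; note that -Ofast was later replaced by -Ounchecked, as the final update below reflects):

xcrun swiftc -Onone quicksort.swift
xcrun swiftc -O     quicksort.swift
xcrun swiftc -Ofast quicksort.swift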

Time in seconds with [-Onone] for n=10_000:

Swift:            0.895296452
C:                0.001223848

Here is Swift's built-in sort() for n=10_000:

Swift_builtin:    0.77865783
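
(The original Beta call to the built-in sort isn't shown above; in current Swift an equivalent measurement would look roughly like this, with names and setup chosen purely for illustration:)

import Dispatch

// Minimal sketch in current Swift; the original measurement used the
// Beta-era global sort() function and mach_absolute_time.
var numbers = (0..<10_000).map { _ in Int32.random(in: Int32.min...Int32.max) }

let start = DispatchTime.now()
numbers.sort()
let stop = DispatchTime.now()

let seconds = Double(stop.uptimeNanoseconds - start.uptimeNanoseconds) / 1_000_000_000
print("Swift_builtin: \(seconds)s")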

Here is [-O] for n=10_000:

Swift:            0.045478346
C:                0.000784666
Swift_builtin:    0.032513488

As you can see, Swift's performance improved by a factor of 20.

As per mweathers' answer, setting [-Ofast] makes the real difference, resulting in these times for n=10_000:

Swift:            0.000706745
C:                0.000742374
Swift_builtin:    0.000603576

And for n=1_000_000:

Swift:            0.107111846
C:                0.114957179
Swift_sort:       0.092688548

For comparison, this is with [-Onone] for n=1_000_000:

Swift:            142.659763258
C:                0.162065333
Swift_sort:       114.095478272

So Swift with no optimizations was almost 1000x slower than C in this benchmark, at this stage in its development. On the other hand, with both compilers set to [-Ofast], Swift actually performed at least as well as, if not slightly better than, C.

It has been pointed out that [-Ofast] changes the semantics of the language, making it potentially unsafe. This is what Apple states in the Xcode 5.0 release notes:

A new optimization level -Ofast, available in LLVM, enables aggressive optimizations. -Ofast relaxes some conservative restrictions, mostly for floating-point operations, that are safe for most code. It can yield significant high-performance wins from the compiler.

They all but advocate it. Whether that's wise or not I couldn't say, but from what I can tell it seems reasonable enough to use [-Ofast] in a release if you're not doing high-precision floating-point arithmetic and you're confident no integer or array overflows are possible in your program. If you do need high performance along with overflow checks / precise arithmetic, then choose another language for now.

BETA 3 UPDATE:

n=10_000 with [-O]:

Swift:            0.019697268
C:                0.000718064
Swift_sort:       0.002094721

Swift in general is a bit faster, and it looks like Swift's built-in sort has changed quite significantly.

FINAL UPDATE:

[-Onone]:

Swift:   0.678056695
C:       0.000973914

[-O]:

Swift:   0.001158492
C:       0.001192406

[-Ounchecked]:

Swift:   0.000827764
C:       0.001078914


TL;DR: Yes, the only Swift language implementation is slow right now. If you need fast numeric code (and presumably other kinds of code too), just go with another language for now, and re-evaluate the choice later. Swift might already be good enough for most application code written at a higher level, though.

From what I'm seeing in SIL and LLVM IR, it seems like they need a bunch of optimizations for removing retains and releases, which might be implemented in Clang (for Objective-C), but they haven't ported them yet. That's the theory I'm going with (for now… I still need to confirm that Clang does something about it), since a profiler run on the last test case in this question yields this “pretty” result:

[Profiler screenshots: time profiling with -O3 and with -Ofast]
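
(If you want to inspect the SIL and LLVM IR yourself, swiftc can emit both; the file name here is only a placeholder:)

xcrun swiftc -O -emit-sil YourBenchmark.swift -o YourBenchmark.sil
xcrun swiftc -O -emit-ir  YourBenchmark.swift -o YourBenchmark.ll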

As was said by many others, -Ofast is totally unsafe and changes language semantics. For me, it's at the “If you're going to use that, just use another language” stage. I'll re-evaluate that choice later, if it changes.

-O3 gets us a bunch of swift_retain and swift_release calls that, honestly, don't look like they should be there for this example. The optimizer should have elided (most of) them AFAICT, since it knows most of the information about the array, and knows that it has (at least) a strong reference to it.

It shouldn't emit more retains when it's not even calling functions which might release the objects. I don't think an array constructor can return an array which is smaller than what was asked for, which means that a lot of the checks that were emitted are useless. It also knows that the integer will never be above 10k, so the overflow checks can be optimized away (not because of -Ofast weirdness, but because of the semantics of the language: nothing else is changing that var or can access it, and adding up to 10k is safe for the type Int).
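
(As an aside, not something from this example: Swift also lets you skip the overflow check for individual operations, independent of the optimization level, via the wrapping operators &+, &- and &*. A minimal sketch:)

// The wrapping operator &+ wraps around on overflow instead of trapping,
// so no runtime overflow check is emitted for that particular addition.
var checkedSum = 0
var wrappingSum = 0
for i in 0..<10_000 {
    checkedSum += i                  // ordinary + is overflow-checked (traps on overflow)
    wrappingSum = wrappingSum &+ i   // &+ wraps instead, no check
}
print(checkedSum, wrappingSum)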

The compiler might not be able to unbox the array or the array elements, though, since they're getting passed to sort(), which is an external function and has to get the arguments it's expecting. That means we have to use the Int values indirectly, which makes it a bit slower. This could change if the sort() generic function (not in the multi-method way) were available to the compiler and got inlined.

This is a very new language (publicly, at least), and I assume it is going through lots of changes: the people (heavily) involved with Swift are asking for feedback, and they all say the language isn't finished and will change.

Code used:

import Cocoa

let swift_start = NSDate.timeIntervalSinceReferenceDate();
let n: Int = 10000
let x = Int[](count: n, repeatedValue: 1)
for i in 0..n {
    for j in 0..n {
        let tmp: Int = x[j]
        x[i] = tmp
    }
}
let y: Int[] = sort(x)
let swift_stop = NSDate.timeIntervalSinceReferenceDate();

println("\(swift_stop - swift_start)s")

P.S.: I'm not an expert on Objective-C, nor on all the facilities of the Cocoa, Objective-C, or Swift runtimes. I might also be assuming some things that I didn't write down.


I decided to take a look at this for fun, and here are the timings that I get:

Swift 4.0.2           :   0.83s (0.74s with `-Ounchecked`)
C++ (Apple LLVM 8.0.0):   0.74s

Swift

// Swift 4.0 code
import Foundation

func doTest() -> Void {
    let arraySize = 10000000
    var randomNumbers = [UInt32]()

    for _ in 0..<arraySize {
        randomNumbers.append(arc4random_uniform(UInt32(arraySize)))
    }

    let start = Date()
    randomNumbers.sort()
    let end = Date()

    print(randomNumbers[0])
    print("Elapsed time: \(end.timeIntervalSince(start))")
}

doTest()

Results:

Swift 1.1

xcrun swiftc --version
Swift version 1.1 (swift-600.0.54.20)
Target: x86_64-apple-darwin14.0.0

xcrun swiftc -O SwiftSort.swift
./SwiftSort
Elapsed time: 1.02204304933548

Swift 1.2

xcrun swiftc --version
Apple Swift version 1.2 (swiftlang-602.0.49.6 clang-602.0.49)
Target: x86_64-apple-darwin14.3.0

xcrun -sdk macosx swiftc -O SwiftSort.swift
./SwiftSort
Elapsed time: 0.738763988018036

Swift 2.0

xcrun swiftc --version
Apple Swift version 2.0 (swiftlang-700.0.59 clang-700.0.72)
Target: x86_64-apple-darwin15.0.0

xcrun -sdk macosx swiftc -O SwiftSort.swift
./SwiftSort
Elapsed time: 0.767306983470917

It seems to be the same performance if I compile with -Ounchecked.

Swift 3.0

xcrun swiftc --version
Apple Swift version 3.0 (swiftlang-800.0.46.2 clang-800.0.38)
Target: x86_64-apple-macosx10.9

xcrun -sdk macosx swiftc -O SwiftSort.swift
./SwiftSort
Elapsed time: 0.939633965492249

xcrun -sdk macosx swiftc -Ounchecked SwiftSort.swift
./SwiftSort
Elapsed time: 0.866258025169373

There seems to have been a performance regression from Swift 2.0 to Swift 3.0, and I'm also seeing a difference between -O and -Ounchecked for the first time.

Swift 4.0

xcrun swiftc --version
Apple Swift version 4.0.2 (swiftlang-900.0.69.2 clang-900.0.38)
Target: x86_64-apple-macosx10.9

xcrun -sdk macosx swiftc -O SwiftSort.swift
./SwiftSort
Elapsed time: 0.834299981594086

xcrun -sdk macosx swiftc -Ounchecked SwiftSort.swift
./SwiftSort
Elapsed time: 0.742045998573303

Swift 4 improves the performance again, while maintaining a gap between -O and -Ounchecked. -O -whole-module-optimization did not appear to make a difference.
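
(For reference, the whole-module-optimization run was presumably invoked along these lines:)

xcrun -sdk macosx swiftc -O -whole-module-optimization SwiftSort.swift
./SwiftSort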

C++

#include <algorithm>   // for std::sort
#include <chrono>
#include <iostream>
#include <vector>
#include <cstdint>
#include <stdlib.h>

using namespace std;
using namespace std::chrono;

int main(int argc, const char * argv[]) {
    const auto arraySize = 10000000;
    vector<uint32_t> randomNumbers;

    for (int i = 0; i < arraySize; ++i) {
        randomNumbers.emplace_back(arc4random_uniform(arraySize));
    }

    const auto start = high_resolution_clock::now();
    sort(begin(randomNumbers), end(randomNumbers));
    const auto end = high_resolution_clock::now();

    cout << randomNumbers[0] << "\n";
    cout << "Elapsed time: " << duration_cast<duration<double>>(end - start).count() << "\n";

    return 0;
}

Results:

Apple Clang 6.0

clang++ --version
Apple LLVM version 6.0 (clang-600.0.54) (based on LLVM 3.5svn)
Target: x86_64-apple-darwin14.0.0
Thread model: posix

clang++ -O3 -std=c++11 CppSort.cpp -o CppSort
./CppSort
Elapsed time: 0.688969

Apple Clang 6.1.0

clang++ --version
Apple LLVM version 6.1.0 (clang-602.0.49) (based on LLVM 3.6.0svn)
Target: x86_64-apple-darwin14.3.0
Thread model: posix

clang++ -O3 -std=c++11 CppSort.cpp -o CppSort
./CppSort
Elapsed time: 0.670652

Apple Clang 7.0.0

clang++ --version
Apple LLVM version 7.0.0 (clang-700.0.72)
Target: x86_64-apple-darwin15.0.0
Thread model: posix

clang++ -O3 -std=c++11 CppSort.cpp -o CppSort
./CppSort
Elapsed time: 0.690152

Apple Clang 8.0.0

clang++ --version
Apple LLVM version 8.0.0 (clang-800.0.38)
Target: x86_64-apple-darwin15.6.0
Thread model: posix

clang++ -O3 -std=c++11 CppSort.cpp -o CppSort
./CppSort
Elapsed time: 0.68253

Apple Clang 9.0.0

clang++ --version
Apple LLVM version 9.0.0 (clang-900.0.38)
Target: x86_64-apple-darwin16.7.0
Thread model: posix

clang++ -O3 -std=c++11 CppSort.cpp -o CppSort
./CppSort
Elapsed time: 0.736784

Verdict

At the time of this writing, Swift's sort is fast, but not yet as fast as C++'s sort when compiled with -O, with the above compilers & libraries. With -Ounchecked, it appears to be as fast as C++ in Swift 4.0.2 and Apple LLVM 9.0.0.