In C, why is "signed int" faster than "unsigned int"?


Your question is genuinely intriguing, as the unsigned version consistently runs 10 to 20% slower. Yet there are multiple problems in the code:

  • Both functions return 0 for 2, 3, 5 and 7, which is incorrect.
  • The test if (i != num) return 0; else return 1; is useless, as the loop body only runs for i < num. Such a test would be useful for the small prime tests, but special-casing them is not really worthwhile.
  • The casts in the unsigned version are redundant.
  • Benchmarking code that produces textual output to the terminal is unreliable; use the clock() function to time CPU-intensive functions without any intervening I/O.
  • The algorithm for prime testing is utterly inefficient, as the loop runs num / 2 times instead of sqrt(num) times.

Let's simplify the code and run some precise benchmarks:

#include <stdio.h>
#include <time.h>

int isprime_slow(int num) {
    if (num % 2 == 0)
        return num == 2;
    for (int i = 3; i < num; i += 2) {
        if (num % i == 0)
            return 0;
    }
    return 1;
}

int unsigned_isprime_slow(unsigned int num) {
    if (num % 2 == 0)
        return num == 2;
    for (unsigned int i = 3; i < num; i += 2) {
        if (num % i == 0)
            return 0;
    }
    return 1;
}

int isprime_fast(int num) {
    if (num % 2 == 0)
        return num == 2;
    for (int i = 3; i * i <= num; i += 2) {
        if (num % i == 0)
            return 0;
    }
    return 1;
}

int unsigned_isprime_fast(unsigned int num) {
    if (num % 2 == 0)
        return num == 2;
    for (unsigned int i = 3; i * i <= num; i += 2) {
        if (num % i == 0)
            return 0;
    }
    return 1;
}

int main(void) {
    int a[] = {
        294967291, 0, 294367293, 0, 294967293, 0, 294967241, 1, 294967251, 0,
        294965291, 0, 294966291, 0, 294963293, 0, 294927293, 1, 294961293, 0,
        294917293, 0, 294167293, 0, 294267293, 0, 294367293, 0, 294467293, 0,
    };
    struct testcase { int (*fun)(); const char *name; int t; } test[] = {
        { isprime_slow, "isprime_slow", 0 },
        { unsigned_isprime_slow, "unsigned_isprime_slow", 0 },
        { isprime_fast, "isprime_fast", 0 },
        { unsigned_isprime_fast, "unsigned_isprime_fast", 0 },
    };
    for (int n = 0; n < 4; n++) {
        clock_t t = clock();
        for (int i = 0; i < 30; i += 2) {
            if (test[n].fun(a[i]) != a[i + 1]) {
                printf("%s(%d) != %d\n", test[n].name, a[i], a[i + 1]);
            }
        }
        test[n].t = clock() - t;
    }
    for (int n = 0; n < 4; n++) {
        printf("%21s: %4d.%03dms\n", test[n].name, test[n].t / 1000, test[n].t % 1000);
    }
    return 0;
}

The code compiled with clang -O2 on OS X produces this output:

         isprime_slow:  788.004ms
unsigned_isprime_slow:  965.381ms
         isprime_fast:    0.065ms
unsigned_isprime_fast:    0.089ms

These timings are consistent with the OP's observed behavior on a different system, but show the dramatic improvement caused by the more efficient iteration test: 10000 times faster!

Regarding the question Why is the function slower with unsigned?, let's look at the generated code (gcc 7.2 -O2):

isprime_slow(int):
        ...
.L5:
        movl    %edi, %eax
        cltd
        idivl   %ecx
        testl   %edx, %edx
        je      .L1
.L4:
        addl    $2, %ecx
        cmpl    %esi, %ecx
        jne     .L5
.L6:
        movl    $1, %edx
.L1:
        movl    %edx, %eax
        ret
unsigned_isprime_slow(unsigned int):
        ...
.L19:
        xorl    %edx, %edx
        movl    %edi, %eax
        divl    %ecx
        testl   %edx, %edx
        je      .L22
.L18:
        addl    $2, %ecx
        cmpl    %esi, %ecx
        jne     .L19
.L20:
        movl    $1, %eax
        ret
        ...
.L22:
        xorl    %eax, %eax
        ret

The inner loops are very similar: the same number of instructions, with similar operations. Here, however, are some potential explanations:

  • cltd extends the sign of the eax register into the edx register, which may cause an instruction delay because eax is modified by the immediately preceding instruction movl %edi, %eax. Yet this would make the signed version slower than the unsigned one, not faster.
  • The loops' initial instructions might be misaligned for the unsigned version, but this is unlikely, as changing the order in the source code has no effect on the timings.
  • Although the register contents are identical for the signed and unsigned division opcodes, it is possible that the idivl instruction takes fewer cycles than the divl instruction. Indeed, the signed division operates on one less bit of precision than the unsigned division, but the difference seems quite large for this small change.
  • I suspect more effort was put into the silicon implementation of idivl because signed divisions are more common than unsigned divisions (as measured by years of coding statistics at Intel).
  • As commented by rcgldr, looking at instruction tables for Intel processors, for Ivy Bridge, 32-bit DIV takes 10 micro-ops and 19 to 27 cycles, while IDIV takes 9 micro-ops and 19 to 26 cycles. The benchmark times are consistent with these timings. The extra micro-op may be due to the longer operands in DIV (64/32 bits) as opposed to IDIV (63/31 bits).

This surprising result should teach us a few lessons:

  • Optimizing is a difficult art; be humble and procrastinate.
  • Correctness is often broken by optimizations.
  • Choosing a better algorithm beats optimization by a long shot.
  • Always benchmark code; do not trust your instincts.


Because signed integer overflow is undefined, the compiler can make a lot of assumptions and optimizations on code involving signed integers. Unsigned integer overflow is defined to wrap around, so the compiler won't be able to optimize as much. See also http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html#signed_overflow and http://www.airs.com/blog/archives/120.


From the instruction specifications for AMD/Intel we have (for the K7):

Instruction  Ops  Latency  Throughput
DIV r32/m32  32   24       23
IDIV r32     81   41       41
IDIV m32     89   41       41

For the i7, latency and throughput are the same for IDIVL and DIVL; a slight difference exists in the µops.

This may explain the difference, as the -O3 assembly code only differs by signedness (DIVL vs IDIVL) on my machine.