
performance of unsigned vs signed integers


Division by powers of 2 is faster with unsigned int, because it can be optimized into a single shift instruction. With signed int it usually requires more machine instructions: division rounds towards zero, but shifting to the right rounds down (towards negative infinity), so the compiler has to emit a fix-up for negative values. Example:

int foo(int x, unsigned y)
{
    x /= 8;
    y /= 8;
    return x + y;
}

Here is the relevant x part (signed division):

movl    8(%ebp), %eax
leal    7(%eax), %edx
testl   %eax, %eax
cmovs   %edx, %eax
sarl    $3, %eax

And here is the relevant y part (unsigned division):

movl    12(%ebp), %edx
shrl    $3, %edx
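
The extra instructions in the signed case implement the round-towards-zero fix-up. A quick illustration of why a plain shift is not enough (a minimal sketch; note that right-shifting a negative value is implementation-defined in C, though it is an arithmetic shift on typical two's-complement targets):

int d = -7 / 8;    /* 0: C division truncates towards zero         */
int s = -7 >> 3;   /* -1 on typical targets: the shift rounds down */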


In C++ (and C), signed integer overflow is undefined, whereas unsigned integer overflow is defined to wrap around. Note that in gcc, for example, you can use the -fwrapv flag to make signed overflow defined (as wrap-around).

Undefined signed integer overflow allows the compiler to assume that overflows don't happen, which may introduce optimization opportunities. See e.g. this blog post for discussion.
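
As a concrete illustration (a minimal sketch; whether the fold actually happens depends on the compiler and optimization level, and the function names are just for illustration):

int always_true(int x)     { return x + 1 > x; }  /* a compiler may fold this to
                                                     "return 1": signed x + 1 is
                                                     assumed never to overflow  */

int must_check(unsigned x) { return x + 1 > x; }  /* the comparison must remain:
                                                     returns 0 when x == UINT_MAX,
                                                     because wrap-around is defined */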


unsigned leads to the same or better performance than signed. Some examples:

  • Division by a constant which is a power of 2 (see also the answer from FredOverflow)
  • Division by a constant number (for example, my compiler implements division by 13 using 2 asm instructions for unsigned, and 6 instructions for signed); see the sketch after this list
  • Checking whether a number is even (I have no idea why my MS Visual Studio compiler implements it with 4 instructions for signed numbers; gcc does it with 1 instruction, just like in the unsigned case); also covered in the sketch below
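
A minimal sketch of the last two bullet points (the exact instruction counts vary by compiler, target and optimization level; the function names are hypothetical):

unsigned udiv13(unsigned x) { return x / 13; }      /* typically multiply-high + shift */
int      sdiv13(int x)      { return x / 13; }      /* same idea, plus sign fix-ups    */

int is_even_s(int x)        { return x % 2 == 0; }  /* may get extra sign handling     */
int is_even_u(unsigned x)   { return x % 2 == 0; }  /* a single test of the lowest bit */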

short usually leads to the same or worse performance than int (assuming sizeof(short) < sizeof(int)). Performance degradation happens when you assign the result of an arithmetic operation (which has type int, never short, because of integer promotion) to a variable of type short, even one held in a processor register (which is naturally int-sized). All these conversions between short and int take time.
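
For example (a minimal sketch of the integer promotion rule):

short a = 1000, b = 2000;
short c = a + b;   /* a and b are promoted to int, the addition is done in int,
                      and the result is truncated back to short, which may cost
                      an extra sign-extension/truncation instruction           */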

Note: some DSPs have fast multiplication instructions for the signed short type; in this specific case short is faster than int.

As for the difference between int and long, I can only guess (I am not familiar with 64-bit architectures). Of course, if int and long have the same size (as on 32-bit platforms), their performance is also the same.


A very important addition, pointed out by several people:

What really matters for most applications is the memory footprint and utilized bandwidth. You should use the smallest necessary integers (short, maybe even signed/unsigned char) for large arrays.

This will give better performance, but the gain is nonlinear (i.e. not by a factor of 2 or 4) and somewhat unpredictable: it depends on the cache size and the ratio of computation to memory transfers in your application.
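
As a rough illustration (a minimal sketch; the actual speedup depends on your cache hierarchy and is not guaranteed):

#include <stddef.h>

/* Summing a short array moves half as much memory as an int array,
   which often helps when the array does not fit in cache, even though
   each individual addition is no cheaper.                             */
long sum_shorts(const short *a, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; ++i)
        s += a[i];   /* a[i] is promoted to int, then added */
    return s;
}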