Difference between uint8_t, uint_fast8_t and uint_least8_t


uint_least8_t is the smallest type that has at least 8 bits.
uint_fast8_t is the fastest type that has at least 8 bits.

You can see the differences by imagining exotic architectures. Imagine a 20-bit architecture. Its unsigned int has 20 bits (one register), and its unsigned char has 10 bits. So sizeof(int) == 2, but using char types requires extra instructions to cut the registers in half. Then:

  • uint8_t: is undefined (there is no 8-bit type).
  • uint_least8_t: is unsigned char, the smallest type that is at least 8 bits.
  • uint_fast8_t: is unsigned int, because in my imaginary architecture, a half-register variable is slower than a full-register one.
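
To make that concrete, here is a sketch of what such a platform's <stdint.h> could contain (the typedef targets are hypothetical and simply restate the bullets above):

    /* Hypothetical <stdint.h> excerpt for the imaginary 20-bit architecture.
     * This is an illustration, not any real platform's header. */

    /* uint8_t: not defined at all, because no type is exactly 8 bits wide here. */

    typedef unsigned char uint_least8_t;  /* 10 bits: smallest type with >= 8 bits */
    typedef unsigned int  uint_fast8_t;   /* 20 bits: a full register, no masking  */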


uint8_t means: give me an unsigned integer type of exactly 8 bits.

uint_least8_t means: give me the smallest unsigned integer type that has at least 8 bits. Optimize for memory consumption.

uint_fast8_t means: give me an unsigned integer type of at least 8 bits. Pick a larger type if it will make my program faster, because of alignment considerations. Optimize for speed.
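
As a quick sanity check (a minimal sketch; the printed sizes depend entirely on your implementation), you can ask the compiler what it actually chose:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* On a typical desktop platform all three are probably 1 byte,
         * but only uint8_t is required to be exactly 8 bits wide. */
        printf("uint8_t:       %zu byte(s)\n", sizeof(uint8_t));
        printf("uint_least8_t: %zu byte(s)\n", sizeof(uint_least8_t));
        printf("uint_fast8_t:  %zu byte(s)\n", sizeof(uint_fast8_t));
        return 0;
    }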

Also, unlike the plain int types, the signed exact-width types from stdint.h (int8_t, int16_t, and so on) are guaranteed to use two's complement representation.
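
For example (a tiny sketch; _Static_assert needs C11), that guarantee pins down the range of int8_t on every implementation that provides it:

    #include <stdint.h>

    /* int8_t, where it exists, must be two's complement with no padding bits,
     * so its range is exactly -128..127 on every conforming implementation. */
    _Static_assert(INT8_MIN == -128 && INT8_MAX == 127,
                   "int8_t is an 8-bit two's complement type");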


The theory goes something like:

uint8_t is required to be exactly 8 bits, but it's not required to exist. So you should use it where you are relying on the modulo-256 assignment behaviour* of an 8-bit integer, and where you would prefer a compile failure to misbehaviour on obscure architectures.
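
For instance (a minimal sketch of that assignment behaviour), storing an out-of-range value into a uint8_t is well defined and simply wraps modulo 256:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t x = 255;
        x = x + 1;                     /* conversion on assignment: x wraps to 0 */

        unsigned v = 300;
        uint8_t y = (uint8_t)v;        /* 300 % 256 == 44 */

        printf("x = %u, y = %u\n", (unsigned)x, (unsigned)y);
        return 0;
    }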

uint_least8_t is required to be the smallest available unsigned integer type that can store at least 8 bits. You would use it when you want to minimise the memory use of things like large arrays.

uint_fast8_t is supposed to be the "fastest" unsigned type that can store at least 8 bits; however, it's not actually guaranteed to be the fastest for any given operation on any given processor. You would use it in processing code that performs lots of operations on the value.
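
A minimal sketch contrasting the two intended uses (the array and function here are invented purely for illustration):

    #include <stddef.h>
    #include <stdint.h>

    /* Storage: a large table where footprint matters, so use the "least" type. */
    static uint_least8_t pixels[1u << 20];

    /* Processing: a hot loop where per-operation cost matters, so "fast" types. */
    uint_fast32_t sum_pixels(void)
    {
        uint_fast32_t sum = 0;
        for (size_t i = 0; i < sizeof pixels / sizeof pixels[0]; ++i)
            sum += pixels[i];
        return sum;
    }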

The practice is that the "fast" and "least" types aren't used much.

The "least" types are only really useful if you care about portability to obscure architectures with CHAR_BIT != 8 which most people don't.

The problem with the "fast" types is that "fastest" is hard to pin down. A smaller type may mean less load on the memory/cache system, but using a type that is smaller than native may require extra instructions. Furthermore, which type is best may change between architecture versions, but implementers often want to avoid breaking the ABI in such cases.

From looking at some popular implementations, it seems that the definitions of uint_fastn_t are fairly arbitrary. glibc seems to define them as being at least the "native word size" of the system in question, taking no account of the fact that many modern processors (especially 64-bit ones) have specific support for fast operations on items smaller than their native word size. iOS apparently defines them as equivalent to the fixed-width types. Other platforms may vary.
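
You can see what your own implementation picked by printing the sizes of the "fast" types (a small sketch; the output varies by platform, and x86-64 glibc, for example, typically reports 8 bytes for uint_fast16_t and uint_fast32_t):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        printf("uint_fast8_t : %zu\n", sizeof(uint_fast8_t));
        printf("uint_fast16_t: %zu\n", sizeof(uint_fast16_t));
        printf("uint_fast32_t: %zu\n", sizeof(uint_fast32_t));
        printf("uint_fast64_t: %zu\n", sizeof(uint_fast64_t));
        return 0;
    }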

All in all, if performance of tight code with tiny integers is your goal, you should benchmark your code on the platforms you care about with different-sized types to see what works best.

* Note that unfortunately modulo-256 assignment behaviour does not always imply modulo-256 arithmetic, thanks to C's integer promotion misfeature.
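
A small sketch of that pitfall: both operands of + are promoted to int before the arithmetic, so the wraparound only reappears once the result is converted back to uint8_t:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t a = 200, b = 100;

        /* a and b are promoted to int, so a + b evaluates to 300, not 44. */
        if (a + b > 255)
            puts("promoted: a + b is 300");

        uint8_t c = a + b;             /* conversion back to uint8_t: c == 44 */
        printf("c = %u\n", (unsigned)c);
        return 0;
    }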