
Regarding integer memory allocation


If you are using fixed-size integer types (like int8_t or int16_t), then whether you target a 32- or 64-bit platform doesn't matter much.
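For example, here is a minimal sketch (the variable names are my own) showing that the fixed-width types from <stdint.h> have the same size whatever the target:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Fixed-width types from <stdint.h> have the same size on any
       conforming platform that provides them, 32-bit or 64-bit alike. */
    int8_t  a = 100;         /* exactly 8 bits  */
    int16_t b = 30000;       /* exactly 16 bits */
    int32_t c = 2000000000;  /* exactly 32 bits */

    printf("%zu %zu %zu\n", sizeof a, sizeof b, sizeof c);  /* 1 2 4 */
    return 0;
}
```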

One of the things that does matter is the size of pointers. All pointers are 32 bits when targeting a 32-bit architecture, and 64 bits when targeting a 64-bit architecture.
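You can check this directly with your own toolchain; a tiny sketch (the output naturally depends on the target you compile for):

```c
#include <stdio.h>

int main(void)
{
    /* Typically prints 4 on a 32-bit target and 8 on a 64-bit target. */
    printf("sizeof(void *) = %zu\n", sizeof(void *));
    printf("sizeof(int)    = %zu\n", sizeof(int));
    return 0;
}
```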

It used to be rather common to store pointer values in an int, but this practice is now strongly discouraged for portability reasons, and the 32/64-bit split is a good example of why. If you store a pointer in an int, your code invokes undefined behavior on 64-bit architectures because the pointer gets truncated. When you later extract the value and dereference it, you will likely crash, or (worse) silently proceed with invalid data.
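As a rough illustration (the function name demo is made up), here is what the hazard and the usual fix look like, assuming uintptr_t is available, which it is on practically every hosted implementation:

```c
#include <stdint.h>

void demo(void)
{
    int value = 42;
    int *p = &value;

    /* Risky: on a 64-bit target this conversion truncates the pointer,
       and converting the truncated value back and dereferencing it is
       undefined behavior. */
    /* int as_int = (int)p; */

    /* Safer: uintptr_t is defined to be wide enough to hold any object
       pointer converted to an integer, on either platform. */
    uintptr_t as_uint = (uintptr_t)p;
    int *back = (int *)as_uint;   /* round-trips to the same pointer */
    (void)back;
}
```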


There are several reasons why you have to build different executables for 32-bit and 64-bit machines. The size of an int may or may not be one of them, since the C standard only defines minimum and relative sizes; as far as I know there is no maximum size of an int (provided it is no longer than a long).

The size of a pointer is a major difference. The compiler and linker produce a different executable layout for 32-bit and 64-bit process address spaces. The runtime libraries are different, and dynamically linked libraries (shared objects on UNIX) have to use the same pointer size, otherwise they cannot interact with the rest of the process.

Why use 64-bit? What is the advantage of 64-bit over 32-bit? The main advantage is the maximum size of a pointer, and hence of the process address space: 2^32 bytes (4 GiB) on 32-bit versus 2^64 bytes (16 EiB, roughly 16 million terabytes) on 64-bit.
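If you want to see the difference from inside a program, one rough indicator is SIZE_MAX, the largest object size the implementation can represent (a sketch only; the exact value is implementation-defined):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Roughly 4 * 10^9 on a typical 32-bit target,
       roughly 1.8 * 10^19 on a typical 64-bit target. */
    printf("SIZE_MAX = %zu\n", (size_t)SIZE_MAX);
    return 0;
}
```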


If you have a look at this page, you can see that the basic types in C have certain guaranteed minimum sizes. So you will not find a compliant C implementation where int is 2 bits; it has to be at least 16.
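Assuming a C11 compiler (for _Static_assert), you can even express those guaranteed minimums as compile-time checks; a small sketch:

```c
#include <limits.h>

/* The standard's guaranteed minimums: a real implementation may provide
   wider types, but never narrower, so these checks always pass. */
_Static_assert(CHAR_BIT >= 8,           "char has at least 8 bits");
_Static_assert(INT_MAX >= 32767,        "int covers at least 16 bits");
_Static_assert(LONG_MAX >= 2147483647L, "long covers at least 32 bits");
```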

Differences between platforms are what make porting software the interesting challenge it often is.

If you have code that assumes things about basic data types that are not guaranteed to be true (for instance, code that does something like int x = 0xfeedf00d;), then that code will not be portable. It will break, in various and often hard-to-predict ways, when compiled on a platform that doesn't match those assumptions. For instance, on a platform where int is 16 bits, the above code would leave x set to a different value from what the programmer intended.
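One portable way to express that intent, assuming you really do need 32 bits, is to say so explicitly:

```c
#include <stdint.h>

/* 0xfeedf00d needs 32 bits, but a plain int is only guaranteed 16.
   Naming the width makes the assumption explicit and portable. */
uint32_t x = 0xfeedf00dU;
```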