Who decides the sizeof any datatype or structure (depending on 32 bit or 64 bit)?


It's ultimately the compiler. The compiler implementors can decide to emulate whatever integer size they see fit, regardless of what the CPU handles most efficiently. That said, the C (and C++) standard is written such that the compiler implementor is free to choose the fastest and most efficient representation. For many compilers, the implementers chose to keep int at 32 bits, even though the CPU natively handles 64 bit ints very efficiently.

I think this was done in part to increase portability with programs written when 32 bit machines were the most common, which expected an int to be 32 bits and no more. (It could also be, as user3386109 points out, that 32 bit data was preferred because it takes less space and can therefore be accessed faster.)

So if you want to make sure you get 64 bit ints, you use int64_t instead of int to declare your variable. If you know your value fits inside 32 bits, or you don't care about its size, you use int and let the compiler pick the most efficient representation.
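For illustration, here's a minimal sketch using the fixed-width types from `<stdint.h>`; the value printed for `sizeof(int)` depends on the compiler and the target it builds for:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int     n   = 0;  /* size is implementation-defined: at least 16 bits, commonly 32 */
    int32_t n32 = 0;  /* exactly 32 bits wherever the type exists */
    int64_t n64 = 0;  /* exactly 64 bits wherever the type exists */

    printf("sizeof(int)     = %zu\n", sizeof n);
    printf("sizeof(int32_t) = %zu\n", sizeof n32);
    printf("sizeof(int64_t) = %zu\n", sizeof n64);
    return 0;
}
```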

As for other data types such as structs, their sizes are derived from the base types such as int that they are composed of, plus any padding the compiler inserts for alignment.
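As a small sketch of how that plays out (the exact padding is ABI-dependent, so the numbers in the comments are typical, not guaranteed):

```c
#include <stdint.h>
#include <stdio.h>

/* The struct's size follows from its members' sizes and alignment
 * requirements; the compiler may insert padding between members. */
struct example {
    char    c;  /* 1 byte                                        */
    int64_t n;  /* 8 bytes, usually requiring 8-byte alignment,
                   so the compiler typically pads after 'c'      */
};

int main(void)
{
    /* On a typical 64-bit ABI this prints 16, not 9. */
    printf("sizeof(struct example) = %zu\n", sizeof(struct example));
    return 0;
}
```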


It's not the CPU, nor the compiler, nor the operating system. It's all three at the same time.

The compiler can't just make things up. It has to adhere to the ABI[1] that the operating system provides. If the structs and system calls provided by the operating system have types with certain sizes and alignment requirements, the compiler isn't really free to make up its own reality unless the compiler developers want to reimplement wrapper functions for everything the operating system provides. And the ABI of the operating system can't just be completely made up either; it has to do what can reasonably be done on the CPU. Very often the ABI of one operating system will be very similar to the ABIs of other operating systems on the same CPU, because it's easier to reuse the work that has already been done (on compilers, among other things).

In the case of computers that support both 32 bit and 64 bit code, there is still work the operating system has to do to support running programs in both modes (because the system has to provide two different ABIs). Some operating systems don't do that, and on those you don't have a choice.

[1] ABI stands for Application Binary Interface. It's a set of rules for how a program interacts with the operating system: how a program is stored on disk so the operating system can run it, how to do system calls, how to link with libraries, etc. To be able to link to a library, for example, your program and the library have to agree on how to make function calls between your program and the library (and vice versa), and to be able to make function calls both the program and the library have to have the same idea of stack layout, register usage, function call conventions, etc. And for function calls you need to agree on what the parameters mean, which includes the sizes, alignment, and signedness of types.
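To make the ABI's influence concrete, here is a small sketch: the same source prints different numbers depending on which ABI (data model) it is compiled for. The flags and typical results in the comments assume gcc/clang on x86 and are illustrative, not exhaustive:

```c
#include <stdio.h>

/* The same source prints different numbers depending on the ABI it is
 * compiled for, e.g. gcc -m32 (ILP32) vs gcc -m64 (LP64) on x86:
 *   ILP32 (32-bit Linux/Windows): int=4, long=4, void*=4
 *   LP64  (64-bit Linux/macOS):   int=4, long=8, void*=8
 *   LLP64 (64-bit Windows):       int=4, long=4, void*=8
 */
int main(void)
{
    printf("sizeof(int)    = %zu\n", sizeof(int));
    printf("sizeof(long)   = %zu\n", sizeof(long));
    printf("sizeof(void *) = %zu\n", sizeof(void *));
    return 0;
}
```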


It is strictly, 100%, entirely the compiler that decides the value of sizeof(int). It is not a combination of the system and the compiler. It is just the compiler (and the C/C++ language specifications).

If you develop iPad or iPhone apps, the compiler runs on your Mac. The Mac and the iPhone/iPad use different processors. Nothing about your Mac tells the compiler what size should be used for int on the iPad.
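As a sketch of that point, a compile-time check like the following (using C11's _Static_assert) is evaluated against the target the compiler is building for, not against the machine the compiler happens to run on. The 32-bit minimum is just an assumption made for this example:

```c
#include <limits.h>

/* Evaluated for the *target* being compiled for (e.g. the iPhone when
 * cross-compiling on a Mac), not for the host running the compiler.
 * The 32-bit minimum here is only an example assumption. */
_Static_assert(sizeof(int) * CHAR_BIT >= 32,
               "this code assumes int is at least 32 bits wide");

int main(void)
{
    return 0;
}
```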