How does this program work?

That's because %d expects an int but you've provided a float.

Use %e/%f/%g to print the float.
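The original program isn't shown here, but assuming it passed a float such as 1234.5f to printf with %d, a minimal sketch of the corrected call (the variable name is just for illustration) would be:

#include <stdio.h>

int main(void)
{
    float a = 1234.5f;      /* hypothetical value, matching the discussion below */

    /* %f, %e and %g all match the double that the float is promoted to */
    printf("%f\n", a);      /* prints 1234.500000 */
    printf("%e\n", a);      /* prints 1.234500e+03 */
    printf("%g\n", a);      /* prints 1234.5 */
    return 0;
}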


On why 0 is printed: The floating-point number is converted to double before being passed to printf. The number 1234.5 in double representation, in little-endian byte order, is

00 00 00 00  00 4A 93 40

A %d consumes a 32-bit integer, so a zero is printed. (As a test, you could call printf("%d, %d\n", 1234.5f); and might get 0, 1083394560 as output.)
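A self-contained version of that test might look like the following; keep in mind that a mismatched conversion specifier is formally undefined behavior, so the exact output depends on the implementation:

#include <stdio.h>

int main(void)
{
    /* 1234.5f is promoted to double; each %d then reads 32 bits, so on a
       little-endian system the all-zero low half may be consumed first. */
    printf("%d, %d\n", 1234.5f);    /* might print: 0, 1083394560 */
    return 0;
}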


As for why the float is converted to double: since the prototype of printf is int printf(const char *, ...), from 6.5.2.2/7,

The ellipsis notation in a function prototype declarator causes argument type conversion to stop after the last declared parameter. The default argument promotions are performed on trailing arguments.

and from 6.5.2.2/6,

If the expression that denotes the called function has a type that does not include a prototype, the integer promotions are performed on each argument, and arguments that have type float are promoted to double. These are called the default argument promotions.

(Thanks Alok for finding this out.)
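One way to watch the default argument promotions happen is to write your own variadic function and retrieve the argument as a double even though the caller passes a float; this is only a sketch of the language rule, not of how any particular printf is implemented:

#include <stdarg.h>
#include <stdio.h>

/* There is no float case for va_arg, because a float argument can never
   reach a variadic function unpromoted. */
static void show(const char *label, ...)
{
    va_list ap;
    va_start(ap, label);
    double d = va_arg(ap, double);   /* the float arrives as a double */
    va_end(ap);
    printf("%s: %f\n", label, d);
}

int main(void)
{
    float f = 1234.5f;
    show("promoted float", f);       /* prints: promoted float: 1234.500000 */
    return 0;
}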


Technically speaking, there is no "the" printf: each library implements its own, so trying to study printf's behavior the way you are doing is not going to be of much use. What you can study is the behavior of printf on your system, and for that you should read the documentation and look at the source code for printf if it is available for your library.

For example, on my Macbook, I get the output 1606416304 with your program.

Having said that, when you pass a float to a variadic function, the float is passed as a double. So, your program is equivalent to having declared a as a double.

To examine the bytes of a double, you can see this answer to a recent question here on SO.

Let's do that:

#include <stdio.h>

int main(void)
{
    double a = 1234.5f;
    unsigned char *p = (unsigned char *)&a;
    size_t i;

    printf("size of double: %zu, int: %zu\n", sizeof(double), sizeof(int));
    for (i = 0; i < sizeof a; ++i)
        printf("%02x ", p[i]);
    putchar('\n');
    return 0;
}

When I run the above program, I get:

size of double: 8, int: 4
00 00 00 00 00 4a 93 40 

So, the first four bytes of the double turned out to be 0, which may be why you got 0 as the output of your printf call.

For more interesting results, we can change the program a bit:

#include <stdio.h>

int main(void)
{
    double a = 1234.5f;
    int b = 42;

    printf("%d %d\n", a, b);
    return 0;
}

When I run the above program on my Macbook, I get:

42 1606416384

With the same program on a Linux machine, I get:

0 1083394560


The %d specifier tells printf to expect an integer, so the first four (or two, depending on the platform) bytes of the float are interpreted as an integer. If they happen to be zero, a zero is printed.

The binary representation of 1234.5 is something like

1.00110100101 * 2^10 (the exponent is written in decimal)
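To spell that out: 1234 = 1024 + 128 + 64 + 16 + 2, which is 10011010010 in binary, and the fractional .5 is 2^-1, giving 10011010010.1; moving the binary point ten places to the left gives the normalized form 1.00110100101 * 2^10.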

With a C compiler that actually represents float values as IEEE 754 doubles, the bytes would be (if I have made no mistake)

01000000 10010011 01001010 00000000 00000000 00000000 00000000 00000000

On an Intel (x86) system with little endianness (i.e., the least significant byte comes first), this byte sequence is reversed in memory, so the first four bytes are zero. That is what printf prints out.
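To check the bit pattern yourself, you can copy the double's bits into a 64-bit integer and print the sign, exponent and mantissa fields; this is a sketch assuming a 64-bit IEEE 754 double and the availability of uint64_t:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    double a = 1234.5;
    uint64_t bits;
    memcpy(&bits, &a, sizeof bits);               /* reinterpret the 8 bytes */

    unsigned sign     = (unsigned)(bits >> 63);
    unsigned exponent = (unsigned)((bits >> 52) & 0x7FF);   /* biased by 1023 */
    uint64_t mantissa = bits & 0xFFFFFFFFFFFFFULL;          /* 52 fraction bits */

    printf("sign: %u, exponent: %u (unbiased %d), mantissa: %013llx\n",
           sign, exponent, (int)exponent - 1023,
           (unsigned long long)mantissa);
    /* expected: sign: 0, exponent: 1033 (unbiased 10), mantissa: 34a0000000000 */
    return 0;
}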

See this Wikipedia article on floating-point representation according to IEEE 754.