
Why does malloc initialize the values to 0 in gcc?


Short Answer:

It doesn't; it just happens to be zero in your case.
(Also, your test case doesn't show that all the data is zero. It only shows that one element is zero.)
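
As a quick illustration - this is my own sketch, not your original test - checking every byte of the allocation, rather than a single element, is the only way to see whether the whole block is really zero:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t n = 100 * 1024 * 1024;        /* a large allocation */
        unsigned char *p = malloc(n);
        if (p == NULL)
            return 1;

        /* Check every byte, not just p[0]. */
        size_t nonzero = 0;
        for (size_t i = 0; i < n; i++)
            if (p[i] != 0)
                nonzero++;

        printf("%zu of %zu bytes are nonzero\n", nonzero, n);
        free(p);
        return 0;
    }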


Long Answer:

When you call malloc(), one of two things will happen:

  1. It recycles memory that was previously allocated and freed by the same process.
  2. It requests new page(s) from the operating system.

In the first case, the memory will contain data left over from previous allocations, so it won't be zero. This is the usual case when performing small allocations.

In the second case, the memory will come from the OS. This happens when the allocator runs out of memory to recycle - or when you request a very large allocation (as is the case in your example).

Here's the catch: Memory coming from the OS will be zeroed for security reasons.*

When the OS gives you memory, it could have been freed by a different process, so it could contain sensitive information such as a password. To prevent you from reading such data, the OS zeroes it before giving it to you.

*I note that the C standard says nothing about this. This is strictly an OS behavior. So this zeroing may or may not be present on systems where security is not a concern.
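
To see both cases side by side, here is a small sketch of my own (assuming a typical glibc/Linux system; reading memory you never wrote is not something to rely on - it is done here only to observe the allocator's behavior):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* Case 1: a small allocation that gets recycled.
         * NOTE: reading memory you never wrote is unspecified; this is
         * only to observe typical glibc behavior, not something to rely on. */
        unsigned char *a = malloc(64);
        if (!a) return 1;
        memset(a, 0xAB, 64);
        free(a);

        unsigned char *b = malloc(64);   /* often the very same block */
        if (!b) return 1;
        int leftover = 0;
        for (int i = 0; i < 64; i++)
            if (b[i] == 0xAB)
                leftover++;
        printf("recycled block: %d of 64 bytes still hold the old pattern\n", leftover);

        /* Case 2: a very large allocation is served by fresh pages from
         * the OS, which zeroes them before handing them to the process. */
        size_t big_size = (size_t)256 * 1024 * 1024;
        unsigned char *big = malloc(big_size);
        if (!big) return 1;
        int nonzero = 0;
        for (size_t i = 0; i < big_size; i += 4096)   /* sample one byte per page */
            if (big[i] != 0)
                nonzero++;
        printf("fresh pages: %d sampled bytes are nonzero\n", nonzero);

        free(b);
        free(big);
        return 0;
    }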


To give more of a performance background to this:

As @R. mentions in the comments, this zeroing is why you should always use calloc() instead of malloc() + memset(). calloc() can take advantage of this fact to avoid a separate memset().
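
A minimal sketch of that difference (my own example; the buffer size and function names are arbitrary):

    #include <stdlib.h>
    #include <string.h>

    #define N (1u << 20)

    /* malloc + memset: the memset unconditionally touches every byte,
     * even when the pages were already zeroed by the OS. */
    double *zeroed_with_memset(void)
    {
        double *p = malloc(N * sizeof *p);
        if (p)
            memset(p, 0, N * sizeof *p);
        return p;
    }

    /* calloc: the allocator knows when the block comes straight from the
     * OS and is therefore already zero, so it can skip the clearing. */
    double *zeroed_with_calloc(void)
    {
        return calloc(N, sizeof(double));
    }

    int main(void)
    {
        double *a = zeroed_with_memset();
        double *b = zeroed_with_calloc();
        free(a);
        free(b);
        return 0;
    }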


On the other hand, this zeroing is sometimes a performance bottleneck. In some numerical applications (such as the out-of-place FFT), you need to allocate a huge chunk of scratch memory, use it to run the algorithm, and then free it.

In these cases, the zeroing is unnecessary and amounts to pure overhead.

The most extreme example I've seen is a 20-second zeroing overhead for a 70-second operation with a 48 GB scratch buffer - roughly 30% overhead. (Granted, the machine did lack memory bandwidth.)

The obvious solution is to simply reuse the memory manually. But that often requires breaking through established interfaces (especially if the allocation happens inside a library routine).
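
A sketch of what "reusing the memory manually" can look like - the routine and names here are hypothetical, not from any real library: instead of allocating and freeing the scratch buffer inside the routine on every call, the caller allocates it once and passes it in.

    #include <stdlib.h>

    /* Hypothetical routine that needs n doubles of scratch space.
     * Instead of malloc/free on every call (and paying the OS zeroing
     * cost each time for huge buffers), the caller passes the scratch
     * buffer in, so it can be allocated once and reused. */
    static void transform_with_scratch(double *data, size_t n, double *scratch)
    {
        for (size_t i = 0; i < n; i++)
            scratch[i] = data[i] * 2.0;      /* stand-in for the real work */
        for (size_t i = 0; i < n; i++)
            data[i] = scratch[i];
    }

    int main(void)
    {
        size_t n = 1u << 20;
        double *data    = malloc(n * sizeof *data);
        double *scratch = malloc(n * sizeof *scratch);   /* allocated once */
        if (!data || !scratch)
            return 1;

        for (size_t i = 0; i < n; i++)
            data[i] = (double)i;

        /* Reuse the same scratch buffer across many calls instead of
         * allocating and freeing it inside the routine each time. */
        for (int pass = 0; pass < 10; pass++)
            transform_with_scratch(data, n, scratch);

        free(scratch);
        free(data);
        return 0;
    }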


The OS will usually clear fresh memory pages it hands to your process so that it can't look at another process's data. This means that the first time you use a variable (or malloc something) without initializing it, it will often be zero - but if you ever reuse that memory (by freeing it and malloc-ing again, for instance), then all bets are off.

This inconsistency is precisely why uninitialized variables are such a hard-to-find bug.
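
For instance, here is a made-up bug of that kind (the struct and field names are invented for illustration): the field that was never initialized happens to be zero on the first, fresh allocation, and only turns into garbage once the block is recycled.

    #include <stdio.h>
    #include <stdlib.h>

    struct counter {
        long hits;      /* the code below forgets to set this field */
        long misses;
    };

    int main(void)
    {
        /* First allocation: the pages are fresh from the OS, so 'hits'
         * happens to be zero and the bug goes unnoticed. */
        struct counter *c1 = malloc(sizeof *c1);
        if (!c1) return 1;
        c1->misses = 0;             /* BUG: c1->hits is never initialized */
        c1->hits++;
        printf("first use:  hits = %ld\n", c1->hits);   /* often 1 */
        free(c1);

        /* Later allocation: the same block is recycled, still holding old
         * data, and the uninitialized field is no longer zero. */
        struct counter *c2 = malloc(sizeof *c2);
        if (!c2) return 1;
        c2->misses = 0;
        c2->hits++;
        printf("second use: hits = %ld\n", c2->hits);   /* anything at all */
        free(c2);
        return 0;
    }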


As for the unwanted performance overhead, avoiding unspecified behaviour is probably more important. Whatever small performance boost you could gain in this case won't compensate for the hard-to-find bugs you will have to deal with if someone slightly modifies the code (breaking previous assumptions) or ports it to another system (where those assumptions might have been invalid in the first place).


Why do you assume that malloc() initializes to zero? It just so happens that the first call to malloc() results in a call to the sbrk or mmap system calls, which allocate pages of memory from the OS. The OS is obliged to provide zero-initialized memory for security reasons (otherwise, data from other processes becomes visible!).

So you might think the OS is wasting time zeroing those pages. But no! In Linux, there is a special system-wide singleton page called the 'zero page', and that page gets mapped copy-on-write, which means that only when you actually write to the page will the OS allocate another page and initialize it.

I hope this answers your question regarding performance: the memory paging model lets memory usage be somewhat lazy, by supporting multiple mappings of the same page and by handling the first write to each page separately.
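
A rough way to observe this laziness (Linux-specific, and the numbers will vary wildly between machines - this is only a sketch): time the allocation itself separately from the first write to each page.

    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static double seconds_since(struct timespec t0)
    {
        struct timespec t1;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void)
    {
        size_t n = (size_t)1 << 30;          /* 1 GiB */
        struct timespec t0;

        /* Allocating is cheap: the kernel only maps the region; it does
         * not zero a gigabyte of RAM at this point. */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        unsigned char *p = calloc(n, 1);
        if (!p) return 1;
        printf("calloc:       %.3f s\n", seconds_since(t0));

        /* Writing is where the real work happens: the first write to each
         * page triggers a page fault and the kernel hands out a real
         * zeroed page. */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < n; i += 4096)
            p[i] = 1;
        printf("first writes: %.3f s\n", seconds_since(t0));

        free(p);
        return 0;
    }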

If you call free(), the glibc allocator will return the region to its free lists, and when malloc() is called again, you might get that same region back, dirty with the previous data. Eventually, free() might return the memory to the OS by making system calls again.

Notice that the glibc man page on malloc() explicitly says that the memory is not cleared, so by the "contract" of the API, you cannot assume that it does get cleared. Here's the original excerpt:

malloc() allocates size bytes and returns a pointer to the allocated memory.
The memory is not cleared. If size is 0, then malloc() returns either NULL, or a unique pointer value that can later be successfully passed to free().

If you would like, you can read more of that documentation if you are worried about performance or other side effects.