Why does reusing arrays increase performance so significantly in C#?


Just because something can be collected doesn't mean that it will be. In fact, if the garbage collector were that aggressive in its collection, your performance would be significantly worse.

Bear in mind that creating an array is not just creating one variable; it's creating N variables (N being the number of elements in the array). Reusing arrays is a good bang-for-your-buck way of increasing performance, though you have to do so carefully.
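
To make that concrete, here is a minimal sketch contrasting a fresh allocation per iteration with a single reused buffer (the buffer size, iteration count, and FillBuffer helper are made up for illustration):

    using System;

    class BufferReuseExample
    {
        // Hypothetical workload: fills a buffer with data on each call.
        static void FillBuffer(byte[] buffer)
        {
            for (int i = 0; i < buffer.Length; i++)
                buffer[i] = (byte)(i & 0xFF);
        }

        static void Main()
        {
            const int iterations = 10_000;
            const int size = 64 * 1024;

            // Allocating a fresh array every iteration: each `new` zeroes
            // 64KB and produces garbage for the collector to deal with.
            for (int i = 0; i < iterations; i++)
            {
                var temp = new byte[size];
                FillBuffer(temp);
            }

            // Reusing one array: a single allocation, no per-iteration
            // zeroing, and no extra garbage. The caveat is that the buffer
            // still holds the previous iteration's data, so the code must
            // overwrite (or explicitly clear) whatever it reads back.
            var reused = new byte[size];
            for (int i = 0; i < iterations; i++)
            {
                FillBuffer(reused);
            }
        }
    }

This is also why reuse has to be done carefully: a reused buffer is shared, mutable state, so stale contents and concurrent access are the usual pitfalls.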

To clarify, what I mean by "creating variables" specifically is allocating the space for them and performing whatever steps the runtime has to in order to make them usable (i.e., initializing the values to zero/null).

Because arrays are reference types, they are stored on the heap, which makes life a little more complicated when it comes to memory allocation. Depending on the size of the array (whether or not it's over 85KB in total storage space), it will be stored either on the ordinary heap or on the Large Object Heap. An array stored on the ordinary heap, as with all other heap objects, can trigger garbage collection and compaction of the heap (which involves shuffling currently in-use memory around in order to maximize contiguous available space). An array stored on the Large Object Heap would not trigger compaction (as the LOH is never compacted), but it could trigger a premature collection by taking up another large contiguous block of memory.
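
As a rough way to observe which heap an array lands on (the exact threshold and the reported generation are runtime implementation details, so treat this as a sketch rather than a guarantee), GC.GetGeneration reports LOH objects as generation 2 even when they have just been allocated:

    using System;

    class LohDemo
    {
        static void Main()
        {
            // Well under the ~85KB threshold: allocated on the ordinary
            // (small object) heap, so a brand-new array starts in generation 0.
            var small = new byte[10_000];
            Console.WriteLine($"small array generation: {GC.GetGeneration(small)}"); // typically 0

            // Well over the threshold: allocated on the Large Object Heap,
            // which the runtime reports as generation 2 even for a fresh object.
            var large = new byte[200_000];
            Console.WriteLine($"large array generation: {GC.GetGeneration(large)}"); // typically 2
        }
    }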


One answer could be the large object heap: objects greater than 85KB are allocated on a separate heap, the Large Object Heap (LOH), which is collected less frequently and is not compacted.

See the section on performance implications:

  • there is an allocation cost (primarily clearing out the allocated memory)
  • there is a collection cost (the LOH is collected together with Gen 2, so churning through large objects forces full Gen 2 collections); a rough measurement sketch of both costs follows this list
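
To see both costs in one place, here is a rough measurement sketch (the iteration count, buffer size, and timing approach are arbitrary; a proper benchmark would use something like BenchmarkDotNet). It times a loop that allocates a large array on every iteration against one that reuses a single array, and counts how many Gen 2 collections each version triggers:

    using System;
    using System.Diagnostics;

    class LohChurnBenchmark
    {
        const int Iterations = 5_000;
        const int Size = 1_000_000; // well over the LOH threshold

        static void Main()
        {
            Measure("new array every iteration", () =>
            {
                for (int i = 0; i < Iterations; i++)
                {
                    var buffer = new byte[Size];
                    buffer[0] = 1; // touch the array so the loop does some work with it
                }
            });

            Measure("single reused array", () =>
            {
                var buffer = new byte[Size];
                for (int i = 0; i < Iterations; i++)
                {
                    buffer[0] = 1;
                }
            });
        }

        static void Measure(string label, Action body)
        {
            // Start each run from a clean slate so the counts are comparable.
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            int gen2Before = GC.CollectionCount(2);
            var sw = Stopwatch.StartNew();
            body();
            sw.Stop();
            int gen2After = GC.CollectionCount(2);

            Console.WriteLine($"{label}: {sw.ElapsedMilliseconds} ms, " +
                              $"{gen2After - gen2Before} Gen 2 collections");
        }
    }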


It's not always easy to allocate large blocks of memory in the presence of fragmentation. I can't say for sure, but my guess is that the runtime has to do some rearranging to find enough contiguous memory for such a big block. As for why allocating subsequent arrays isn't faster, my guess is either that the big block gets fragmented between GC time and the next allocation, or that the original block was never GC'd to start with.