
Is there a way to locate which part of the process used the most of the memory, only looking at a generated core file?


Open the coredump in a hex viewer (as bytes/words/dwords/qwords). Starting from the middle of the file, try to spot any repeating pattern. If you find one, try to determine the starting address and length of the possible data structure. Using the length and contents of this structure, try to guess what it might be. Using the address, try to find a pointer to this structure. Repeat until you reach either the stack or some global variable. For a stack variable, you'll easily know in which function this chain starts. For a global variable, you know at least its type.
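The word-frequency part of this hunt can be automated. Below is a hypothetical helper (not from the original post, and `top_words` is an invented name): it histograms the aligned 64-bit words of a dump file and returns the most frequent non-zero values. Heavily repeated words often betray a repeating structure — vtable pointers, reference counts, size fields, and the like.

```cpp
#include <algorithm>
#include <cstdint>
#include <fstream>
#include <map>
#include <utility>
#include <vector>

// Sketch: count every aligned 64-bit word in the file and return the
// n most frequent non-zero values, most frequent first. A value that
// appears thousands of times is a good candidate for a field of a
// repeating (possibly leaking) structure.
std::vector<std::pair<uint64_t, std::size_t>>
top_words(const char* path, std::size_t n) {
    std::ifstream in(path, std::ios::binary);
    std::map<uint64_t, std::size_t> counts;
    uint64_t w = 0;
    while (in.read(reinterpret_cast<char*>(&w), sizeof w))
        if (w != 0) ++counts[w];
    std::vector<std::pair<uint64_t, std::size_t>> v(counts.begin(),
                                                    counts.end());
    std::sort(v.begin(), v.end(),
              [](const auto& a, const auto& b) { return a.second > b.second; });
    if (v.size() > n) v.resize(n);
    return v;
}
```

Once a suspiciously frequent word is found, searching the dump for its occurrences gives you the candidate structure instances to inspect by hand.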

If you cannot find any pattern in the coredump, chances are that the leaking structure is very big. Just compare what you see in the file with the possible contents of all large structures in the program.

Update

If your coredump has a valid call stack, you can start by inspecting its functions. Search for anything unusual. Check that memory allocations near the top of the call stack do not request too much. Check for possible infinite loops in the call-stack functions.

The words "only smart pointers are used" frighten me. If a significant part of these smart pointers are shared pointers (shared_ptr, intrusive_ptr, ...), then instead of searching for huge containers, it is worth searching for shared-pointer cycles.
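As a minimal illustration (my own example, not from the post) of why shared-pointer cycles leak even when "only smart pointers are used": two nodes that own each other keep both use counts above zero, so neither destructor ever runs.

```cpp
#include <memory>

// Two nodes holding owning shared_ptr links to each other form a cycle
// that reference counting cannot reclaim. The 'alive' counter is just
// instrumentation for the demo.
struct Node {
    std::shared_ptr<Node> next;   // owning link -- this is the bug
    // std::weak_ptr<Node> next; // a weak_ptr here would break the cycle
    static int alive;
    Node()  { ++alive; }
    ~Node() { --alive; }
};
int Node::alive = 0;

void make_cycle() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;
    b->next = a;   // a and b now own each other
}   // both locals go out of scope, but each use_count stays at 1 -> leak
```

After `make_cycle()` returns, both Node objects are unreachable yet never destroyed; a process that builds such cycles in a loop grows without bound while still "only using smart pointers".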

Update 2

Try to determine where your heap ends in the core file (the brk value). Run the coredumped process under gdb and use the pmap command (from another terminal). gdb should also know this value, but I have no idea how to ask it... If most of the process's memory is above brk, you can limit your search to large memory allocations (most likely, std::vector).

To improve the chances of finding leaks in the heap area of an existing coredump, some coding may be used (I didn't do it myself; it's just a theory):

  • Read the coredump file, interpreting each value as a pointer (ignore the code segment, unaligned values, and pointers outside the heap area). Sort the list and calculate the differences between adjacent elements.
  • At this point the whole memory is split into many possible structures. Compute a histogram of the structures' sizes and drop any insignificant values.
  • Calculate the difference between the addresses of pointers and the structures these pointers belong to. For each structure size, compute a histogram of the pointers' displacements, again dropping any insignificant values.
  • Now you have enough information to guess structure types or to construct a directed graph of the structures. Find the source nodes and cycles of this graph. You can even visualize this graph as in "list “cold” memory areas".
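The first two steps above can be sketched as follows (my own rough implementation; the heap bounds are assumed to be known from pmap or the core's segments, and `gap_histogram` is an invented name):

```cpp
#include <algorithm>
#include <cstdint>
#include <map>
#include <vector>

// Treat every 64-bit word that lands inside [heap_lo, heap_hi) and looks
// aligned as a candidate pointer; sort the unique targets and histogram
// the gaps between adjacent ones. Peaks in the histogram are candidate
// structure sizes -- a leak of millions of identical nodes shows up as
// one dominant gap value.
std::map<uint64_t, std::size_t>
gap_histogram(const std::vector<uint64_t>& words,
              uint64_t heap_lo, uint64_t heap_hi) {
    std::vector<uint64_t> ptrs;
    for (uint64_t w : words)
        if (w >= heap_lo && w < heap_hi && w % 8 == 0)  // plausible pointer
            ptrs.push_back(w);
    std::sort(ptrs.begin(), ptrs.end());
    ptrs.erase(std::unique(ptrs.begin(), ptrs.end()), ptrs.end());
    std::map<uint64_t, std::size_t> hist;  // gap size -> occurrences
    for (std::size_t i = 1; i < ptrs.size(); ++i)
        ++hist[ptrs[i] - ptrs[i - 1]];
    return hist;
}
```

The displacement histogram of the third step is analogous: instead of gaps between targets, you bin (pointer address − start of enclosing structure) per structure size.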

The coredump file is in ELF format. Only the start and size of the data segment are needed from its header. To simplify the process, just read it as a linear file, ignoring structure.
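If you do want the segment boundaries rather than a linear scan, the ELF program headers give them directly. A sketch under Linux/ELF64 assumptions (`<elf.h>` is glibc-specific; `load_segments` and `Segment` are invented names):

```cpp
#include <elf.h>     // Linux-only: Elf64_Ehdr / Elf64_Phdr definitions
#include <cstdint>
#include <fstream>
#include <vector>

struct Segment { uint64_t vaddr, memsz, offset; };

// Walk the program header table of a core (or executable) file and
// collect each PT_LOAD segment's virtual address, in-memory size, and
// offset within the file. These tell you where the data/heap bytes of
// the dumped process sit inside the core file.
std::vector<Segment> load_segments(const char* path) {
    std::vector<Segment> segs;
    std::ifstream in(path, std::ios::binary);
    Elf64_Ehdr eh{};
    if (!in.read(reinterpret_cast<char*>(&eh), sizeof eh)) return segs;
    in.seekg(eh.e_phoff);
    for (unsigned i = 0; i < eh.e_phnum; ++i) {
        Elf64_Phdr ph{};
        if (!in.read(reinterpret_cast<char*>(&ph), sizeof ph)) break;
        if (ph.p_type == PT_LOAD)
            segs.push_back({ph.p_vaddr, ph.p_memsz, ph.p_offset});
    }
    return segs;
}
```

`readelf -l core` prints the same table if you'd rather not write code for this step.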


Once I saw it's eating up the memory - with memory usage about 1.5-2GB

Quite often this would be the end result of an error loop going astray. Something like:

    size_t size = 1;
    p = malloc(size);
    while (!enough_space(size)) {
        size *= 2;
        p = realloc(p, size);
    }
    // now use p to do whatever

If enough_space() erroneously returns false under some conditions, your process will quickly grow to consume all available memory.
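A defensive rework of that loop (my own sketch; `enough_space` is the answer's hypothetical predicate, and `grow_until_enough` and the size cap are inventions for illustration) fails loudly instead of eating all memory: cap the growth and check realloc's return value.

```cpp
#include <cstdlib>

// Grow a buffer by doubling, but refuse to exceed max_size and keep the
// old pointer until realloc succeeds, so a buggy predicate produces a
// clean failure (nullptr) rather than unbounded memory consumption.
void* grow_until_enough(std::size_t max_size,
                        bool (*enough_space)(std::size_t)) {
    std::size_t size = 1;
    void* p = std::malloc(size);
    while (p && !enough_space(size)) {
        if (size > max_size / 2) {        // would grow past the cap: bail out
            std::free(p);
            return nullptr;
        }
        size *= 2;
        void* q = std::realloc(p, size);  // don't overwrite p on failure
        if (!q) { std::free(p); return nullptr; }
        p = q;
    }
    return p;
}
```

The `p = realloc(p, size)` idiom in the original is itself a classic bug: on failure realloc returns NULL but leaves the old block allocated, so assigning directly to `p` leaks it.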

only smart pointers are used

Unless you control all code linked into the process, the above statement is false. The error loop could be inside libc, or in any other library that you don't own.

only guessing may help

That's pretty much it. Evgeny's answer has good starting points to help you guess.


Normal memory allocators don't keep track of which part of the process allocated memory - after all, the memory will be freed anyway, and the pointers are held by the client code. If the memory has truly leaked (i.e. there are no pointers to it left), you have pretty much lost and are looking at a huge block of unstructured memory.