Performance difference between IPC shared memory and threads memory


1) shmat() maps the local process virtual memory to the shared segment. This translation has to be performed for each shared memory address and can represent a significant cost, relative to the number of shm accesses. In a multi-threaded application there is no extra translation required: all VM addresses are converted to physical addresses, as in a regular process that does not access shared memory.

There is no overhead compared to regular memory access, aside from the initial cost of setting up the shared pages - populating the page table in the process that calls shmat() - which in most flavours of Linux is one page-table entry (4 or 8 bytes) per 4KB of shared memory.

For all practical purposes, the cost is the same whether the pages are mapped as shared memory or allocated privately within the process.
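To make that concrete, here is a minimal sketch (the key and segment size are arbitrary choices for illustration) of creating and attaching a System V shared memory segment; once shmat() has returned, the pointer is dereferenced like any other memory:

    /* Minimal sketch: create a System V shared memory segment, attach it,
     * and use it like ordinary memory. Key and size are arbitrary. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        /* Create (or find) a 4KB segment identified by an arbitrary key. */
        int shmid = shmget(0x1234, 4096, IPC_CREAT | 0600);
        if (shmid == -1) { perror("shmget"); return 1; }

        /* shmat() populates this process's page table for the segment;
         * after that, the pointer behaves like any other memory. */
        char *p = shmat(shmid, NULL, 0);
        if (p == (char *)-1) { perror("shmat"); return 1; }

        strcpy(p, "hello from shared memory");
        printf("%s\n", p);

        shmdt(p);                       /* detach from this process     */
        shmctl(shmid, IPC_RMID, NULL);  /* mark the segment for removal */
        return 0;
    }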

2) The shared memory segment must be maintained somehow by the kernel. I do not know what that 'somehow' means in terms of performance, but, for example, when all processes attached to the shm segment are taken down, the segment is still up and can eventually be re-accessed by newly started processes. There must be at least some degree of overhead related to the things the kernel needs to check during the lifetime of the shm segment.

Whether shared or not, each page of memory has a "struct page" attached to it, with some data about the page. One of the items is a reference count. When a page is given out to a process [whether it is through "shmat" or some other mechanism], the reference count is incremented. When it is freed through some means, the reference count is decremented. If the decremented count is zero, the page is actually freed - otherwise "nothing more happens to it".

The overhead is basically zero compared to any other allocated memory. The same mechanism is used for pages for other purposes anyway - say, for example, a page is also in use by the kernel and your process dies: the kernel needs to know not to free that page until it has been released by the kernel as well as by the user process.
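As a rough illustration of that reference-counting idea (this is not the kernel's actual struct page code, just a simplified sketch of the get/put pattern it applies per page):

    /* Simplified sketch of per-page reference counting: the page is only
     * really freed when the last user drops its reference. */
    #include <stdlib.h>

    struct page_ref {
        void *data;      /* the page contents                     */
        int   refcount;  /* how many mappings/users hold the page */
    };

    static void get_page(struct page_ref *pg)
    {
        pg->refcount++;              /* another process maps the page */
    }

    static void put_page(struct page_ref *pg)
    {
        if (--pg->refcount == 0) {   /* last user has released it     */
            free(pg->data);          /* only now is it really freed   */
            free(pg);
        }
    }

    int main(void)
    {
        struct page_ref *pg = malloc(sizeof(*pg));
        pg->data = malloc(4096);
        pg->refcount = 1;            /* first mapping, e.g. via shmat()  */

        get_page(pg);                /* a second process attaches        */
        put_page(pg);                /* first process exits              */
        put_page(pg);                /* second process exits: page freed */
        return 0;
    }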

The same thing happens when a "fork" is created. When a process is forked, the entire page-table of the parent process is essentially copied into the child process, and all pages made read-only. Whenever a write happens, a fault is taken by the kernel, which leads to that page being copied - so there are now two copies of that page, and the process doing the writing can modify it's page, without affecting the other process. Once the child (or parent) process dies, of course all pages still owned by BOTH processes [such as the code-space that never gets written, and probably a bunch of common data that never got touched, etc] obviously can't be freed until BOTH processes are "dead". So again, the reference counted pages come in useful here, since we only count down the ref-count on each page, and when the ref-count is zero - that is, when all processes using that page has freed it - the page is actually returned back as a "useful page".

Exactly the same thing happens with shared libraries. If only one process uses a shared library, its pages will be freed when that process ends. But if two, three or 100 processes use the same shared library, the code obviously has to stay in memory until it is no longer needed by any of them.

So, basically, all pages in the system are already reference counted by the kernel. There is very little overhead.


If one considers what is happening at the microelectronics level when two threads or processes are accessing the same memory, there are some interesting consequences.

The point of interest is how the architecture of the CPU allows multiple cores (and thus threads and processes) to access the same memory. This is done through the L1 caches, then the L2, L3 and finally DRAM. There's an awful lot of coordination that has to go on between the controllers of all of that.

For a machine with 2 CPUs or more, that coordination takes place over a serial bus. If one compares the bus traffic generated when two cores access the same memory with the traffic generated when data is copied to another piece of memory, it's about the same amount of traffic.

So depending on where in a machine the two threads are running, there can be little speed penalty to copying the data vs sharing it.

Copying might be 1) a memcpy, 2) a pipe write, 3) an internal DMA transfer (Intel chips can do this these days).

An internal DMA is interesting because it requires zero CPU time (a naive memcpy is just a loop, which actually takes time). So if one can copy data instead of sharing it, and does the copy with an internal DMA, it can be just as fast as sharing the data.

The penalty is more RAM, but the payback is that things like Actor-model programming become available. This is a way to remove all the complexity of guarding shared memory with semaphores from your program.
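For example, a rough sketch of option 2) above - copying the data to another process through a pipe rather than sharing it, actor style, so each side owns its own copy and no semaphores are needed (the message and buffer size are arbitrary):

    /* Sketch: the kernel copies the bytes through the pipe; parent and
     * child never touch the same memory, so nothing needs to be guarded. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) == -1) { perror("pipe"); return 1; }

        if (fork() == 0) {                   /* child: the "receiving" actor */
            char buf[64];
            close(fds[1]);
            ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("child received: %s\n", buf);
            }
            return 0;
        }

        /* parent: the "sending" actor */
        const char *msg = "message copied, not shared";
        close(fds[0]);
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        wait(NULL);
        return 0;
    }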


Setting up the shared memory requires some extra work by the kernel, so attaching/detaching a shared memory region from your process may be slower than a regular memory allocation (or it may not be... I've never benchmarked that). But once it's attached to your process's virtual memory map, shared memory is no different from any other memory for accesses, except in the case where you have multiple processors contending for the same cache-line-sized chunks. So, in general, shared memory should be just as fast as any other memory for most accesses, but, depending on what you put there and how many different threads/processes access it, you can get some slowdown for specific usage patterns.
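As a rough demonstration of that cache-line contention (assuming a 64-byte cache line and an arbitrary iteration count; compile with -pthread), the sketch below has two threads hammering counters that sit on the same cache line; uncommenting the padding puts them on separate lines, which typically makes the run noticeably faster on a multi-core machine when timed with time(1):

    /* Sketch: two threads updating adjacent counters force the cache line
     * to bounce between cores; padding the counters apart avoids that. */
    #include <pthread.h>
    #include <stdio.h>

    #define ITERS 100000000L

    struct {
        volatile long a;
        /* char pad[64];   uncomment to put the counters on separate lines */
        volatile long b;
    } counters;

    static void *bump_a(void *arg)
    {
        (void)arg;
        for (long i = 0; i < ITERS; i++)
            counters.a++;
        return NULL;
    }

    static void *bump_b(void *arg)
    {
        (void)arg;
        for (long i = 0; i < ITERS; i++)
            counters.b++;
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, bump_a, NULL);
        pthread_create(&t2, NULL, bump_b, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("a=%ld b=%ld\n", counters.a, counters.b);
        return 0;
    }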