Is parallel programming == multithread programming?

Multithreaded programming is parallel, but parallel programming is not necessarily multithreaded.

Unless the multithreading occurs on a single core, in which case it is only concurrent.


Not necessarily. You can distribute jobs between multiple processes and even multiple machines - I wouldn't class that as "multi-threaded" programming as each process may only use a single thread, but it's certainly parallel programming. Admittedly you could then argue that with multiple processes there are multiple threads within the system as a whole...

Ultimately, definitions like this are only useful within a context. In your particular case, what difference is it going to make? Or is this just out of interest?


No. Multithreaded programming means that you have a single process, and this process spawns a bunch of threads. All the threads run at the same time, but they all live in the same process space: they can access the same memory, share the same open file descriptors, and so on.

Parallel programming is a more general term. In MPI, you perform parallel programming by running the same program multiple times, with the difference that every process gets a different "identifier" (its rank), so you can differentiate the processes if you want, but it is not required. Also, these processes are independent of each other, and they have to communicate via pipes or network/unix sockets. MPI libraries provide specific functions to move data to and from the nodes, in synchronous or asynchronous style.

In contrast, OpenMP achieves parallelization via multithreading and shared memory. You give special directives to the compiler, and it automagically performs parallel execution for you.

The advantage of OpenMP is that it is very transparent. Have a loop to parallelize? Just add a couple of directives and the compiler chunks it into pieces, assigning each piece of the loop to a different processor. Unfortunately, you need a shared-memory architecture for this. Clusters with a node-based architecture cannot use OpenMP across nodes. MPI lets you work on a node-based architecture, but you pay the price of a more complex and less transparent usage.