Misunderstanding the difference between single-threading and multi-threading programming



It depends.

How many CPUs do you have? How much I/O is involved in your tasks?

  1. If you have only 1 CPU and the tasks involve no blocking I/O, the single-threaded version will finish in equal or less time than the multi-threaded one, because switching between threads adds overhead.

  2. If you have 1 CPU, but the tasks involve a lot of blocking I/O, you might see a speedup from threading, assuming other work can proceed while I/O is in progress.

  3. If you have multiple CPUs, you should see a speedup with the multi-threaded implementation over the single-threaded one, since more than one thread can execute in parallel. Unless, of course, the tasks are I/O-dominated, in which case the limiting factor is your device speed, not CPU power.
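The I/O case (point 2) is easy to sketch in Python. Here `time.sleep` is a stand-in for a blocking I/O call such as a network read; the exact durations are arbitrary:

```python
import threading
import time

def blocking_io_task():
    # time.sleep stands in for a blocking I/O call such as a network read.
    time.sleep(0.2)

def run_sequential(n):
    start = time.perf_counter()
    for _ in range(n):
        blocking_io_task()
    return time.perf_counter() - start

def run_threaded(n):
    start = time.perf_counter()
    threads = [threading.Thread(target=blocking_io_task) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

sequential = run_sequential(5)  # roughly 5 * 0.2 = 1.0 s
threaded = run_threaded(5)      # roughly 0.2 s, since the waits overlap
print(f"sequential: {sequential:.2f}s  threaded: {threaded:.2f}s")
```

Even on a single core, the threaded version wins here because all five waits overlap instead of running back to back.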


As I understand, only ONE thread will be executed at a time

That would be the case if the CPU only had one core. Modern CPUs have multiple cores, and can run multiple threads in parallel.

The program running three threads would then run almost three times as fast. Even if the tasks are independent, some resources in the computer still have to be shared between the threads, such as memory access.
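One language-specific caveat: in CPython the global interpreter lock keeps threads from executing bytecode simultaneously, so CPU-bound parallelism across cores is usually obtained with processes instead. A minimal sketch (the task and its size are arbitrary illustrations):

```python
from concurrent.futures import ProcessPoolExecutor

def cpu_bound(n):
    # A purely CPU-bound task: sum of squares below n.
    return sum(i * i for i in range(n))

def run_parallel():
    # Three independent CPU-bound tasks; with three free cores,
    # the worker processes can execute at the same time.
    with ProcessPoolExecutor(max_workers=3) as pool:
        return list(pool.map(cpu_bound, [50_000] * 3))

if __name__ == "__main__":
    print(run_parallel())
```

In languages without such a lock (C, Java, Go, ...), plain threads on a multi-core CPU give the same kind of parallel speedup directly.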


Assumption set:

  • Single core with no hyperthreading
  • Tasks are CPU-bound
  • Each task takes 3 quanta of time
  • Each scheduler allocation is limited to 1 quantum of time
  • Non-preemptive FIFO scheduler
  • All threads hit the scheduler at the same time
  • All context switches take the same amount of time

Processes are delineated as follows:

  • Test 1: Single Process, single thread (contains all 9 tasks)
  • Test 2: Single Process, three threads (containing 3 tasks each)
  • Test 3: Three Processes, each single threaded (containing 3 tasks each)
  • Test 4: Three Processes, each with three threads (containing one task each)

With the above assumptions, they all finish at the same time. This is because an identical amount of CPU time is scheduled, the context switches are identical, there is no interrupt handling, and nothing is waiting for I/O.
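The back-of-the-envelope arithmetic can be checked with a tiny model. The one-unit switch cost is an assumption; any constant gives the same equality, because every configuration needs the same 27 one-quantum allocations:

```python
TOTAL_TASKS = 9       # every configuration runs the same 9 tasks
QUANTA_PER_TASK = 3
SWITCH_COST = 1       # assumed constant cost per context switch

def total_time(num_schedulable_units):
    # Every configuration needs 9 * 3 = 27 one-quantum allocations,
    # and each allocation ends in exactly one context switch, so the
    # grouping into threads/processes drops out of the arithmetic.
    allocations = TOTAL_TASKS * QUANTA_PER_TASK
    return allocations + allocations * SWITCH_COST

# Tests 1-4: 1 thread; 3 threads; 3 single-threaded processes;
# 9 threads spread across 3 processes.
times = [total_time(units) for units in (1, 3, 3, 9)]
print(times)
```

All four totals come out equal, which is exactly the point: on one core with identical switch costs and no I/O, the grouping of tasks into threads or processes changes nothing.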

For more depth into the nature of this, please see this book.