Difference b/w hyper threading and multithreading?


Hyperthreading is a hardware feature and an Intel brand name; most other vendors and the literature call it simultaneous multithreading (SMT). To the programmer, two hyperthreads look like two CPU cores. On the hardware side, multiple hyperthreads share a single core (in Intel's case, two hyperthreads per core).
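
For example, on Linux you can ask how many processors the OS sees; with hyperthreading on, each hyperthread counts as one (a rough Linux/glibc-only sketch):

    /* Sketch: the OS reports each hyperthread as a logical CPU.
     * On a 4-core Intel chip with two hyperthreads per core this
     * typically prints 8. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long lps = sysconf(_SC_NPROCESSORS_ONLN);  /* online logical processors */
        printf("logical processors visible to software: %ld\n", lps);
        return 0;
    }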

Multithreading (or multithreaded programming) generally refers to using more than one thread context (instruction pointer, registers, stack, etc.) in a single program, usually within the same process and virtual address space.
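
For example, a minimal pthread sketch (the worker function is just illustrative): both threads share the process's address space, but each gets its own instruction pointer, registers and stack.

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread runs this function with its own stack and registers,
     * while both share the process's address space. */
    static void *worker(void *arg)
    {
        printf("hello from thread %ld\n", (long)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1);
        pthread_create(&t2, NULL, worker, (void *)2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }

(Compile with -pthread.)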


A physical processor (PP) is the hardware implementation of a single processing unit; from this perspective, a "core" is the basic PP. Terms such as multi-processor and multi-core are sometimes used to differentiate how processing units are organized on chips and which other physical resources (L2 caches, buses, etc.) they share, but for this answer we are interested in the most basic processing unit.

When a PP supports hyperthreading (let's just use this term for now), the PP is split into two or more logical processors (LP). This is done by beefing up the execution pipeline and duplicating PP resources such as the register set, program counter, interrupt-handling mechanism, and others. This allows the PP to hold and execute several "execution contexts" at the "same time". These execution contexts are sometimes called hardware threads (HT). If the PP does not support hyperthreading (or it is turned off), the LP is the same as the PP.
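
On Linux you can see which LPs were carved out of the same PP by reading the CPU topology exposed in sysfs (a rough sketch; the exact numbering of sibling LPs varies by machine):

    /* Sketch: print which logical processors share a physical core
     * with CPU 0.  On a hyperthreaded Intel part this usually prints
     * something like "0,4" or "0-1", depending on LP numbering. */
    #include <stdio.h>

    int main(void)
    {
        char buf[128];
        FILE *f = fopen("/sys/devices/system/cpu/cpu0/topology/thread_siblings_list", "r");
        if (f && fgets(buf, sizeof buf, f))
            printf("LPs sharing cpu0's physical core: %s", buf);
        if (f)
            fclose(f);
        return 0;
    }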

A software thread (ST) is an execution context created by software, for instance with pthread_create() or clone(). These entities are scheduled by the operating system onto processors. A multithreaded program is one in which the programmer explicitly creates STs. A multithreaded program can run on a processor that does not support hyperthreading; in that case, context switching among STs is expensive, because it requires intervention by the scheduler and the use of memory to store and load execution contexts.

When hyperthreading is on, the OS schedules several STs onto one PP, usually one ST per LP. The OS sees LPs as if they were real PPs, so each ST runs on a different LP. Once STs have been scheduled, we can say (loosely speaking) that they become hardware threads (HT), in the sense that the PP takes control. When one HT stalls, for instance on a cache miss or a pipeline flush, the PP executes another HT. This "context switch" costs almost nothing, since the HT's context is already in the PP, and the OS is not involved. Most relevantly, these stalls and the corresponding context switches can happen at many stages of the pipeline. This is different from scheduler-based context switching, which happens on interrupt-driven events such as quantum expiration, I/O interrupts, aborts, system calls, etc.
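
If you want to observe this mapping from user space, you can pin an ST to a particular LP and ask where it runs. A rough Linux-only sketch using the GNU extension pthread_setaffinity_np() (LP 0 is chosen arbitrarily):

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    /* Pin the calling ST to logical processor 0 and report where it runs.
     * The OS treats each LP as a schedulable processor, so the ST stays there. */
    static void *pinned(void *arg)
    {
        (void)arg;
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                      /* LP 0, chosen arbitrarily */
        pthread_setaffinity_np(pthread_self(), sizeof set, &set);
        printf("running on LP %d\n", sched_getcpu());
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, pinned, NULL);
        pthread_join(t, NULL);
        return 0;
    }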

As Nathan says in the previous answer, hyperthreading is a very specific term; a more general, vendor-agnostic term is "simultaneous multithreading" (SMT).

Finally, I strongly recommend reading:

1) Operating system support for simultaneous multithreaded processors. James R. Bulpin.

2) Microarchitecture choices and tradeoffs for maximizing processing efficiency. Deborah T. Marr (Ph.D. dissertation).