Why threads starve even on preemptive multitasking OS (Windows 7)

multithreading


Round-robin scheduling is an obvious strategy for a kernel. That's not, however, how the Windows scheduler works. It used to be, back in the Windows 9x days, when the scheduler was quite capable of giving the various VMs equal time. But not in the NT branch, started by Dave Cutler's group: there, scheduling is purely based on priority.

Whatever thread has the highest priority gets the CPU. There's another chunk of code in Windows that tinkers with a thread's priority, modifying it from the default it was given when the thread was created. That code is aware of things like a thread owning the window that's in the foreground, or a thread waiting on a synchronization object that just got signaled. Or the more bizarre scheduling heuristic that tries to work around the priority inversion problem: randomly giving a thread a chance to run even though it wasn't its turn.
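That dynamic boosting can be switched off per thread with the Win32 call SetThreadPriorityBoost. A minimal sketch in a hypothetical Delphi thread class (TWorkerThread is a name of my choosing, not from the question):

```
uses
  Winapi.Windows, System.Classes;

type
  TWorkerThread = class(TThread)
  protected
    procedure Execute; override;
  end;

procedure TWorkerThread.Execute;
begin
  // True disables the dynamic priority boost for this thread, so the
  // scheduler no longer bumps its priority when, say, a wait it was
  // blocked on gets signaled.
  SetThreadPriorityBoost(Self.Handle, True);
  // ... actual work goes here ...
end;
```

As the answer below notes, disabling the boost doesn't appear to change the uneven distribution.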

Focus on writing sane code first. Starting a hundred threads isn't a very sane thing to do. You are trying to consume resources that the machine doesn't actually have available; nobody has a machine with a hundred cores. Yet. Powers of two: get a machine with 128 cores first.


I have reproduced and confirm your results. Additionally, disabling thread priority boost doesn't change the distribution. GetThreadTimes reports that threads with higher Values accumulated more UserTime, and vice versa, while KernelTime seems to have no correlation with Values.

Thread 97: 1081,5928 Ke:0 Us:25116161
Thread 98: 1153,8029 Ke:0 Us:26988173
Thread 99: 704,6996  Ke:0 Us:16848108

Clearly, some threads really get to run more often than others.
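For reference, per-thread times like the ones above can be collected with GetThreadTimes, which returns FILETIME values in 100-nanosecond units. A rough sketch (ReportThreadTimes and FileTimeToInt64 are names of my choosing):

```
uses
  Winapi.Windows, System.SysUtils;

// A FILETIME is a 64-bit count of 100 ns intervals split into two DWORDs.
function FileTimeToInt64(const FT: TFileTime): Int64;
begin
  Result := (Int64(FT.dwHighDateTime) shl 32) or FT.dwLowDateTime;
end;

procedure ReportThreadTimes(ThreadHandle: THandle);
var
  ftCreation, ftExit, ftKernel, ftUser: TFileTime;
begin
  if GetThreadTimes(ThreadHandle, ftCreation, ftExit, ftKernel, ftUser) then
    Writeln(Format('Ke:%d Us:%d',
      [FileTimeToInt64(ftKernel), FileTimeToInt64(ftUser)]));
end;
```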

I haven't graphed the results, but I suppose what we're seeing is a normal distribution, which means the results depend on a number of factors, some of which are random.

I tried disabling hyper-threading (this somewhat smoothed the results), then assigning each thread to a single physical processor (using SetThreadAffinityMask). In the second case, the Values were much closer to each other.

// Pin each thread to one of the 4 physical cores, round-robin by thread index
SetThreadAffinityMask(Self.Handle, 1 shl (FIndex mod 4));

I can sort of understand how running on a hyper-threaded system can make some threads "unlucky": they are scheduled to compete with other threads on the same physical processor, and because of "soft affinity" to this virtual core they get to run on it again and again, thus scoring lower than others.

But as to why binding each thread to a fixed core helps on a non-hyperthreaded system, I don't know.

There are probably other random factors involved, such as activity on the cores from other processes. A thread can get "unlucky" if some other process's thread associated with the same core suddenly wakes up and starts doing some (relatively) heavy work.

All of this is guessing though.


Windows 7 is designed for user land. When your first thread wants to do work, the OS gives it a time slice; you, the user, just started it, after all. By the time the 50th thread in succession (from the same process!) wants to do work, higher-priority threads (background processes controlled by Windows 7 itself) step in. This happens in a fashion that makes some threads luckier than others.

You and I don't really want a personal OS that hands out CPU time based on the whims of user-land processes. I would be curious to see how Windows Server 2008 R2 handles this. You might also play around with the Advanced tab setting: "Choose how to allocate processor resources".