
C++11: std::thread pooled?


Generally, std::thread should be a minimal wrapper around the underlying system primitive. For example, on a pthread-based platform, you can verify with the following program that no matter how many threads you create, they all get unique pthread_t ids (which implies they are created on the fly rather than borrowed from a thread pool):

#include <assert.h>
#include <mutex>
#include <set>
#include <thread>
#include <vector>

#include <pthread.h>

int main() {
  std::vector<std::thread> workers;
  std::set<long long> thread_ids;
  std::mutex m;
  const int n = 1024;
  for (int i = 0; i < n; ++i) {
    workers.push_back(std::thread([&] {
      std::lock_guard<std::mutex> lock(m);
      thread_ids.insert(pthread_self());
    }));
  }
  for (auto& worker : workers) {
    worker.join();
  }
  assert(thread_ids.size() == n);
  return 0;
}

So thread pools still make perfect sense. That said, I've seen a video where C++ committee members discussed thread pools with regard to std::async (IIRC), but I can't find it right now.


A std::thread is a thread of execution. Period. Where it comes from, how it gets there, whether there is some pool of "actual" threads, etc, is all irrelevant to the standard. As long as it acts like a thread, it could be a std::thread.

Now, odds are good that a std::thread maps to a real OS thread, not something pulled from a thread pool. But C++11 does, in theory, allow a std::thread to be implemented as something pulled from a pool.


std::thread is supposed to come extremely cheaply in terms of abstraction cost; it is low-level stuff. As I understand it, standard library implementations will most likely wrap the underlying OS mechanisms as closely as possible, so you can assume the overhead of creating a std::thread is similar or equivalent to that of creating an OS thread directly.

I don't know the details of any specific implementation, but my secondhand understanding from reading C++ Concurrency in Action is that the standard encourages implementers to use the most efficient method practical. The author certainly seemed to think the cost would be more or less negligible compared to rolling your own.

The standard thread library is conceptually similar to Boost.Thread, so I imagine drawing conclusions from the Boost implementation wouldn't be too far-fetched.

Basically, I don't think there's a direct answer to your question because it simply isn't specified. While it sounds to me like we're more likely to see very thin wrapper implementations, I don't think library writers are prohibited from using thread pools if that offers efficiency benefits.