
Managing threads while practicing modern c++17's best practices


Threads and other std primitives are like raw pointers: you should build up a concurrency model that doesn't expose anything that low-level. The std threading primitives give you enough tools to do so.

Learn about some of the new stuff headed down the pipe -- executors, streams, coroutines, range-v3, monadic futures, etc. -- and model your library around that.

Trying to write well-behaved code based on raw use of mutexes, threads going to sleep and waking up, blocking, atomics, and shared data is a trap.

As an example:

struct thread_pool;

template<class T>
struct my_future : std::future<T> {
    template<class F>
    auto then( F&& f ) &&
    -> std::future< std::result_of_t<F(T&&)> >;
    thread_pool* executor = 0;
};

template<>
struct my_future<void> : std::future<void> {
    template<class F>
    auto then( F&& f ) &&
    -> std::future< std::result_of_t<F()> >;
    thread_pool* executor = 0;
};

struct do_nothing { void operator()() const {} };

struct thread_pool {
    template<class F = do_nothing>
    my_future<std::result_of_t<F()>> do_task( F&& f = {} );
};

Here we talk about piping data from task to task to task, ending in an augmented future&lt;T&gt;. Augment it with the ability to split (via shared_future) and merge (a future&lt;X&gt; joined with a future&lt;Y&gt; to produce a future&lt;std::tuple&lt;X, Y&gt;&gt;).

Maybe go a step further and build a stream based system:

template<class In>
using sink = std::function<void(In)>;

template<class Out>
using source = std::function<void(sink<Out>)>;  // a source feeds values into a sink

template<class In, class Out>
using pipe = std::function<sink<In>(sink<Out>)>;  // given a downstream sink, produce an upstream sink

and then support turning a source into an async source.

And instead of building up some huge castle of abstraction and hoping it is complete, read about these things, and when you run into a problem that one of them solves, implement just enough to solve your problem. You aren't going to write the be-all end-all threading system from scratch on your first try, so don't try. Write something useful, and then write a better one next time.


After taking user Yakk's advice and doing some more research into the behavior of mutex, lock_guard, thread, etc., I found a video on https://www.youtube.com demonstrating a ThreadPool class.

Here is a working piece of code:

ThreadPool.h

#ifndef THREAD_POOL_H
#define THREAD_POOL_H

#include <vector>
#include <queue>
#include <functional>
#include <condition_variable>
#include <thread>
#include <future>

namespace linx {

class ThreadPool final {
public:
    using Task = std::function<void()>;

private:
    std::vector<std::thread> _threads;
    std::queue<Task>         _tasks;
    std::condition_variable  _event;
    std::mutex               _eventMutex;
    bool                     _stopping = false;

public:
    explicit ThreadPool( std::size_t numThreads ) {
        start( numThreads );
    }

    ~ThreadPool() {
        stop();
    }

    ThreadPool( const ThreadPool& c ) = delete;
    ThreadPool& operator=( const ThreadPool& c ) = delete;

    template<class T>
    auto enqueue( T task ) -> std::future<decltype(task())> {
        auto wrapper = std::make_shared<std::packaged_task<decltype(task())()>>( std::move( task ) );
        {
            std::unique_lock<std::mutex> lock( _eventMutex );
            _tasks.emplace( [=] {
                (*wrapper)();
            } );
        }
        _event.notify_one();
        return wrapper->get_future();
    }

private:
    void start( std::size_t numThreads ) {
        for( auto i = 0u; i < numThreads; ++i ) {
            _threads.emplace_back( [=] {
                while( true ) {
                    Task task;
                    {
                        std::unique_lock<std::mutex> lock{ _eventMutex };
                        _event.wait( lock, [=] { return _stopping || !_tasks.empty(); } );
                        if( _stopping && _tasks.empty() )
                            break;
                        task = std::move( _tasks.front() );
                        _tasks.pop();
                    }
                    task();
                }
            } );
        }
    }

    void stop() noexcept {
        {
            std::unique_lock<std::mutex> lock{ _eventMutex };
            _stopping = true;
        }
        _event.notify_all();
        for( auto& thread : _threads )
            thread.join();
    }
};

} // namespace linx

#endif // !THREAD_POOL_H

main.cpp

#include <iostream>
#include <sstream>
#include "ThreadPool.h"

int main() {
    {
        ThreadPool pool{ 4 }; // 4 threads

        auto f1 = pool.enqueue( [] {
            return 2;
        } );
        auto f2 = pool.enqueue( [] {
            return 4;
        } );

        auto a = f1.get();
        auto b = f2.get();

        auto f3 = pool.enqueue( [&] {
            return a + b;
        } );
        auto f4 = pool.enqueue( [&] {
            return a * b;
        } );

        std::cout << "f1 = " << a << '\n'
                  << "f2 = " << b << '\n'
                  << "f3 = " << f3.get() << '\n'
                  << "f4 = " << f4.get() << '\n';
    }

    std::cout << "\nPress any key and enter to quit.\n";
    std::cin.get();
    return 0;
}

I think this is something that will serve my purposes. The example here is deliberately basic, but in my own IDE, using a couple of my other classes, I wrapped my Execution Timer around a thread pool object running four lambdas like the ones above. The first lambda used my other class to generate 1 million random integers in [1, 1000] using an mt19937 seeded with a random_device; the second did the same except it generated floating-point numbers in (0, 1.0) using an mt19937 seeded with chrono::high_resolution_clock. The third and fourth lambdas, following the pattern above, took the results, saved them to an ostringstream, and returned that stream, and then I printed the results. On my PC (Intel Quad Core Extreme 3.0GHz, 8GB RAM, Windows 7 64-bit Home Premium), it took about 1720 milliseconds to generate a million random values for each case using 4 threads, and all four cores were utilized.