Thread pool with coroutines: Threads (1/3)

Introduction

In this little series of articles, we are going to see how to implement a thread pool usable with coroutines. The series will contain these articles:

  1. Creating a Thread
  2. Creating the pool
  3. Using future with the pool

The final objective is to be able to write something like this:

ThreadPool threadPool;

co_await threadPool;
// Here we run on a threadPool thread

auto future = schedule_on(threadPool, function, args...);

co_await future;
// Here we run on the asynchronous function thread

Choice of implementation for the thread pool

We will use the well-known work-stealing algorithm inside our thread pool: each thread has its own task queue, and threads can steal tasks from each other. This implies shared state between threads, hence we must be careful about data races.
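
The actual pool comes in the next article; as a rough sketch of the idea (Worker, popLocal and trySteal below are hypothetical names, not the final interface), a thread that finds its own queue empty tries its siblings' queues:

std::optional<Task> getNextTask(std::size_t self, std::vector<Worker> &workers) {
  // First, look at our own queue.
  if (auto task = workers[self].popLocal())
    return task;

  // Otherwise, try to steal from another worker's queue.
  for (std::size_t i = 0; i < workers.size(); ++i) {
    if (i != self)
      if (auto task = workers[i].trySteal())
        return task;
  }
  return std::nullopt;
}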

To deal with data races, I decided to make some homemade helpers inspired by the Rust programming language.

Mutex

Here is the first helper I made. A mutex protects a resource from data races, so we can make a template class that protects the template argument. We use a callback to operate on the protected variable.

#include <functional>
#include <mutex>
#include <shared_mutex>
#include <utility>

#define FWD(x) std::forward<decltype(x)>(x)

template <typename T> class Mutex {
public:
  template <typename F> decltype(auto) with_lock(F &&f) {
    std::lock_guard lock{m_mutex};
    return std::invoke(FWD(f), m_value);
  }
  template <typename F> decltype(auto) with_lock(F &&f) const {
    std::shared_lock lock{m_mutex};
    return std::invoke(FWD(f), m_value);
  }

protected:
  T m_value;
  mutable std::shared_mutex m_mutex;
};

Why do I use a shared mutex? Because multiple concurrent readers are not an issue: only writers need exclusive access.
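
For example (a small usage sketch, not part of the pool), a Mutex<std::vector<int>> can only be touched through with_lock, so it is hard to forget the lock:

Mutex<std::vector<int>> values;

// Mutation goes through the non-const overload, which takes the exclusive lock.
values.with_lock([](std::vector<int> &v) { v.push_back(42); });

// Read-only access through a const reference takes the shared lock.
const auto &reader = values;
auto size = reader.with_lock([](const std::vector<int> &v) { return v.size(); });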

Condition variable

What are the events that can occur within a thread?

  1. The thread can be requested to stop
  2. The thread can have a new task to perform

To avoid using CPU resources when the thread is not fed (i.e., there is no task to run), I decided to use a condition variable. The idea is simple: the waiting thread blocks inside the wait function until the predicate is satisfied, and another thread notifies the condition variable to wake it up.

Since a condition variable is generally used with a Mutex, I decided to join them together through inheritance. Hence, a condition variable behaves like a mutex but can also be waited on.

template <typename T> class ConditionVariable : public Mutex<T> {
public:
  void notifyOne() { m_cond.notify_one(); }
  void notifyAll() { m_cond.notify_all(); }

  template <typename F> void wait(F f) {
    auto lock = std::unique_lock{this->m_mutex};
    m_cond.wait(lock, [&, this] { return std::invoke(f, this->m_value); });
  }

  template <typename F> void wait(F f, std::stop_token st) {
    auto lock = std::unique_lock{this->m_mutex};
    m_cond.wait(lock, st, [&, this] { return std::invoke(f, this->m_value); });
  }

private:
  std::condition_variable_any m_cond;
};

You may wonder what std::stop_token is. It is simply a C++20 feature provided by std::jthread that saves the user from waiting on an atomic boolean. Put simply, when a std::jthread is destroyed, it does two things:

  1. It calls request_stop on its std::stop_source, which will notify the std::stop_token
  2. It joins the thread
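
Combining the two, here is a small standalone sketch (not part of the pool) of how a ConditionVariable wait is interrupted by a stop request:

ConditionVariable<bool> ready;

std::jthread worker{[&](std::stop_token st) {
  // Returns either when `ready` becomes true or when a stop is requested.
  ready.wait([](bool r) { return r; }, st);
}};

// When `worker` goes out of scope, its destructor calls request_stop() and
// join(): the wait above returns even if `ready` never became true.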

An Awaiter

With coroutines, the task will not be a function, but a coroutine_handle which will be resumed. Hence, we need to have an object that manages this handle.

struct Awaiter {
public:
  Awaiter() {}

  ~Awaiter() {
    // Destroy the coroutine frame only if the handle was never resumed.
    if (m_handle)
      m_handle.destroy();
  }

  template <typename... Args>
  Awaiter(std::coroutine_handle<Args...> handle) : m_handle{handle} {}

  Awaiter(const Awaiter &) = delete;
  Awaiter(Awaiter &&a) : m_handle{a.m_handle} { a.m_handle = nullptr; }

  void resume() {
    // Resuming consumes the handle: the destructor must not destroy it afterwards.
    m_handle();
    m_handle = nullptr;
  }

private:
  std::coroutine_handle<> m_handle = nullptr;
};

One will observe that we destroy the coroutine only if it was not resumed. It is a move-only type.

A thread safe queue

Now that we have our Awaiter objects, we must push them into a thread-safe queue. The new tasks will be pushed into the queue, and the thread pool will pop them one by one.

Since the queue may be empty, the pop operation can return nothing, represented by a std::nullopt.

template <typename T> class ThreadSafeQueue {
public:
  void push(T t) {
    m_queue.with_lock([&](auto &queue) { queue.push(std::move(t)); });
    m_queue.notifyOne();
  }

  std::optional<T> pop() {
    std::optional<T> x;
    m_queue.with_lock([&](auto &queue) {
      if (!queue.empty()) {
        x.emplace(std::move(queue.front()));
        queue.pop();
      }
    });
    return x;
  }

  void waitForAnElement(std::stop_token st) {
    auto hasElement = [](const auto &x) { return !x.empty(); };
    m_queue.wait(hasElement, st);
  }

private:
  ConditionVariable<std::queue<T>> m_queue;
};

Three operations are possible:

  1. Push: this operation enqueues a new task and notifies the condition variable
  2. Pop: this operation dequeues a task to be executed in the current thread
  3. Wait for an element: this operation makes the current thread idle until a new task arrives (notified by the push function)
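
To illustrate the queue with a simpler element type than Awaiter, here is a hypothetical producer/consumer sketch:

ThreadSafeQueue<int> queue;

std::jthread consumer{[&](std::stop_token st) {
  while (!st.stop_requested()) {
    queue.waitForAnElement(st);    // idle until push() notifies or stop is requested
    if (auto value = queue.pop())  // may be empty if another consumer was faster
      std::printf("got %d\n", *value);
  }
}};

queue.push(1);
queue.push(2);
// The consumer's destructor requests stop and joins.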

The thread

It is time to design our thread class.

The thread class will be built on top of the std::jthread class. It will also embed a thread-safe queue of Awaiters.

Thus, we can lay out the data members:

class Thread {
private:
  ThreadSafeQueue<Awaiter> m_awaiters;
  std::jthread m_thread;
};

First, we can imagine what operations our thread must perform:

  1. Adding tasks
  2. Scheduling operations (thanks to the co_await operator)
  3. Running a background infinite loop that pops tasks and executes them

public:
  Thread() {
    m_thread = std::jthread([this](std::stop_token st) { run(st); });
  }

  void addAwaiter(Awaiter &&awaiter) { m_awaiters.push(std::move(awaiter)); }

  auto id() { return m_thread.get_id(); }

  Awaitable operator co_await() { return {*this}; }

private:
  void run(std::stop_token st) {
    while (!st.stop_requested()) {
      m_awaiters.waitForAnElement(st);

      auto awaiter = m_awaiters.pop();

      if (awaiter)
        awaiter->resume();
    }
  }

There is nothing complicated: the run method just waits for an element, pops an awaiter, resumes it if it is valid, and that's all.

The co_await operator will just push the coroutine_handle to the thread thanks to the Awaitable object.

  struct Awaitable {
    Thread &thread;
    bool await_ready() { return false; }

    void await_suspend(std::coroutine_handle<> handle) {
      thread.addAwaiter({handle});
    }

    void await_resume() {}
  };

Using this thread

We schedule the operations thanks to the co_await operator.
Here is an example. The task type has a basic promise that never suspends, which means that the coroutine frame is destroyed at the end of the function.

struct task {
  struct promise_type {
    task get_return_object() { return {}; }
    std::suspend_never initial_suspend() noexcept { return {}; }
    std::suspend_never final_suspend() noexcept { return {}; }
    void return_void() {}
    void unhandled_exception() noexcept {}

    ~promise_type() {}
  };

  ~task() {}
};

std::atomic_int x = 0;

std::atomic_int done = 0;

task f(Thread &thread1, Thread &thread2) {
  co_await thread1;
  ++x;

  co_await thread2;
  ++x;
  ++done;
}

The code after the first co_await runs on the first thread, and the code after the second co_await runs on the second thread. Really simple.
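
One possible way to drive this example is a small main that simply spins until the coroutine signals completion through the done flag (a sketch; a real program would rather use a future, which is the topic of the third article):

int main() {
  Thread thread1;
  Thread thread2;

  f(thread1, thread2);

  // f runs entirely on the two threads, so we just wait for its signal.
  while (done == 0)
    std::this_thread::yield();

  // x was incremented once on each thread.
  assert(x == 2);
}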

Conclusion

We have finished the first article about creating a thread pool usable with coroutines. We introduced some utility classes and designed a concurrent queue. If you want to try it, you can find the full code here.

Thanks to Nir Friedman for helping me design the mutex and condition variable in a better way :).
