
[C/C++] Modern C++ Thread Pools: From Beginner to Production-Grade Implementation


Table of Contents

  • Modern C++ Thread Pools: From Beginner to Production-Grade Implementation
    • 🧠 What Is a Thread Pool?
    • 🧩 Why Use a Thread Pool?
    • 🔰 Part 1: Basic Thread Pool (Beginner)
      • 🔧 Minimal Working Code:
      • ✅ Usage:
    • 🧑‍🔬 Part 2: Improving It (Intermediate)
      • 🧵 Add Return Values with `std::future`
    • ⚙️ Part 3: Production-Grade Features (Expert)
      • ✅ Features to Add:
    • 🧵 Part 4: C++20/23 Style Thread Pool
    • 📚 Libraries You Should Know
    • 🧭 Summary

Modern C++ Thread Pools: From Beginner to Production-Grade Implementation

This article introduces thread pools in modern C++, starting from the core ideas and moving gradually toward production-quality implementations. The goal is to help you understand how thread pools work internally and how to write your own using C++17/20/23.


🧠 What Is a Thread Pool?

A thread pool is a collection of pre-spawned threads that wait for tasks to execute. Instead of creating a thread for every task (which is expensive), you reuse a fixed number of threads, each pulling tasks from a task queue.


🧩 Why Use a Thread Pool?

  • ✅ Avoid the overhead of frequent thread creation/destruction.
  • ✅ Reuse a fixed number of threads.
  • ✅ Efficient for high-throughput or I/O-bound systems.
  • ✅ Works well with producer-consumer or event-driven designs.

🔰 Part 1: Basic Thread Pool (Beginner)

Let's start with a very basic thread pool in C++, built from:

  • std::thread
  • std::mutex
  • std::condition_variable
  • std::function
  • std::queue

🔧 Minimal Working Code:

#include <iostream>
#include <thread>
#include <vector>
#include <queue>
#include <functional>
#include <mutex>
#include <condition_variable>
#include <atomic>
#include <chrono> // for std::chrono::seconds in the usage example below

class ThreadPool {
public:
    explicit ThreadPool(size_t num_threads);
    ~ThreadPool();

    void enqueue(std::function<void()> task);

private:
    std::vector<std::thread> workers;
    std::queue<std::function<void()>> tasks;
    std::mutex queue_mutex;
    std::condition_variable condition;
    std::atomic<bool> stop;
};

ThreadPool::ThreadPool(size_t num_threads) : stop(false) {
    for (size_t i = 0; i < num_threads; ++i) {
        workers.emplace_back([this]() {
            while (true) {
                std::function<void()> task;
                {
                    std::unique_lock<std::mutex> lock(this->queue_mutex);
                    // Sleep until there is work to do or the pool is shutting down.
                    this->condition.wait(lock, [this]() {
                        return this->stop || !this->tasks.empty();
                    });
                    // Exit only once the pool is stopping AND the queue is drained.
                    if (this->stop && this->tasks.empty())
                        return;
                    task = std::move(this->tasks.front());
                    this->tasks.pop();
                }
                task(); // run the task outside the lock
            }
        });
    }
}

void ThreadPool::enqueue(std::function<void()> task) {
    {
        std::lock_guard<std::mutex> lock(queue_mutex);
        tasks.push(std::move(task));
    }
    condition.notify_one();
}

ThreadPool::~ThreadPool() {
    {
        // Set the flag under the lock so a worker cannot evaluate the wait
        // predicate, miss the notification, and then block forever.
        std::lock_guard<std::mutex> lock(queue_mutex);
        stop = true;
    }
    condition.notify_all();
    for (std::thread& worker : workers)
        worker.join();
}

✅ Usage:

int main() {
    ThreadPool pool(4);

    for (int i = 0; i < 10; ++i) {
        pool.enqueue([i]() {
            std::cout << "Running task " << i << " on thread "
                      << std::this_thread::get_id() << "\n";
        });
    }

    // Give the workers a moment to print; the pool's destructor then
    // drains any remaining tasks and joins the threads.
    std::this_thread::sleep_for(std::chrono::seconds(1));
    return 0;
}

🧑‍🔬 Part 2: Improving It (Intermediate)

🧵 Add Return Values with std::future

Change enqueue() to return a std::future<T> for each task.

template<class F, class... Args>
auto enqueue(F&& f, Args&&... args)
    -> std::future<std::invoke_result_t<F, Args...>>
{
    using return_type = std::invoke_result_t<F, Args...>;

    // Wrap the callable in a packaged_task so we can hand out a future.
    // The shared_ptr is needed because std::function requires a copyable target.
    auto task = std::make_shared<std::packaged_task<return_type()>>(
        std::bind(std::forward<F>(f), std::forward<Args>(args)...));

    std::future<return_type> res = task->get_future();
    {
        std::lock_guard<std::mutex> lock(queue_mutex);
        tasks.emplace([task]() { (*task)(); });
    }
    condition.notify_one();
    return res;
}

Now you can write:

auto future = pool.enqueue([]() {
    return 42;
});
std::cout << "Result: " << future.get() << "\n";

⚙️ Part 3: Production-Grade Features (Expert)

✅ Features to Add:

  • Dynamic thread resizing: increase or decrease the worker-thread count at runtime.
  • Task prioritization: use std::priority_queue instead of a FIFO queue.
  • Shutdown options: graceful (drain remaining tasks) vs immediate (see the sketch after this list).
  • Exception handling: catch exceptions thrown inside tasks so a worker never dies.
  • Thread affinity / naming: set thread names or pin workers to specific cores.
  • Work stealing: per-thread queues with stealing, for maximum throughput.
  • Thread-local storage: use thread_local for per-worker caches.
  • Integration with coroutines (C++20): schedule coroutines on the pool.
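
As an illustration of the shutdown options, here is a rough sketch of how a configurable shutdown might look on top of the Part 1 pool. The ShutdownMode enum and shutdown() member are hypothetical additions, not part of the code above:

enum class ShutdownMode { Graceful, Immediate };

void ThreadPool::shutdown(ShutdownMode mode) {
    {
        std::lock_guard<std::mutex> lock(queue_mutex);
        stop = true;
        if (mode == ShutdownMode::Immediate) {
            // Drop every task that has not started yet.
            std::queue<std::function<void()>> empty;
            tasks.swap(empty);
        }
        // Graceful: leave the queue untouched; workers drain it before exiting.
    }
    condition.notify_all();
    for (std::thread& worker : workers)
        worker.join();
}

In a real pool the destructor would need to cooperate with this method, for example by skipping the joins if shutdown() has already been called.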

🧵 Part 4: C++20/23 Style Thread Pool

For advanced users, consider using:

  • std::jthread (C++20)
  • std::stop_token
  • std::barrier or std::latch
  • Coroutines (co_await, std::suspend_always)
  • std::execution schedulers (the P2300 senders/receivers proposal)

An example of C++20 cooperative cancellation with std::stop_token:

void worker(std::stop_token stop_token) {
    while (!stop_token.stop_requested()) {
        // ... do a unit of work, checking the token between iterations ...
    }
}

std::jthread t(worker); // can be stopped cleanly; the destructor requests stop and joins
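
std::latch from the list above pairs nicely with a thread pool when you need to block until a batch of submitted jobs has finished. A minimal sketch, assuming the Part 1 ThreadPool and a C++20 compiler (run_batch and kJobs are illustrative names):

#include <latch>

void run_batch(ThreadPool& pool) {
    constexpr int kJobs = 8;
    std::latch done(kJobs); // counts down once per finished job

    for (int i = 0; i < kJobs; ++i) {
        pool.enqueue([&done, i]() {
            // ... do the work for job i ...
            done.count_down();
        });
    }

    done.wait(); // returns once all kJobs tasks have counted down
}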

📚 Libraries You Should Know

If you prefer using proven libraries:

  • CTPL: easy-to-use thread pool.
  • BS::thread_pool: header-only and fast.
  • Boost.Asio: heavy but feature-rich.
  • libunifex: advanced async patterns.
  • folly: Facebook's production async primitives.

🧭 Summary

  • Beginner: std::thread, mutex, condition variable, basic task queue.
  • Intermediate: futures, exception handling, RAII, std::function, shared task management.
  • Expert: std::jthread, coroutines, scheduling policies, custom allocators, work stealing.
