How to implement a thread pool
Thread pooling is a thread usage pattern. Too many threads introduce scheduling overhead, which in turn hurts cache locality and overall performance. Instead, a thread pool maintains a set of threads waiting for a supervisor to assign tasks that can be executed concurrently. This avoids the cost of creating and destroying threads for short-lived tasks. A thread pool not only ensures full utilization of the CPU cores, but also prevents over-scheduling. The appropriate number of threads depends on the available concurrent processors, processor cores, memory, network sockets, and so on. For computationally intensive tasks, for example, the number of threads is typically taken as the number of CPUs plus 2; more threads only add thread-switching overhead.
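For instance, a small sketch of picking a pool size along those lines, using the standard library's `std::thread::available_parallelism`:

```rust
use std::thread;

fn main() {
    // Number of hardware threads available to this process,
    // falling back to 1 if it cannot be determined.
    let cores = thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1);

    // The "CPUs + 2" heuristic from the text for CPU-bound work.
    let pool_size = cores + 2;
    println!("using {pool_size} threads");
}
```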
How should the thread pool `Pool` be defined? First of all, the maximum number of threads should clearly be a property of the thread pool, and the specified number of threads should be created when the `Pool` is constructed.
Thread pool Pool
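A minimal first sketch of such a pool might look like this (the names `ThreadPool` and `threads` are illustrative assumptions; the threads themselves are spawned once we know what each of them should run):

```rust
use std::thread::JoinHandle;

// First sketch: the pool owns the handles of its worker threads.
pub struct ThreadPool {
    threads: Vec<JoinHandle<()>>,
}

impl ThreadPool {
    pub fn new(size: usize) -> ThreadPool {
        assert!(size > 0, "a pool needs at least one thread");
        // Reserve space; the threads are spawned later, once we
        // know what each of them should run.
        let threads = Vec::with_capacity(size);
        ThreadPool { threads }
    }
}
```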
Tasks are submitted with an `execute` method. `F: FnOnce() + Send + 'static` is the trait bound that `thread::spawn` requires of the closure it runs, which means `F` is a closure that can be handed to a thread and executed there.
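As a sketch, the signature simply mirrors the bound on `thread::spawn`; the body is filled in once the channel is introduced:

```rust
impl ThreadPool {
    // Same bound thread::spawn places on its closure: callable
    // once, sendable to another thread, borrowing nothing
    // short-lived.
    pub fn execute<F>(&self, f: F)
    where
        F: FnOnce() + Send + 'static,
    {
        // To be filled in: hand `f` to a worker via a channel.
        let _ = f;
    }
}
```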
Another natural idea is to add an array of threads to the `Pool`, for example `Vec<Thread>`, to execute the tasks. Each thread is a long-lived entity that keeps receiving tasks and executing them; you can think of each one as a Worker running in its own thread, continuously taking tasks and performing them.
How do we send tasks to the workers? mpsc (multi-producer, single-consumer) channels meet our needs: `let (tx, rx) = mpsc::channel()` yields a sender/receiver pair, and the `Pool` sends tasks through the channel for the workers to consume.
The receiving end of the channel needs to be shared safely between multiple threads, so it is wrapped in `Arc<Mutex<T>>`, i.e., a lock that resolves concurrent access.
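A small self-contained sketch of this sharing pattern, using `String` as a stand-in for the real message type:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

fn main() {
    // The pool keeps the sender; the single receiver is shared by
    // all workers behind Arc<Mutex<...>>.
    let (tx, rx) = mpsc::channel::<String>();
    let rx = Arc::new(Mutex::new(rx));

    // Each worker thread gets its own clone of the Arc.
    let worker_rx = Arc::clone(&rx);
    let handle = thread::spawn(move || {
        // Lock, receive, and drop the guard at the end of the
        // statement, releasing the lock for other workers.
        let msg = worker_rx.lock().unwrap().recv().unwrap();
        println!("worker got: {msg}");
    });

    tx.send(String::from("a task")).unwrap();
    handle.join().unwrap();
}
```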
The full definition of Pool
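A sketch reconstructed from the description above (`Worker` and `Message` are defined in the next sections; the variant name `NewJob` is my own placeholder):

```rust
use std::sync::{mpsc, Arc, Mutex};

pub struct ThreadPool {
    workers: Vec<Worker>,
    sender: mpsc::Sender<Message>,
}

impl ThreadPool {
    pub fn new(size: usize) -> ThreadPool {
        assert!(size > 0);
        let (sender, receiver) = mpsc::channel();
        // Share the single receiver among all workers.
        let receiver = Arc::new(Mutex::new(receiver));
        let mut workers = Vec::with_capacity(size);
        for id in 0..size {
            workers.push(Worker::new(id, Arc::clone(&receiver)));
        }
        ThreadPool { workers, sender }
    }

    pub fn execute<F>(&self, f: F)
    where
        F: FnOnce() + Send + 'static,
    {
        // Box the closure and send it down the channel.
        self.sender.send(Message::NewJob(Box::new(f))).unwrap();
    }
}
```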
It’s time to define the Message we want to send to the Worker.
Define the following enumeration values
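A sketch of the enum, assuming a `NewJob` variant to carry the task (`ByeBye` is the name used below):

```rust
// A Job is a boxed closure satisfying the same bounds as execute's F.
type Job = Box<dyn FnOnce() + Send + 'static>;

enum Message {
    NewJob(Job), // a task for the worker to run
    ByeBye,      // tell the worker to exit its loop
}
```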
`Job` is a boxed closure sent to the Worker for execution, and `ByeBye` notifies the Worker that it can stop processing and exit its thread.
Now only the concrete implementations of `Worker` and `Pool` remain.
Worker’s implementation.
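A sketch reconstructed from the surrounding discussion (the handle field is named `t` to match the `t.join` mentioned below):

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

struct Worker {
    id: usize,
    t: Option<thread::JoinHandle<()>>,
}

impl Worker {
    fn new(id: usize, receiver: Arc<Mutex<mpsc::Receiver<Message>>>) -> Worker {
        let t = thread::spawn(move || loop {
            // Acquire the lock, take one message, release the lock.
            let message = receiver.lock().unwrap().recv().unwrap();
            match message {
                Message::NewJob(job) => {
                    println!("worker {id} got a job");
                    job();
                }
                Message::ByeBye => {
                    println!("worker {id} exiting");
                    break;
                }
            }
        });
        Worker { id, t: Some(t) }
    }
}
```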
`let message = receiver.lock().unwrap().recv().unwrap();` first acquires the lock, then reads a message from the receiver; when the `let` statement ends, the temporary `MutexGuard` goes out of scope and Rust automatically releases the lock.
But if written as
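(sketching the problematic variant, which would replace the loop body in `Worker::new` above)

```rust
// The guard returned by lock() is a temporary of the `while let`
// expression, so it lives until the END of the loop body: the lock
// stays held while job() runs, blocking all other workers.
while let Ok(message) = receiver.lock().unwrap().recv() {
    match message {
        Message::NewJob(job) => job(),
        Message::ByeBye => break,
    }
}
```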
then the entire block after the `while let` is one scope, and the lock is released only when that scope ends, which is far longer than in the `let message` version above. Rust's `Mutex` has no explicit `unlock` method; unlocking is managed by the lifetime of the `MutexGuard`.
We implement the `Drop` trait for `Pool` so that when the `Pool` is destroyed, the worker threads are automatically shut down.
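A sketch of the `Drop` implementation, reconstructed from the discussion that follows:

```rust
impl Drop for ThreadPool {
    fn drop(&mut self) {
        // First loop: tell every worker to stop...
        for _ in &self.workers {
            self.sender.send(Message::ByeBye).unwrap();
        }
        // ...second loop: then wait for each of them to exit.
        for worker in &mut self.workers {
            if let Some(t) = worker.t.take() {
                t.join().unwrap();
            }
        }
    }
}
```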
Why does the `drop` method use two separate loops instead of doing both things in one loop?
A single loop hides a trap that can cause deadlocks. Suppose there are two workers, and in one loop we send the termination message to the channel and then immediately call `join` on the same worker. We expect the first worker to receive the message and finish execution, but the messages go into a shared channel: if the second worker picks up the message while the first one does not, the `join` on the first worker blocks forever, causing a deadlock.
Notice that the worker's thread handle is wrapped in an `Option`, and there are two points to note here:
- `t.join` needs to take ownership of `t`.
- in our case, `self.workers` can only be iterated over by reference in the `for` loop.
So we let `Worker` hold an `Option<JoinHandle<()>>`, and then move the value out of the `Some` variant by calling the `take` method on the `Option`, leaving `None` in its place. In other words, a running worker holds the `Some` variant; when we clean up a worker, we replace the `Some` with `None`, leaving the worker without a thread to run.
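A tiny standalone illustration of the `take` pattern:

```rust
fn main() {
    let mut slot: Option<String> = Some(String::from("handle"));

    // take() moves the value out and leaves None behind, giving us
    // ownership without moving out of the enclosing struct or Vec.
    if let Some(s) = slot.take() {
        println!("took ownership of: {s}");
    }
    assert!(slot.is_none());
}
```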
Summary of key points
- `Mutex` relies on the guard's lifetime for lock release, so pay attention to whether you are holding a lock longer than intended.
- `Vec<Option<T>>` handles the scenario where ownership of `T` must be taken out of the collection in some cases.
Full Code
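A complete sketch assembling the pieces above (names such as `ThreadPool` and `NewJob` carry over from the earlier sketches):

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// A boxed closure that a worker can run once.
type Job = Box<dyn FnOnce() + Send + 'static>;

enum Message {
    NewJob(Job),
    ByeBye,
}

struct Worker {
    id: usize,
    t: Option<thread::JoinHandle<()>>,
}

impl Worker {
    fn new(id: usize, receiver: Arc<Mutex<mpsc::Receiver<Message>>>) -> Worker {
        let t = thread::spawn(move || loop {
            // Lock is released as soon as this statement ends.
            let message = receiver.lock().unwrap().recv().unwrap();
            match message {
                Message::NewJob(job) => {
                    println!("worker {id} got a job");
                    job();
                }
                Message::ByeBye => break,
            }
        });
        Worker { id, t: Some(t) }
    }
}

pub struct ThreadPool {
    workers: Vec<Worker>,
    sender: mpsc::Sender<Message>,
}

impl ThreadPool {
    pub fn new(size: usize) -> ThreadPool {
        assert!(size > 0);
        let (sender, receiver) = mpsc::channel();
        let receiver = Arc::new(Mutex::new(receiver));
        let mut workers = Vec::with_capacity(size);
        for id in 0..size {
            workers.push(Worker::new(id, Arc::clone(&receiver)));
        }
        ThreadPool { workers, sender }
    }

    pub fn execute<F>(&self, f: F)
    where
        F: FnOnce() + Send + 'static,
    {
        self.sender.send(Message::NewJob(Box::new(f))).unwrap();
    }
}

impl Drop for ThreadPool {
    fn drop(&mut self) {
        // Signal every worker first, then join them; see the
        // two-loop discussion above.
        for _ in &self.workers {
            self.sender.send(Message::ByeBye).unwrap();
        }
        for worker in &mut self.workers {
            if let Some(t) = worker.t.take() {
                t.join().unwrap();
            }
        }
    }
}

fn main() {
    let pool = ThreadPool::new(4);
    for i in 0..8 {
        pool.execute(move || println!("task {i} running"));
    }
    // Dropping the pool sends ByeBye to every worker and joins them.
}
```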