We left a comment in the `for` loop in Listing 21-14 regarding the creation of
threads. Here, we’ll look at how we actually create threads. The standard
We’ll implement this behavior by introducing a new data structure between the
`ThreadPool` and the threads that will manage this new behavior. We’ll call
this data structure *Worker*, which is a common term in pooling
implementations. The `Worker` picks up code that needs to be run and runs the
code in its thread.
Think of people working in the kitchen at a restaurant: the workers wait until
orders come in from customers, and then they’re responsible for taking those
orders and filling them.

Instead of storing a vector of `JoinHandle<()>` instances in the thread pool,
we’ll store instances of the `Worker` struct. Each `Worker` will store a single
set up in this way:

   `Worker` instance that holds the `id` and a thread spawned with an empty
   closure.
1. In `ThreadPool::new`, use the `for` loop counter to generate an `id`, create
   a new `Worker` with that `id`, and store the `Worker` in the vector.

If you’re up for a challenge, try implementing these changes on your own before
looking at the code in Listing 21-15.
src/lib.rs

```
use std::{
    sync::{Arc, Mutex, mpsc},
    thread,
};
// --snip--
```
and 21-20 would be different if we were using futures instead of a closure for
the work to be done. What types would change? How would the method signatures be
different, if at all? What parts of the code would stay the same?

After learning about the `while let` loop in Chapter 17 and Chapter 19, you
might be wondering why we didn’t write the `Worker` thread code as shown in
Listing 21-21.
src/lib.rs
longer than intended if we aren’t mindful of the lifetime of the
`MutexGuard<T>`.

The code in Listing 21-20 that uses `let job =
receiver.lock().unwrap().recv().unwrap();` works because with `let`, any
temporary values used in the expression on the right-hand side of the equal
sign are immediately dropped when the `let` statement ends. However, `while
let` (and `if let` and `match`) does not drop temporary values until the end of
the associated block. In Listing 21-21, the lock remains held for the duration
of the call to `job()`, meaning other `Worker` instances cannot receive jobs.
The code in Listing 21-20 is responding to requests asynchronously through the
use of a thread pool, as we intended. We get some warnings about the `workers`,
`id`, and `thread` fields that we’re not using in a direct way that reminds us
we’re not cleaning up anything. When we use the less elegant
<kbd>ctrl</kbd>-<kbd>C</kbd> method to halt the main thread, all other threads
are stopped immediately as well, even if they’re in the middle of serving a
request.
Listing 21-22: Joining each thread when the thread pool goes out of scope

First we loop through each of the thread pool `workers`. We use `&mut` for this
because `self` is a mutable reference, and we also need to be able to mutate
`worker`. For each `worker`, we print a message saying that this particular
`Worker` instance is shutting down, and then we call `join` on that `Worker`
instance’s thread. If the call to `join` fails, we use `unwrap` to make Rust
panic and go into an ungraceful shutdown.
wouldn’t have a thread to run.

However, the *only* time this would come up would be when dropping the `Worker`.
In exchange, we’d have to deal with an `Option<thread::JoinHandle<()>>` anywhere
we accessed `worker.thread`. Idiomatic Rust uses `Option` quite a bit, but when
you find yourself wrapping something you know will always be present in an
`Option` as a workaround like this, it’s a good idea to look for alternative
approaches to make your code cleaner and less error-prone.
In this case, a better alternative exists: the `Vec::drain` method. It accepts
a range parameter to specify which items to remove from the vector and returns
an iterator of those items. Passing the `..` range syntax will remove *every*
value from the vector.
So we need to update the `ThreadPool` `drop` implementation like this:
This resolves the compiler error and does not require any other changes to our
code. Note that, because `drop` can be called when panicking, the `unwrap`
could also panic and cause a double panic, which immediately crashes the
program and ends any cleanup in progress. This is fine for an example program,
but isn’t recommended for production code.
### Signaling to the Threads to Stop Listening for Jobs

With all the changes we’ve made, our code compiles without any warnings.
However, the bad news is that this code doesn’t function the way we want it to
yet. The key is the logic in the closures run by the threads of the `Worker`
instances: at the moment, we call `join`, but that won’t shut down the threads,
because they `loop` forever looking for jobs. If we try to drop our
`ThreadPool` with our current implementation of `drop`, the main thread will
block forever, waiting for the first thread to finish.
To fix this problem, we’ll need a change in the `ThreadPool` `drop`
implementation and then a change in the `Worker` loop.
```
}
```

Listing 21-23: Explicitly dropping `sender` before joining the `Worker` threads
Dropping `sender` closes the channel, which indicates no more messages will be
sent. When that happens, all the calls to `recv` that the `Worker` instances do
wait for each `Worker` thread to finish.

Notice one interesting aspect of this particular execution: the `ThreadPool`
dropped the `sender`, and before any `Worker` received an error, we tried to
join `Worker` 0. `Worker` 0 had not yet gotten an error from `recv`, so the main
thread blocked, waiting for `Worker` 0 to finish. In the meantime, `Worker` 3
received a job and then all threads received an error. When `Worker` 0 finished,
the main thread waited for the rest of the `Worker` instances to finish. At that
point, they had all exited their loops and stopped.