
Commit 09a35f5

1 parent 4a7bb88 commit 09a35f5

21 files changed (+2, -2311 lines)

Lines changed: 1 addition & 161 deletions
@@ -1,163 +1,3 @@

# Inference scripts for BLOOM

## BLOOM Inference solutions

Here are some benchmark results on JeanZay's 8x80GB A100 node w/ 512GB of CPU memory:

All benchmarks do greedy generation of 100-token outputs:

```
Generate args {'max_length': 100, 'do_sample': False}
```

The input prompt consists of just a few tokens.

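For orientation, here is a minimal sketch of the kind of generation call being timed. It is illustrative only: it uses the small `bigscience/bloom-560m` checkpoint so that it runs on a single GPU or even CPU, whereas the benchmarks above run the full `bigscience/bloom` through the scripts described below.

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative sketch, not the benchmark script: bloom-560m stands in for the
# full bigscience/bloom model so the example is runnable anywhere.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("A short prompt of just a few tokens", return_tensors="pt")
with torch.no_grad():
    # greedy generation of up to 100 tokens, matching the generate args above
    output = model.generate(**inputs, max_length=100, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
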
Throughput per token in msecs on 8x80GB GPUs:

| project \ bs      | 1      | 8     | 16    | 32    | 64   | 128  | 256  | 512  |
| :---------------- | :----- | :---- | :---- | :---- | :--- | :--- | :--- | :--- |
| accelerate bf16   | 230.38 | 31.78 | 17.84 | 10.89 | oom  |      |      |      |
| accelerate int8   | 286.56 | 40.92 | 22.65 | 13.27 | oom  |      |      |      |
| ds-inference fp16 | 44.02  | 5.70  | 3.01  | 1.68  | 1.00 | 0.69 | oom  |      |
| ds-inference int8 | 89.09  | 11.44 | 5.88  | 3.09  | 1.71 | 1.02 | 0.71 | oom  |
| ds-zero bf16      | 283    | 34.88 | oom   |       |      |      |      |      |

Note: since Deepspeed-ZeRO can process multiple generate streams in parallel, its throughput can be further divided by 8 or 16, depending on whether 8 or 16 GPUs were used during generation. And, of course, it means that it can process a batch size of 64 in the case of the 8x80GB A100 setup above.

Time from start to ready-to-generate, in secs (mainly loading and data preparation time):

| project                 | secs |
| :---------------------- | :--- |
| accelerate              | 121  |
| ds-inference shard-int8 | 61   |
| ds-inference shard-fp16 | 60   |
| ds-inference unsharded  | 662  |
| ds-zero                 | 462  |

Now let's look at the power of the quantized int8-based models provided by Deepspeed-Inference and BitsAndBytes: they require only half the GPU memory of bfloat16/float16 inference. (BLOOM has 176B parameters, so the weights alone take ~352GB in fp16/bf16 but only ~176GB in int8.)

Throughput per token in msecs on 4x80GB A100:

| project \ bs      | 1      | 8     | 16    | 32   | 64   | 128  |
| :---------------- | :----- | :---- | :---- | :--- | :--- | :--- |
| accelerate int8   | 284.15 | 40.14 | 21.97 | oom  |      |      |
| ds-inference int8 | 156.51 | 20.11 | 10.38 | 5.50 | 2.96 | oom  |

To get the benchmark results simply add `--benchmark` to any of the 3 scripts discussed below.

## Deepspeed-Inference

Deepspeed-Inference uses Tensor-Parallelism and efficient fused CUDA kernels:
https://www.deepspeed.ai/tutorials/inference-tutorial/

### Setup

```
pip install deepspeed>=0.7.3
```

### Run

1. The fastest approach is to use a TP-pre-sharded checkpoint that takes only ~1min to load, compared to ~10min for the non-presharded bloom checkpoint:

```
deepspeed --num_gpus 8 scripts/bloom-inference-scripts/bloom-ds-inference.py --name microsoft/bloom-deepspeed-inference-fp16
```

1a. If you want to run the original bloom checkpoint, which once loaded will run at the same throughput as the previous solution, but the loading will take 10-20min:

```
deepspeed --num_gpus 8 scripts/bloom-inference-scripts/bloom-ds-inference.py --name bigscience/bloom
```

2a. The 8-bit quantized version requires only half the GPU memory of the normal half-precision version:

```
deepspeed --num_gpus 8 scripts/bloom-inference-scripts/bloom-ds-inference.py --name microsoft/bloom-deepspeed-inference-int8 --dtype int8
```

Here we used `microsoft/bloom-deepspeed-inference-int8` and also told the script to run in `int8`.

And of course, just 4x80GB A100 GPUs are now sufficient:

```
deepspeed --num_gpus 4 scripts/bloom-inference-scripts/bloom-ds-inference.py --name microsoft/bloom-deepspeed-inference-int8 --dtype int8
```

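For orientation, the sketch below shows roughly what a Deepspeed-Inference script such as `bloom-ds-inference.py` does: load the model, wrap it with `deepspeed.init_inference`, then generate. It is a simplified, assumption-laden outline rather than the actual script, which additionally handles pre-sharded checkpoints and much faster, memory-friendlier loading.

```
# Simplified sketch (not the actual bloom-ds-inference.py).
# Launch with: deepspeed --num_gpus 8 this_sketch.py
import os

import deepspeed
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom"  # use a much smaller model to actually try this out
local_rank = int(os.getenv("LOCAL_RANK", "0"))
world_size = int(os.getenv("WORLD_SIZE", "1"))

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Naive from_pretrained of the full 176B checkpoint needs a lot of CPU RAM;
# the real script uses a much leaner loading path.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

model = deepspeed.init_inference(
    model,
    mp_size=world_size,               # tensor-parallel degree
    dtype=torch.half,                 # fp16 kernels; the int8 path uses dtype=torch.int8
    replace_with_kernel_inject=True,  # inject DeepSpeed's fused CUDA kernels
)
model = model.module  # unwrap the inference engine to get generate() back

inputs = tokenizer("DeepSpeed is", return_tensors="pt").to(f"cuda:{local_rank}")
output = model.generate(**inputs, max_length=100, do_sample=False)
if local_rank == 0:
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```
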
## HF Accelerate

HF Accelerate can use naive Pipeline Parallelism to load a huge model over multiple GPUs:
https://github.com/huggingface/accelerate

### Setup

```
pip install transformers>=4.21.3 accelerate>=0.12.0
```

### Run

```
python scripts/bloom-inference-scripts/bloom-accelerate-inference.py --name bigscience/bloom --batch_size 1 --benchmark 2>&1 | tee bloom-accelerate-inference_bs=1.txt
```

To activate the 8-bit quantized solution, first install `bitsandbytes`:

```
pip install bitsandbytes
```

and then add `--dtype int8` to the previous command line:

```
python scripts/bloom-inference-scripts/bloom-accelerate-inference.py --name bigscience/bloom --dtype int8 --batch_size 1 --benchmark 2>&1 | tee bloom-int8-accelerate-inference_bs=1.txt
```

If you have more than 4 GPUs you can tell the script to use only 4 with:

```
CUDA_VISIBLE_DEVICES=0,1,2,3 python scripts/bloom-inference-scripts/bloom-accelerate-inference.py --name bigscience/bloom --dtype int8 --batch_size 1 --benchmark 2>&1 | tee bloom-int8-accelerate-inference_bs=1.txt
```

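For orientation, here is a minimal sketch of the loading strategy the Accelerate solution relies on (an illustration, not the actual `bloom-accelerate-inference.py`): `device_map="auto"` spreads the layers over all visible GPUs, and `load_in_8bit=True` (with `bitsandbytes` installed) switches to the int8 path.

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# bf16 weights spread layer-by-layer across the available GPUs
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
# int8 variant (needs bitsandbytes), roughly what --dtype int8 enables:
# model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)

inputs = tokenizer("BLOOM is a 176B-parameter model", return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_length=100, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
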
## Deepspeed ZeRO-Inference

https://www.deepspeed.ai/tutorials/zero/

### Setup

```
pip install deepspeed
```

### Run

Note that the script currently runs the same inputs on all GPUs, but you can run a different stream on each GPU and get `n_gpu` times faster throughput. You can't do that with Deepspeed-Inference.

```
deepspeed --num_gpus 8 scripts/bloom-inference-scripts/bloom-ds-zero-inference.py --name bigscience/bloom --batch_size 1 --benchmark 2>&1 | tee bloom-ds-zero-inference_bs=1.txt
```

Please remember that with ZeRO the user can generate multiple unique streams at the same time, and thus the overall performance should be the reported secs/token divided by the number of participating GPUs: 8x to 16x faster depending on whether 8 or 16 GPUs were used!

You can also try the offloading solutions with just one small GPU, which will take a long time to run, but if you don't have 8 huge GPUs this is as good as it gets.

CPU-Offload (1x GPU):

```
deepspeed --num_gpus 1 scripts/bloom-inference-scripts/bloom-ds-zero-inference.py --name bigscience/bloom --batch_size 8 --cpu_offload --benchmark 2>&1 | tee bloom-ds-zero-inference-cpu_offload_bs=8.txt
```

NVMe-Offload (1x GPU):

```
deepspeed --num_gpus 1 scripts/bloom-inference-scripts/bloom-ds-zero-inference.py --name bigscience/bloom --batch_size 8 --nvme_offload_path=/path/to/nvme_offload --benchmark 2>&1 | tee bloom-ds-zero-inference-nvme_offload_bs=8.txt
```

Make sure to point `/path/to/nvme_offload` to somewhere you have ~400GB of free space on a fast NVMe drive.

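For orientation, here is a rough sketch of the ZeRO stage-3 setup such a script builds. The config values are illustrative assumptions; the real `bloom-ds-zero-inference.py` derives them from its command-line flags (e.g. `--cpu_offload`).

```
# Simplified ZeRO-3 inference sketch (not the actual script).
# Launch with: deepspeed --num_gpus 8 this_sketch.py
import os

import deepspeed
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.deepspeed import HfDeepSpeedConfig

model_name = "bigscience/bloom"
local_rank = int(os.getenv("LOCAL_RANK", "0"))
world_size = int(os.getenv("WORLD_SIZE", "1"))

ds_config = {
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        # roughly what --cpu_offload enables; drop this block to keep params on GPU
        "offload_param": {"device": "cpu", "pin_memory": True},
    },
    "train_micro_batch_size_per_gpu": 1,
    "train_batch_size": world_size,
}

# Must be created *before* from_pretrained so the weights are sharded while loading
dschf = HfDeepSpeedConfig(ds_config)  # keep this object alive

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

ds_engine = deepspeed.initialize(model=model, config_params=ds_config)[0]
ds_engine.module.eval()

inputs = tokenizer("BLOOM is", return_tensors="pt").to(f"cuda:{local_rank}")
output = ds_engine.module.generate(**inputs, max_length=100, do_sample=False)
if local_rank == 0:
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```
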
Moved to https://github.com/huggingface/transformers-bloom-inference/tree/main/bloom-inference-scripts

scripts/bloom-inference-scripts/bloom-accelerate-inference.py

Lines changed: 0 additions & 202 deletions
This file was deleted.

0 commit comments
