
Commit 2371017

r2.7 doc update (#3656)
* r2.7 doc update
* more for sq removal
* sq removal and docstring correction
* cont. sq removal & correct llm table format
* model ID correction
* update html LLM table
* remove sq UTs
* remove unused pkgs
1 parent a78436a commit 2371017

File tree: 25 files changed (+310, -1341 lines)

README.md

Lines changed: 55 additions & 46 deletions
@@ -5,7 +5,7 @@ Intel® Extension for PyTorch\*
 
  </div>
 
- **CPU** [💻main branch](https://github.com/intel/intel-extension-for-pytorch/tree/main)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[🌱Quick Start](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/getting_started.html)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[📖Documentations](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[🏃Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=cpu&version=v2.6.0%2Bcpu)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[💻LLM Example](https://github.com/intel/intel-extension-for-pytorch/tree/main/examples/cpu/llm) <br>
+ **CPU** [💻main branch](https://github.com/intel/intel-extension-for-pytorch/tree/main)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[🌱Quick Start](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/getting_started.html)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[📖Documentations](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[🏃Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=cpu&version=v2.7.0%2Bcpu)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[💻LLM Example](https://github.com/intel/intel-extension-for-pytorch/tree/release/2.7/examples/cpu/llm) <br>
  **GPU** [💻main branch](https://github.com/intel/intel-extension-for-pytorch/tree/xpu-main)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[🌱Quick Start](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/getting_started.html)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[📖Documentations](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[🏃Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[💻LLM Example](https://github.com/intel/intel-extension-for-pytorch/tree/xpu-main/examples/gpu/llm)<br>
 
  Intel® Extension for PyTorch\* extends PyTorch\* with up-to-date feature optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512) Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel X<sup>e</sup> Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs through the PyTorch* xpu device.
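As a concrete illustration of the usage the paragraph above describes, here is a minimal sketch (not part of the commit; it assumes the public `ipex.optimize` API and, for the GPU path, an XPU-enabled build exposing `torch.xpu`):

```python
import torch
import intel_extension_for_pytorch as ipex

# Toy model; any eval-mode torch.nn.Module follows the same pattern.
model = torch.nn.Linear(4096, 4096).eval()

if hasattr(torch, "xpu") and torch.xpu.is_available():
    # Intel discrete GPU: run through the PyTorch "xpu" device.
    model = model.to("xpu")
    x = torch.randn(8, 4096, device="xpu")
else:
    # Intel CPU: let IPEX apply its operator/graph optimizations
    # (pass dtype=torch.bfloat16 under autocast to engage AMX/VNNI paths).
    model = ipex.optimize(model)
    x = torch.randn(8, 4096)

with torch.no_grad():
    print(model(x).shape)
```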
@@ -16,51 +16,60 @@ In the current technological landscape, Generative AI (GenAI) workloads and mode
 
  ### Optimized Model List
 
- | MODEL FAMILY | MODEL NAME (Huggingface hub) | FP32 | BF16 | Static quantization INT8 | Weight only quantization INT8 | Weight only quantization INT4 |
- |:---:|:---:|:---:|:---:|:---:|:---:|:---:|
- |LLAMA| meta-llama/Llama-2-7b-hf ||||||
- |LLAMA| meta-llama/Llama-2-13b-hf ||||||
- |LLAMA| meta-llama/Llama-2-70b-hf ||||||
- |LLAMA| meta-llama/Meta-Llama-3-8B ||||||
- |LLAMA| meta-llama/Meta-Llama-3-70B ||||||
- |LLAMA| meta-llama/Meta-Llama-3.1-8B-Instruct ||||||
- |LLAMA| meta-llama/Llama-3.2-3B-Instruct ||||||
- |LLAMA| meta-llama/Llama-3.2-11B-Vision-Instruct ||| |||
- |GPT-J| EleutherAI/gpt-j-6b ||||||
- |GPT-NEOX| EleutherAI/gpt-neox-20b ||||||
- |DOLLY| databricks/dolly-v2-12b ||||||
- |FALCON| tiiuae/falcon-7b ||||||
- |FALCON| tiiuae/falcon-11b ||||||
- |FALCON| tiiuae/falcon-40b ||||||
- |OPT| facebook/opt-30b ||||||
- |OPT| facebook/opt-1.3b ||||||
- |Bloom| bigscience/bloom-1b7 ||||||
- |CodeGen| Salesforce/codegen-2B-multi ||||||
- |Baichuan| baichuan-inc/Baichuan2-7B-Chat ||||||
- |Baichuan| baichuan-inc/Baichuan2-13B-Chat ||||||
- |Baichuan| baichuan-inc/Baichuan-13B-Chat ||||||
- |ChatGLM| THUDM/chatglm3-6b ||||||
- |ChatGLM| THUDM/chatglm2-6b ||||||
- |GPTBigCode| bigcode/starcoder ||||||
- |T5| google/flan-t5-xl ||||||
- |MPT| mosaicml/mpt-7b ||||||
- |Mistral| mistralai/Mistral-7B-v0.1 ||||||
- |Mixtral| mistralai/Mixtral-8x7B-v0.1 ||| |||
- |Stablelm| stabilityai/stablelm-2-1_6b ||||||
- |Qwen| Qwen/Qwen-7B-Chat ||||||
- |Qwen| Qwen/Qwen2-7B ||||||
- |LLaVA| liuhaotian/llava-v1.5-7b ||| |||
- |GIT| microsoft/git-base ||| |||
- |Yuan| IEITYuan/Yuan2-102B-hf ||| || |
- |Phi| microsoft/phi-2 ||||||
- |Phi| microsoft/Phi-3-mini-4k-instruct ||||||
- |Phi| microsoft/Phi-3-mini-128k-instruct ||||||
- |Phi| microsoft/Phi-3-medium-4k-instruct ||||||
- |Phi| microsoft/Phi-3-medium-128k-instruct ||||||
- |Whisper| openai/whisper-large-v2 ||||||
- |Maira| microsoft/maira-2 ||| |||
- |Jamba| ai21labs/Jamba-v0.1 ||| |||
- |DeepSeek| deepseek-ai/DeepSeek-V2.5-1210 ||| |||
+ We support a long list of LLMs, including the most notable open-source models
+ such as the Llama, Qwen, and Phi-3/Phi-4 series,
+ and the high-quality reasoning model DeepSeek-R1.
+
+ | MODEL FAMILY | MODEL NAME (Huggingface hub) | FP32 | BF16 | Weight only quantization INT8 | Weight only quantization INT4 |
+ |:---:|:---:|:---:|:---:|:---:|:---:|
+ |LLAMA| meta-llama/Llama-2-7b-hf |||||
+ |LLAMA| meta-llama/Llama-2-13b-hf |||||
+ |LLAMA| meta-llama/Llama-2-70b-hf |||||
+ |LLAMA| meta-llama/Meta-Llama-3-8B |||||
+ |LLAMA| meta-llama/Meta-Llama-3-70B |||||
+ |LLAMA| meta-llama/Meta-Llama-3.1-8B-Instruct |||||
+ |LLAMA| meta-llama/Llama-3.2-3B-Instruct |||||
+ |LLAMA| meta-llama/Llama-3.2-11B-Vision-Instruct |||||
+ |GPT-J| EleutherAI/gpt-j-6b |||||
+ |GPT-NEOX| EleutherAI/gpt-neox-20b |||||
+ |DOLLY| databricks/dolly-v2-12b |||||
+ |FALCON| tiiuae/falcon-7b |||||
+ |FALCON| tiiuae/falcon-11b |||||
+ |FALCON| tiiuae/falcon-40b |||||
+ |FALCON| tiiuae/Falcon3-7B-Instruct |||||
+ |OPT| facebook/opt-30b |||||
+ |OPT| facebook/opt-1.3b |||||
+ |Bloom| bigscience/bloom-1b7 |||||
+ |CodeGen| Salesforce/codegen-2B-multi |||||
+ |Baichuan| baichuan-inc/Baichuan2-7B-Chat |||||
+ |Baichuan| baichuan-inc/Baichuan2-13B-Chat |||||
+ |Baichuan| baichuan-inc/Baichuan-13B-Chat |||||
+ |ChatGLM| THUDM/chatglm3-6b |||||
+ |ChatGLM| THUDM/chatglm2-6b |||||
+ |GPTBigCode| bigcode/starcoder |||||
+ |T5| google/flan-t5-xl |||||
+ |MPT| mosaicml/mpt-7b |||||
+ |Mistral| mistralai/Mistral-7B-v0.1 |||||
+ |Mixtral| mistralai/Mixtral-8x7B-v0.1 |||||
+ |Stablelm| stabilityai/stablelm-2-1_6b |||||
+ |Qwen| Qwen/Qwen-7B-Chat |||||
+ |Qwen| Qwen/Qwen2-7B |||||
+ |Qwen| Qwen/Qwen2.5-7B-Instruct |||||
+ |LLaVA| liuhaotian/llava-v1.5-7b |||||
+ |GIT| microsoft/git-base |||||
+ |Yuan| IEITYuan/Yuan2-102B-hf |||| |
+ |Phi| microsoft/phi-2 |||||
+ |Phi| microsoft/Phi-3-mini-4k-instruct |||||
+ |Phi| microsoft/Phi-3-mini-128k-instruct |||||
+ |Phi| microsoft/Phi-3-medium-4k-instruct |||||
+ |Phi| microsoft/Phi-3-medium-128k-instruct |||||
+ |Phi| microsoft/Phi-4-mini-instruct |||| |
+ |Phi| microsoft/Phi-4-multimodal-instruct |||| |
+ |Whisper| openai/whisper-large-v2 |||||
+ |Maira| microsoft/maira-2 |||||
+ |Jamba| ai21labs/Jamba-v0.1 |||||
+ |DeepSeek| deepseek-ai/DeepSeek-V2.5-1210 |||||
+ |DeepSeek| meituan/DeepSeek-R1-Channel-INT8 | | || |
 
  *Note*: The above verified models (including other models in the same model family, like "codellama/CodeLlama-7b-hf" from the LLAMA family) are well supported with all optimizations like indirect access KV cache, fused ROPE, and customized linear kernels.
  Work is in progress to better support the models in the table with various data types. In addition, more models will be optimized in the future.
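The note above mentions indirect-access KV cache, fused RoPE, and customized linear kernels; these are typically applied through the LLM-specific frontend. A rough sketch of that flow, assuming the `ipex.llm.optimize` API and the Hugging Face `transformers` package (the maintained recipes live under examples/cpu/llm):

```python
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any model family from the table above; Llama-2-7b is used here as a placeholder.
model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).eval()

# Apply the LLM-specific optimizations (indirect-access KV cache, fused RoPE,
# customized linear kernels) in BF16.
model = ipex.llm.optimize(model, dtype=torch.bfloat16, inplace=True)

inputs = tokenizer("What is Intel Extension for PyTorch?", return_tensors="pt")
with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```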

docker/Dockerfile.prebuilt

Lines changed: 5 additions & 5 deletions
@@ -35,11 +35,11 @@ RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 100
 
  WORKDIR /root
 
- ARG IPEX_VERSION=2.6.0
- ARG TORCHCCL_VERSION=2.6.0
- ARG PYTORCH_VERSION=2.6.0
- ARG TORCHAUDIO_VERSION=2.6.0
- ARG TORCHVISION_VERSION=0.21.0
+ ARG IPEX_VERSION=2.7.0
+ ARG TORCHCCL_VERSION=2.7.0
+ ARG PYTORCH_VERSION=2.7.0
+ ARG TORCHAUDIO_VERSION=2.7.0
+ ARG TORCHVISION_VERSION=0.22.0
  RUN python -m venv venv && \
      . ./venv/bin/activate && \
      python -m pip --no-cache-dir install --upgrade \

docker/README.md

Lines changed: 4 additions & 3 deletions
@@ -10,24 +10,25 @@
 
  ```console
  $ cd $DOCKERFILE_DIR
- $ DOCKER_BUILDKIT=1 docker build -f Dockerfile.prebuilt -t intel-extension-for-pytorch:main .
+ $ DOCKER_BUILDKIT=1 docker build -f Dockerfile.prebuilt -t intel-extension-for-pytorch:2.7.0 .
  ```
 
  Run the following commands to build a `conda`-based container with Intel® Extension for PyTorch\* compiled from source:
 
  ```console
  $ git clone https://github.com/intel/intel-extension-for-pytorch.git
  $ cd intel-extension-for-pytorch
+ $ git checkout v2.7.0+cpu
  $ git submodule sync
  $ git submodule update --init --recursive
- $ DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile.compile -t intel-extension-for-pytorch:main .
+ $ DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile.compile -t intel-extension-for-pytorch:2.7.0 .
  ```
 
  * Sanity Test
 
  When the docker image is built, run the command below to launch into a container:
  ```console
- $ docker run --rm -it intel-extension-for-pytorch:main bash
+ $ docker run --rm -it intel-extension-for-pytorch:2.7.0 bash
  ```
 
  Then run the command below inside the container to verify correct installation.
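The verification command itself sits outside the hunk shown above; a typical check (an assumption, not quoted from the repository) simply imports the stack inside the container and prints the installed versions:

```python
# Expected inside the intel-extension-for-pytorch:2.7.0 container:
# both imports succeed and report matching 2.7.0 versions.
import torch
import intel_extension_for_pytorch as ipex

print(torch.__version__)
print(ipex.__version__)
```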

docs/_static/htmls/tbl_deepspeed.html

Lines changed: 32 additions & 14 deletions
@@ -87,95 +87,107 @@
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  </tr>
  <tr class="row-odd">
+ <td><p>FALCON</p></td>
+ <td><p>tiiuae/Falcon3-7B-Instruct</p></td>
+ <td><p style="text-align: center; vertical-align: middle;"></p></td>
+ <td><p style="text-align: center; vertical-align: middle;"></p></td>
+ </tr>
+ <tr class="row-even">
  <td><p>OPT</p></td>
  <td><p>facebook/opt-30b</p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  </tr>
- <tr class="row-even">
+ <tr class="row-odd">
  <td><p>OPT</p></td>
  <td><p>facebook/opt-1.3b</p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  </tr>
- <tr class="row-odd">
+ <tr class="row-even">
  <td><p>Bloom</p></td>
  <td><p>bigscience/bloom-1b7</p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  </tr>
- <tr class="row-even">
+ <tr class="row-odd">
  <td><p>CodeGen</p></td>
  <td><p>Salesforce/codegen-2B-multi</p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  </tr>
- <tr class="row-odd">
+ <tr class="row-even">
  <td><p>Baichuan</p></td>
  <td><p>baichuan-inc/Baichuan2-7B-Chat</p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  </tr>
- <tr class="row-even">
+ <tr class="row-odd">
  <td><p>Baichuan</p></td>
  <td><p>baichuan-inc/Baichuan2-13B-Chat</p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  </tr>
- <tr class="row-odd">
+ <tr class="row-even">
  <td><p>Baichuan</p></td>
  <td><p>baichuan-inc/Baichuan-13B-Chat</p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  </tr>
- <tr class="row-even">
+ <tr class="row-odd">
  <td><p>GPTBigCode</p></td>
  <td><p>bigcode/starcoder</p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  </tr>
- <tr class="row-odd">
+ <tr class="row-even">
  <td><p>T5</p></td>
  <td><p>google/flan-t5-xl</p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  </tr>
- <tr class="row-even">
+ <tr class="row-odd">
  <td><p>Mistral</p></td>
  <td><p>mistralai/Mistral-7B-v0.1</p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  </tr>
- <tr class="row-odd">
+ <tr class="row-even">
  <td><p>Mistral</p></td>
  <td><p>mistralai/Mixtral-8x7B-v0.1</p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  </tr>
- <tr class="row-even">
+ <tr class="row-odd">
  <td><p>MPT</p></td>
  <td><p>mosaicml/mpt-7b</p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  </tr>
- <tr class="row-odd">
+ <tr class="row-even">
  <td><p>Stablelm</p></td>
  <td><p>stabilityai/stablelm-2-1_6b</p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  </tr>
- <tr class="row-even">
+ <tr class="row-odd">
  <td><p>Qwen</p></td>
  <td><p>Qwen/Qwen-7B-Chat</p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  </tr>
- <tr class="row-odd">
+ <tr class="row-even">
  <td><p>Qwen</p></td>
  <td><p>Qwen/Qwen2-7B</p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  </tr>
+ <tr class="row-odd">
+ <td><p>Qwen</p></td>
+ <td><p>Qwen/Qwen2.5-7B-Instruct</p></td>
+ <td><p style="text-align: center; vertical-align: middle;"></p></td>
+ <td><p style="text-align: center; vertical-align: middle;"></p></td>
+ </tr>
  <tr class="row-even">
  <td><p>GIT</p></td>
  <td><p>microsoft/git-base</p></td>
@@ -224,5 +236,11 @@
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  <td><p style="text-align: center; vertical-align: middle;"></p></td>
  </tr>
+ <tr class="row-even">
+ <td><p>DeepSeek</p></td>
+ <td><p>meituan/DeepSeek-R1-Channel-INT8</p></td>
+ <td><p style="text-align: center; vertical-align: middle;"></p></td>
+ <td><p style="text-align: center; vertical-align: middle;"></p></td>
+ </tr>
  </tbody>
  </table>
