@@ -105,11 +105,102 @@ Served models can then be benchmarked using [GuideLLM](https://github.com/vllm-p
 </picture>
 </p>

-### License
+### Supported Models
+
+The following models are currently supported or planned for near-term support.
+
+<table>
+<thead>
+<tr>
+<th>Verifier Architecture</th>
+<th>Verifier Size</th>
+<th>Training via Speculators</th>
+<th>Deployment in vLLM</th>
+<th>Conversion of External Checkpoints</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td rowspan="3">Llama</td>
+<td>8B-Instruct</td>
+<td>EAGLE-3 ✅ | HASS ✅</td>
+<td>✅</td>
+<td><a href="https://huggingface.co/yuhuili/EAGLE3-LLaMA3.1-Instruct-8B">EAGLE-3</a> ✅</td>
+</tr>
+<tr>
+<td>70B-Instruct</td>
+<td>EAGLE-3 ⏳</td>
+<td>✅</td>
+<td><a href="https://huggingface.co/yuhuili/EAGLE3-LLaMA3.3-Instruct-70B">EAGLE-3</a> ✅</td>
+</tr>
+<tr>
+<td>DeepSeek-R1-Distill-Llama-8B</td>
+<td>EAGLE-3 ❌</td>
+<td>✅</td>
+<td><a href="https://huggingface.co/yuhuili/EAGLE3-DeepSeek-R1-Distill-LLaMA-8B">EAGLE-3</a> ✅</td>
+</tr>
+<tr>
+<td rowspan="3">Qwen3</td>
+<td>8B</td>
+<td>EAGLE-3 ✅</td>
+<td>✅</td>
+<td>❌</td>
+</tr>
+<tr>
+<td>14B</td>
+<td>EAGLE-3 ❌</td>
+<td>✅</td>
+<td>❌</td>
+</tr>
+<tr>
+<td>32B</td>
+<td>EAGLE-3 ❌</td>
+<td>✅</td>
+<td>❌</td>
+</tr>
+<tr>
+<td rowspan="2">Qwen3 MoE</td>
+<td>30B-A3B</td>
+<td>EAGLE-3 ❌</td>
+<td>⏳</td>
+<td>❌</td>
+</tr>
+<tr>
+<td>235B-A22B</td>
+<td>EAGLE-3 ❌</td>
+<td>⏳</td>
+<td><a href="https://huggingface.co/nvidia/Qwen3-235B-A22B-Eagle3">EAGLE-3</a> ⏳</td>
+</tr>
+<tr>
+<td rowspan="2">Llama-4</td>
+<td>Scout-17B-16E-Instruct</td>
+<td>EAGLE-3 ❌</td>
+<td>⏳</td>
+<td>❌</td>
+</tr>
+<tr>
+<td>Maverick-17B-128E-Instruct</td>
+<td>EAGLE-3 ❌</td>
+<td>⏳</td>
+<td><a href="https://huggingface.co/nvidia/Llama-4-Maverick-17B-128E-Eagle3">EAGLE-3</a> ⏳</td>
+</tr>
+<tr>
+<td>DeepSeek-R1</td>
+<td>DeepSeek-R1</td>
+<td>EAGLE-3 ❌</td>
+<td>⏳</td>
+<td><a href="https://huggingface.co/HArmonizedSS/HASS-DeepSeek-R1">HASS</a> ⏳</td>
+</tr>
+</tbody>
+</table>
+
+✅ = Supported, ⏳ = In Progress, ❌ = Not Yet Supported
+
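+For rows where Deployment in vLLM is marked ✅, the drafter can be served through vLLM's standard speculative decoding interface. The snippet below is a minimal sketch, assuming a recent vLLM release whose `LLM` constructor accepts a `speculative_config` dictionary; it pairs the Llama-3.1-8B-Instruct verifier with the external EAGLE-3 checkpoint linked in the table above (a speculators-produced checkpoint can be passed the same way), and the exact config keys may differ across vLLM versions.
+
+```python
+from vllm import LLM, SamplingParams
+
+# Verifier (target) model plus an EAGLE-3 drafter from the table above.
+# Model IDs and config keys are illustrative; check the docs for your vLLM version.
+llm = LLM(
+    model="meta-llama/Llama-3.1-8B-Instruct",
+    speculative_config={
+        "method": "eagle3",
+        "model": "yuhuili/EAGLE3-LLaMA3.1-Instruct-8B",
+        "num_speculative_tokens": 3,
+    },
+)
+
+outputs = llm.generate(
+    ["Briefly explain speculative decoding."],
+    SamplingParams(temperature=0.0, max_tokens=128),
+)
+print(outputs[0].outputs[0].text)
+```
+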
+## License

 Speculators is licensed under the [Apache License 2.0](https://github.com/neuralmagic/speculators/blob/main/LICENSE).

-### Cite
+## Cite

 If you find Speculators helpful in your research or projects, please consider citing it:
