This is a speculator model designed for use with Qwen/Qwen3-VL-235B-A22B-Instruct, based on the EAGLE-3 speculative decoding algorithm.
It was trained using the speculators library on a combination of the Magpie-Align/Magpie-Llama-3.1-Pro-300K-Filtered dataset and the train_sft split of the HuggingFaceH4/ultrachat_200k dataset.
This model should be used with the Qwen/Qwen3-VL-235B-A22B-Instruct chat template, specifically through the /chat/completions endpoint.
```shell
vllm serve Qwen/Qwen3-VL-235B-A22B-Instruct \
  -tp 8 \
  --speculative-config '{
    "model": "RedHatAI/Qwen3-VL-235B-A22B-Instruct-speculator.eagle3",
    "num_speculative_tokens": 3,
    "method": "eagle3"
  }'
```
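Once the server is running, the speculator is transparent to clients: requests go to the standard OpenAI-compatible `/chat/completions` endpoint and name only the base model. A minimal sketch using just the standard library (the helper names here are illustrative, not part of vLLM; the URL assumes vLLM's default port 8000):

```python
import json
from urllib.request import Request, urlopen

# Endpoint served by the vllm serve command above (default port 8000).
API_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt: str) -> Request:
    """Build a /chat/completions request; the speculator is applied server-side."""
    body = json.dumps({
        "model": "Qwen/Qwen3-VL-235B-A22B-Instruct",  # the served base model
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }).encode()
    return Request(API_URL, data=body,
                   headers={"Content-Type": "application/json"})

def chat(prompt: str) -> str:
    """Send the request to the running server and return the reply text."""
    with urlopen(build_chat_request(prompt)) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# chat("Summarize speculative decoding in one sentence.")
# (requires the server above to be running)
```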
Performance was evaluated across the following use cases and benchmark datasets:

| Use Case | Dataset | Number of Samples |
|---|---|---|
| Coding | HumanEval | 168 |
| Math Reasoning | gsm8k | 80 |
| Text Summarization | CNN/Daily Mail | 80 |
Observed speedups for each number of speculative tokens k (the `num_speculative_tokens` setting):

| Use Case | k=1 | k=2 | k=3 | k=4 | k=5 |
|---|---|---|---|---|---|
| Coding | 1.85 | 2.49 | 2.88 | 3.13 | 3.24 |
| Math Reasoning | 1.85 | 2.54 | 2.96 | 3.25 | 3.24 |
| Text Summarization | 1.63 | 1.94 | 2.08 | 2.14 | 2.16 |
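As an illustration of reading the table above, a small Python sketch that copies the measured values and picks the k with the highest speedup per use case. Note that gains flatten past k=3 and, for math reasoning, regress slightly at k=5:

```python
# Speedup values copied verbatim from the table above,
# keyed by use case and number of speculative tokens k.
SPEEDUPS = {
    "Coding":             {1: 1.85, 2: 2.49, 3: 2.88, 4: 3.13, 5: 3.24},
    "Math Reasoning":     {1: 1.85, 2: 2.54, 3: 2.96, 4: 3.25, 5: 3.24},
    "Text Summarization": {1: 1.63, 2: 1.94, 3: 2.08, 4: 2.14, 5: 2.16},
}

def best_k(use_case: str) -> int:
    """Return the k with the highest measured speedup for a use case."""
    table = SPEEDUPS[use_case]
    return max(table, key=table.get)

print(best_k("Math Reasoning"))  # → 4 (3.25 edges out 3.24 at k=5)
```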
Command
```shell
GUIDELLM__PREFERRED_ROUTE="chat_completions" \
guidellm benchmark \
  --target "http://localhost:8000/v1" \
  --data "RedHatAI/speculator_benchmarks" \
  --data-args '{"data_files": "HumanEval.jsonl"}' \
  --rate-type sweep \
  --max-seconds 600 \
  --output-path "Qwen235B-HumanEval.json"
```
</details>
Base model: Qwen/Qwen3-VL-235B-A22B-Instruct