Add Qwen3 Moe #2260
Conversation
Thanks! Took an initial pass. Let's try to clean up the config and state passing: avoid passing an index down the layer stack, and avoid data structures that apply to the whole layer stack.
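To illustrate the restructuring being asked for, here is a minimal sketch; `DecoderLayer`, `moe_every_n`, and all values are hypothetical stand-ins, not the PR's actual API. The backbone resolves any per-layer decisions itself and passes each layer plain values, rather than an index plus stack-wide data structures:

```python
import keras

class DecoderLayer(keras.layers.Layer):
    """Hypothetical decoder layer; illustrates the config pattern only."""

    def __init__(self, hidden_dim, use_moe, **kwargs):
        super().__init__(**kwargs)
        # The layer receives resolved values, not a layer index it must
        # use to look itself up in a stack-wide config.
        self.hidden_dim = hidden_dim
        self.use_moe = use_moe

    def call(self, x):
        return x  # placeholder body

num_layers, moe_every_n = 4, 2  # hypothetical stack settings
decoder_layers = [
    # The backbone decides per-layer behavior here and passes it down.
    DecoderLayer(hidden_dim=16, use_moe=(i % moe_every_n == 0))
    for i in range(num_layers)
]
```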
/gemini review
Code Review
This pull request adds support for the Qwen3 MoE model. The implementation looks solid, covering the backbone, attention, decoder, tokenizer, and conversion scripts. I've identified several high-severity issues related to incomplete `get_config` methods in various new layers, which will prevent model serialization from working correctly. There are also some medium-severity issues like unused parameters, and a critical issue in the checkpoint conversion test script where an incorrect preprocessor is used. I've provided suggestions to fix these issues. Once addressed, the PR should be in great shape.
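For context on the serialization point: Keras rebuilds a layer from `get_config()` at load time, so every `__init__` argument has to round-trip through it. A minimal sketch with a hypothetical layer (not the PR's code):

```python
import keras

class SparseMoeBlock(keras.layers.Layer):
    """Hypothetical MoE block used only to illustrate get_config."""

    def __init__(self, num_experts, top_k, hidden_dim, **kwargs):
        super().__init__(**kwargs)
        self.num_experts = num_experts
        self.top_k = top_k
        self.hidden_dim = hidden_dim

    def get_config(self):
        # Every __init__ argument must appear here; otherwise
        # keras.saving.load_model() cannot reconstruct the layer.
        config = super().get_config()
        config.update(
            {
                "num_experts": self.num_experts,
                "top_k": self.top_k,
                "hidden_dim": self.hidden_dim,
            }
        )
        return config
```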
/gemini review
Code Review
This pull request introduces support for the Qwen3 MoE model, including its backbone, causal language model, tokenizer, and conversion scripts.
I've identified a few issues that need attention:
- A critical bug in `Qwen3MoeAttention.get_config()` that will cause an `AttributeError`.
- A couple of high-severity issues where invalid parameters are used, which will lead to `TypeError` exceptions.
- Some medium-severity issues in the test scripts and opportunities for code cleanup.
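The `AttributeError` pattern is usually a `get_config()` that reads an attribute `__init__` never stored under that name. A hedged reconstruction of the failure mode (not the PR's actual code):

```python
import keras

class Attention(keras.layers.Layer):
    """Hypothetical layer reproducing the AttributeError failure mode."""

    def __init__(self, num_heads, **kwargs):
        super().__init__(**kwargs)
        self._num_heads = num_heads  # stored under a private name

    def get_config(self):
        config = super().get_config()
        # Buggy version: config["num_heads"] = self.num_heads
        # raises AttributeError, since __init__ stored `_num_heads`.
        config["num_heads"] = self._num_heads
        return config
```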
Thanks, added a few small comments.
Can you provide a Colab demo with numerics verification, example usage code, and matching generation outputs?
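For the usage-code part of that request, something along these lines would do; the class export and preset id are assumptions, since the PR's final preset names aren't shown here:

```python
import keras_hub

# Hypothetical preset id; the real one depends on what this PR registers.
causal_lm = keras_hub.models.Qwen3MoeCausalLM.from_preset("qwen3_moe_a3b_en")
print(causal_lm.generate("What is Keras?", max_length=32))
```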
```python
init_kwargs=self.init_kwargs,
input_data=self.input_data,
expected_output_shape=(2, 7, 16),
run_quantization_check=False,
```
Can you enable this test?
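Assuming `run_quantization_check` defaults to `True` in keras-hub's `TestCase.run_backbone_test`, enabling the check amounts to dropping the override. A sketch, with the backbone class name assumed:

```python
self.run_backbone_test(
    cls=Qwen3MoeBackbone,  # assumed class under test
    init_kwargs=self.init_kwargs,
    input_data=self.input_data,
    expected_output_shape=(2, 7, 16),
    # No run_quantization_check=False: the default exercises quantization.
)
```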
Also add the missing test files for `causal_lm_test` and `causal_lm_preprocessor_test`.
Qwen3 MoE backbone outputs match with atol 1e-3!
On generation output matching, we are doing okay: the generated token distribution is close in space to the Hugging Face one. We saw a similar issue in the Qwen3 base models. Random seed used: 123.
Keras generated text: "What is Keras? Keras is a deep learning framework that is used for building and training neural networks. It is written in Python and can run on top"

Keras token outputs:
```
tensor([[ 3838,  374,  730, 9247,   30,  730, 9247,  374,  264,  5538,
          6832, 12626,  429,  374, 1483,  369, 4752,  323, 4862, 29728,
         14155,   13, 1084,  374, 5326,  304, 13027,  323,  646,  1598,
           389, 1909]], dtype=torch.int32)
```

HF token outputs:
```
tensor([[ 3838,  374,  730, 9247,   30, 3555,  374,  279, 6672,  1948,
          730, 9247,  323, 94986,   30, 3555,  525,  279, 22146,   315,
          730, 9247,  916, 94986,   30,  730, 9247,  374,  264,  1550,
         11591, 29728]])
```
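For reference, a minimal sketch of the backbone comparison described above; how the two output arrays are produced is assumed, and only the check pattern is the point:

```python
import numpy as np

def check_backbone_numerics(keras_outputs, hf_outputs, atol=1e-3):
    """Compare final hidden states from the keras-hub and Hugging Face
    backbones, both numpy arrays of shape (batch, seq_len, hidden_dim)
    produced from the same input token ids."""
    np.testing.assert_allclose(keras_outputs, hf_outputs, atol=atol)
```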