[Quantization Format] Add functionality to infer format #441
Conversation
LGTM. A few code-related comments; nothing pops out as wrong in the implementation, but deferring to others on that 😄
Looks good to me!
Can you add a comment on QuantizationScheme specifying that None means the value will be inferred by infer_and_set_per_module_quantization_format before compression?
Otherwise looks great, thanks for doing this.
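The suggestion above could be addressed with something like the following minimal sketch. This is hypothetical illustration only, not the actual compressed-tensors source: the field layout of QuantizationScheme and the body of infer_and_set_per_module_quantization_format are assumptions based on the discussion in this PR (including the "use dense not None" commit).

```python
# Hypothetical sketch of the reviewer's request: document that a
# format of None is filled in by the inference helper before
# compression. Names mirror the PR discussion, not the real library.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class QuantizationScheme:
    targets: List[str] = field(default_factory=list)
    # None means the value will be inferred by
    # infer_and_set_per_module_quantization_format before compression.
    format: Optional[str] = None


def infer_and_set_per_module_quantization_format(
    scheme: QuantizationScheme,
) -> str:
    """Toy stand-in for the PR's inference logic: when no explicit
    format is set, fall back to "dense" (per "use dense not None")."""
    if scheme.format is None:
        scheme.format = "dense"
    return scheme.format
```

With this comment in place, a reader of QuantizationScheme sees immediately why None is a valid default rather than an error state.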
Commits:
* Revert "…almagic#441)" (neuralmagic#451). This reverts commit 141cbba.
* add format infer code
* update
* update
* add loguru
* use dense not None
Summary
Next Steps