
Weight Protective Quantization Range -119, 119 #60

Open
Jzz24 opened this issue Feb 26, 2025 · 2 comments

Jzz24 commented Feb 26, 2025

At https://github.com/mit-han-lab/omniserve/blob/main/omniserve/modeling/layers/quantized_linear/w4a8_linear.py#L176, it seems the int8 weights are not quantized to the range [-119, 119]? And how is s1_scale calculated? Is it computed like standard int8 quantization, but with qmin = -119 and qmax = 119?


ys-2020 (Contributor) commented Feb 26, 2025

Hi. Thanks for your interest in QServe. The protective range of 119 has already been considered and used during the model quantization process. When computing s1_scale for int8 quantization, we use 119 in the scaling-factor computation.
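
In other words, a minimal sketch of the idea, assuming per-output-channel symmetric quantization; `compute_s1_scale`, `quantize_w_int8`, and `PROTECTIVE_QMAX` are illustrative names, not QServe's actual API:

```python
import torch

# Protective quantization range: scale with 119 instead of the full int8
# qmax of 127, so every quantized weight lands inside [-119, 119].
PROTECTIVE_QMAX = 119

def compute_s1_scale(w_fp16: torch.Tensor) -> torch.Tensor:
    """Per-output-channel scale so that w_fp16 / s1_scale fits in [-119, 119]."""
    # Max absolute value per output channel (rows of the weight matrix);
    # clamp avoids division by zero for all-zero channels.
    max_abs = w_fp16.abs().amax(dim=1, keepdim=True).clamp(min=1e-8)
    return max_abs / PROTECTIVE_QMAX

def quantize_w_int8(w_fp16: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Symmetric int8 quantization clamped to the protective range."""
    s1_scale = compute_s1_scale(w_fp16)
    w_int8 = (
        torch.round(w_fp16 / s1_scale)
        .clamp(-PROTECTIVE_QMAX, PROTECTIVE_QMAX)
        .to(torch.int8)
    )
    return w_int8, s1_scale
```

Because s1_scale is derived with 119 as the quantization maximum, the round-and-clamp step never actually needs to clip in-range weights; the clamp is just a safety net against rounding at the boundary.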


Jzz24 (Author) commented Feb 28, 2025

Got it, thanks!
