Add NPU support for LLM.int8 forward #1534

Open
wants to merge 1 commit into
base: multi-backend-refactor

Conversation

SlightwindSec

What does this PR do?

  1. LLM.int8 inference (forward-only)

    • Adds a PyTorch-based int8_vectorwise_dequant (AscendC version WIP); a sketch of the approach follows this list.
    • Uses fused npu_quant_matmul for NPU-optimized matmul+dequant.
  2. NF4 memory fix

    • Implements chunk-based processing to reduce memory usage and prevent OOM on large tensors (see the sketch after the Notes section below).
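
For reference, a minimal PyTorch-only sketch of the vectorwise int8 dequantization mentioned in item 1 (parameter names are illustrative; see the diff for the exact signature used here):

```python
import torch

# Minimal sketch of vectorwise int8 dequantization in plain PyTorch
# (parameter names are illustrative, not necessarily the PR's signature).
def int8_vectorwise_dequant(A: torch.Tensor, stats: torch.Tensor) -> torch.Tensor:
    # A:     int8 tensor quantized row-wise (vectorwise)
    # stats: per-row absmax captured at quantization time
    # Scale each row back by absmax / 127 to recover approximate float values.
    return A * stats.view(-1, 1) / 127.0
```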

Notes

  • Backward pass for LLM.int8 requires future AscendC kernels.
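
On the NF4 memory fix (item 2 above), the chunk-based idea is roughly the following; `dequantize_block` and the chunk size are placeholders, not the PR's actual helper:

```python
import torch

# Rough sketch of the chunk-based pattern: dequantize slices of the flattened
# input one at a time so temporary buffers stay bounded instead of scaling
# with the full tensor. `dequantize_block` stands in for the real per-chunk
# NF4 dequant step; the default chunk size is arbitrary.
def dequantize_in_chunks(packed: torch.Tensor, dequantize_block, chunk_size: int = 1 << 22) -> torch.Tensor:
    flat = packed.reshape(-1)
    pieces = []
    for start in range(0, flat.numel(), chunk_size):
        # Only this slice's intermediates are alive at any point in time.
        pieces.append(dequantize_block(flat[start:start + chunk_size]))
    return torch.cat(pieces)
```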

Collaborators
@ji-huazhong @Ginray @MatrixPlayer

cc @Titus-von-Koeller @matthewdouglas

@matthewdouglas self-requested a review February 20, 2025 15:59
@matthewdouglas self-assigned this Feb 20, 2025
@matthewdouglas added the Ascend NPU (Related to Ascend NPU backend) label Feb 20, 2025
Comment on lines +118 to +119
colidx_tmp = torch.unique(outliers_col_idx)
colidx = colidx_tmp[colidx_tmp != -1]
Member


As an optimization this can probably be avoided when threshold==0.0 and moved into the condition below.
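
A hedged illustration of that suggestion (the surrounding code isn't part of this diff, so the `threshold > 0.0` branch is an assumption about the condition being referred to):

```python
# Illustration only: assumes outlier extraction is gated on threshold > 0.0.
if threshold > 0.0:
    # Only pay for the unique/filter pass when outlier extraction is enabled.
    colidx_tmp = torch.unique(outliers_col_idx)
    colidx = colidx_tmp[colidx_tmp != -1]
    # ... existing outlier handling already guarded by this condition ...
```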


The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@matthewdouglas
Member

Thanks! I'll provide a few comments. I also want to note that we're quite close to moving to torch.library and custom ops for device dispatching, and we'll provide more info soon on porting over to that. It should be a fairly simple process!
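
A rough sketch of what that torch.library direction could look like; the op name, signature, reference implementation, and the "npu" device key (which assumes torch_npu is installed and registered) are illustrative, not the final bitsandbytes interface:

```python
import torch

# Illustrative sketch of torch.library-based device dispatch.
@torch.library.custom_op("bitsandbytes::int8_linear_matmul", mutates_args=())
def int8_linear_matmul(A: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    # Plain-PyTorch fallback used when no device-specific kernel is
    # registered (integer matmul below runs on CPU).
    return A.to(torch.int32) @ B.to(torch.int32).t()

@int8_linear_matmul.register_fake
def _(A: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    # Shape/dtype propagation for meta tensors and torch.compile.
    return torch.empty((A.shape[0], B.shape[0]), dtype=torch.int32, device=A.device)

@int8_linear_matmul.register_kernel("npu")
def _(A: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    # An Ascend-specific path (e.g. building on the fused npu_quant_matmul
    # route from this PR) would be registered here.
    raise NotImplementedError
```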

@@ -69,7 +181,7 @@ def int8_linear_matmul(
    out: Optional[torch.Tensor] = None,
    dtype=torch.int32,
) -> torch.Tensor:
-    raise NotImplementedError
+    return Int8AB(A, B)
Member


Interesting and clever! While this does break the expected API as this isn't returning a Tensor (or performing any operations really), I can completely understand why it is done this way. I think this will be OK for right now and we'll make the interface better in this regard later on.
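
To make the pattern concrete, a hypothetical reconstruction of what such a holder could look like (not the PR's actual class): int8_linear_matmul just records its operands, and the downstream dequant step performs the single fused matmul+dequant call once the scales are available.

```python
from dataclasses import dataclass
import torch

# Hypothetical illustration only; the class and field names are assumptions.
@dataclass
class Int8AB:
    A: torch.Tensor  # int8 activations
    B: torch.Tensor  # int8 weights
```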

Comment on lines -323 to -325
# `torch.Tensor.to(<int num>)` is not supported by `torch_npu` (see this [issue](https://github.com/Ascend/pytorch/issues/16)).
if isinstance(device, int):
    device = f"npu:{device}"
Member


Does this now require a bump in the minimum supported torch_npu version?
