
[XPU] reduce_xxx and broadcast_xxx use int64_t shape #71361

Conversation

@dynamicheart (Contributor) commented Mar 3, 2025

PR Category

Custom Device

PR Types

Bug fixes

Description

reduce_xxx and broadcast_xxx now use int64_t shapes to prevent integer overflow when the element count exceeds INT32_MAX.

Example:

```python
import paddle

a = paddle.empty((3 * 1024 * 1024 * 1024,)).to("bool").any().item()
print(a)
```

Before the fix:

```
> XPUAPI_DEBUG=0x1 python test.py
gtest_cast<float, bool>(api::kXPU3, "GM", "GM", 3221225472, 1024);
gtest_reduce_any<bool>(api::kXPU3, "GM", "GM", {-1073741824}, {0}, 1024);
[INVALID-SHAPE]{-1073741824}[src/wrapper/math_reduce_op.cpp:1379]
Traceback (most recent call last):
  File "/workspace/users/tmp_paddle/Paddle/test.py", line 3, in <module>
    a = paddle.empty((3 * 1024 * 1024 * 1024,)).to("bool").any().item()
  File "/usr/local/lib/python3.10/dist-packages/paddle/tensor/math.py", line 5144, in any
    return _C_ops.any(x, axis, keepdim)
OSError: (External) reduce_any XDNN Error, XDNN_INVALID_PARAM  (at /workspace/users/Paddle/paddle/phi/kernels/xpu/reduce_any_kernel.cc:46)
```
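The negative extent in the log is 32-bit truncation at work: 3 * 1024 * 1024 * 1024 = 3221225472 exceeds INT32_MAX (2147483647), so narrowing it to a 32-bit int wraps it to -1073741824. A minimal standalone sketch of the arithmetic (not Paddle code):

```cpp
#include <cstdint>
#include <iostream>

int main() {
  int64_t n = 3LL * 1024 * 1024 * 1024;  // 3221225472 elements
  // Narrowing to 32 bits wraps modulo 2^32 on two's-complement targets,
  // which is how the shape in the debug log turned negative.
  int32_t wrapped = static_cast<int32_t>(n);
  std::cout << n << " -> " << wrapped << "\n";  // 3221225472 -> -1073741824
  return 0;
}
```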

After the fix:

```
> XPUAPI_DEBUG=0x1 python test.py
gtest_cast<float, bool>(api::kXPU3, "GM", "GM", 3221225472, 1024);
gtest_reduce_any<bool>(api::kXPU3, "GM", "GM", {3221225472}, {0}, 1024);
    gtest_cast<int8_t, float>(api::kXPU3, "GM", "GM", 3221225472, 1024);
    gtest_cast<float, int8_t>(api::kXPU3, "GM", "GM", 1, 1024);
False
```
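The shape now reaches reduce_any as {3221225472} instead of a wrapped negative value. A hedged sketch of the shape plumbing this implies (illustrative names, not the exact Paddle kernel code): keep the dims as int64_t all the way to the XDNN call instead of narrowing them to int.

```cpp
#include <cstdint>
#include <vector>

// Illustrative only: build the dims for the reduce call without
// narrowing each extent to 32 bits.
std::vector<int64_t> MakeXdims(const std::vector<int64_t>& x_shape) {
  // Before the fix, code along the lines of
  //   std::vector<int> xdims(x_shape.begin(), x_shape.end());
  // silently truncated extents larger than INT32_MAX.
  return {x_shape.begin(), x_shape.end()};
}
```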


paddle-bot bot commented Mar 3, 2025

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first; see the Paddle CI Manual for details.

@paddle-bot added the XPU label Mar 3, 2025
@skywalker2012 (Contributor)

LGTM

```diff
@@ -40,8 +40,8 @@ struct SumFunctor {
         ctx,
         reinterpret_cast<const XPUType*>(x),
         reinterpret_cast<XPUType*>(y),
-        xdims,
-        reduce_dims);
+        std::vector<int>(xdims.begin(), xdims.end()),
```
Contributor

Should lines 43 and 44 also be changed to int64_t?

@dynamicheart (Contributor, Author)

This is xpu::plugin, an older experimental operator interface; it has no std::vector<int64_t> parameter form. We normally neither build it nor enable the PADDLE_WITH_XPU_PLUGIN compile option, so it can simply stay as it is.
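The diff above keeps the plugin call compiling by narrowing the int64_t dims back at that one boundary. A hedged sketch of that pattern (illustrative, not the exact Paddle code):

```cpp
#include <cstdint>
#include <vector>

// Illustrative only: the legacy xpu::plugin interface accepts
// std::vector<int>, so int64_t dims are narrowed at the call site.
// Per the reply above, this is acceptable because the plugin path is
// only built when PADDLE_WITH_XPU_PLUGIN is enabled, which it
// normally is not.
std::vector<int> ToPluginDims(const std::vector<int64_t>& dims) {
  return std::vector<int>(dims.begin(), dims.end());
}
```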

@yongqiangma (Contributor) left a comment

LGTM

@XiaoguangHu01 (Contributor) left a comment

LGTM

@yongqiangma merged commit c13d82b into PaddlePaddle:develop Mar 5, 2025
33 checks passed
Enigmatisms pushed a commit to Enigmatisms/Paddle that referenced this pull request Mar 6, 2025
dynamicheart added a commit to dynamicheart/Paddle that referenced this pull request Mar 12, 2025