
Commit 208b922

retonym authored and pytorchmergebot committed

[Intel GPU] Remove special dispatch logic for xpu in adaptive_avg_pooling (pytorch#132217)

We now align the dispatch logic for XPU with CUDA in the adaptive average pooling operation.

Pull Request resolved: pytorch#132217
Approved by: https://github.com/EikanWang, https://github.com/atalman, https://github.com/albanD, https://github.com/malfet

1 parent: e6bf171

File tree

2 files changed: +2 −2


aten/src/ATen/native/AdaptiveAveragePooling.cpp

+1-1
@@ -117,7 +117,7 @@ namespace {
     return at::mkldnn_adaptive_avg_pool2d(input, C10_AS_INTARRAYREF_SLOW(output_size));
   }
 
-  if (!input.is_quantized() && output_size[0] == 1 && output_size[1] == 1 && !input.is_xpu()) {
+  if (!input.is_quantized() && output_size[0] == 1 && output_size[1] == 1) {
     // in this case, adaptive pooling is just computing mean over hw
     // dimensions, which can be done more efficiently
 #if defined(C10_MOBILE) && defined(USE_XNNPACK)
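The fast path this change re-enables for XPU tensors exists because adaptive average pooling to a 1×1 output is mathematically just a mean over the spatial dimensions. A minimal NumPy sketch of that equivalence (the helper `adaptive_avg_pool2d_ref` is our own reference implementation, not code from this PR):

```python
import numpy as np

def adaptive_avg_pool2d_ref(x, out_h, out_w):
    # Reference adaptive average pooling over the last two dims of an
    # (N, C, H, W) array, using the standard floor/ceil bin boundaries.
    n, c, h, w = x.shape
    out = np.empty((n, c, out_h, out_w), dtype=x.dtype)
    for i in range(out_h):
        h0, h1 = (i * h) // out_h, -(-((i + 1) * h) // out_h)  # floor, ceil
        for j in range(out_w):
            w0, w1 = (j * w) // out_w, -(-((j + 1) * w) // out_w)
            out[:, :, i, j] = x[:, :, h0:h1, w0:w1].mean(axis=(-1, -2))
    return out

x = np.random.rand(2, 3, 5, 7)
pooled = adaptive_avg_pool2d_ref(x, 1, 1)
# The fast path: with a 1x1 output, every bin covers the whole H x W
# plane, so pooling collapses to a plain mean over the last two dims.
fast = x.mean(axis=(-1, -2), keepdims=True)
assert np.allclose(pooled, fast)
```

With `output_size == (1, 1)` each output bin spans the entire input plane, which is why the C++ code above can dispatch to `mean` over the `hw` dimensions instead of the general pooling kernel.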

aten/src/ATen/native/AdaptiveAveragePooling3d.cpp

+1-1
@@ -313,7 +313,7 @@ Tensor adaptive_avg_pool3d_symint(Tensor const& input, SymIntArrayRef output_size
       "adaptive_avg_pool3d: elements of output_size must be greater than or equal to 0 ",
       "but received {", output_size[0], ", ", output_size[1], ",", output_size[2], "}");
 
-  if (output_size[0] == 1 && output_size[1] == 1 && output_size[2] == 1 && !input.is_xpu()) {
+  if (output_size[0] == 1 && output_size[1] == 1 && output_size[2] == 1) {
     // in this case, adaptive pooling is just computing mean over hw
     // dimensions, which can be done more efficiently
     Tensor out = input.mean({-1, -2, -3}, /* keepdim = */ true);
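The 3d case follows the same reasoning as the 2d one: with `output_size == (1, 1, 1)` each output bin covers the full (D, H, W) volume, so the pooling is exactly a mean over the last three dimensions, mirroring the `input.mean({-1, -2, -3}, /* keepdim = */ true)` call in the hunk above. A short NumPy illustration (our own sketch, not PR code):

```python
import numpy as np

# A 5D (N, C, D, H, W) input, as adaptive_avg_pool3d expects.
x = np.random.rand(2, 3, 4, 5, 6)

# The fast path: mean over the last three (D, H, W) dimensions,
# keeping them as size-1 dims so the output shape is (N, C, 1, 1, 1).
fast = x.mean(axis=(-1, -2, -3), keepdims=True)
assert fast.shape == (2, 3, 1, 1, 1)

# Each pooled value is the plain mean of its (D, H, W) volume.
assert np.allclose(fast[0, 0, 0, 0, 0], x[0, 0].mean())
```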

Comments (0)