Is there a known issue with ConvolutionBackpropData performance since 2023.3? #29528
Unanswered
FingerSling asked this question in Q&A
Replies: 1 comment
-
Further to the above, I enabled detailed logging for oneDNN. For 2023.3 I don't see any deconvolution primitive reported by oneDNN, while for 2024.6 oneDNN does report deconvolution. So did 2023.3 not use oneDNN for deconvolution? If so, the 2023.3 implementation seems to have been much more efficient than the 2024.6 oneDNN implementation.
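For anyone wanting to reproduce this check, oneDNN's verbose mode prints every primitive it executes, so grepping for deconvolution shows whether the plugin dispatched to oneDNN. This is a sketch assuming a Linux shell; `model.xml` is a placeholder path, not from the report above, and the verbose output format can differ between the oneDNN versions bundled with each OpenVINO release.

```shell
# Level-1 verbose mode makes oneDNN log each executed primitive to stdout.
export ONEDNN_VERBOSE=1

# Run a short benchmark and filter for deconvolution primitives.
# On 2023.3 this grep reportedly matches nothing; on 2024.6 it does.
benchmark_app -m model.xml -d CPU -niter 1 | grep -i deconv
```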
-
We've been trying OpenVINO on a number of custom networks. With 2023.3 we get significantly better performance than with later versions. If I use benchmark_app and enable detailed reports, I see a huge difference in performance for line items referencing ConvolutionBackpropData: in the order of 7-8x slower in later releases. For example, on 2023.3 the operation takes ~800 ms, but on 2024.6 it takes ~6900 ms.
Is this a known issue?
2023.3 ConvolutionBackpropData_645 | EXECUTED | ConvolutionBackpropData | jit_gemm_f32 | 824.889 | 824.889
2024.6 ConvolutionBackpropData_645 | EXECUTED | ConvolutionBackpropData | jit_gemm_f32 | 6864.938 | 6864.938
Note: in general, most operations are slightly faster in 2024.6, this is the primary outlier.
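For reference, per-layer lines like the two above come from benchmark_app's performance-counter reports. A minimal sketch of how to generate them, assuming a placeholder `model.xml`:

```shell
# -report_type detailed_counters writes per-layer timings to a CSV;
# -pc additionally prints the counters (layer name, exec status,
# layer type, implementation type, and timings) to the console.
benchmark_app -m model.xml -d CPU \
    -report_type detailed_counters \
    -report_folder ./report \
    -pc
```

Comparing the `execType` column (e.g. `jit_gemm_f32`) between releases is what surfaces which implementation each version selected.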