cuda.parallel: Minor perf improvements #3718
Conversation
How do we compare to CuPy now?
Closer, but not quite there yet. We have ~15us of constant overhead versus CuPy's ~10us. I'll iterate on this PR until we reach parity.
btw I think you meant us (microseconds), not ms (milliseconds). I feel we are pushing to the limit where Python overhead could be something to worry about.
With the latest changes, which rip out all the validation checks we do between the call to
We are absolutely there already - this PR is trying to minimize the number of Python operations we're doing in the
In the near future we should consider establishing an API contract for plan building and plan execution (#2429 (comment)).
Let's have a separate issue to track this. Thinking about this more, we should try to make the current (low-level) interface look more like a 1:1 binding to the bare C++ one. This is what we do for
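For context, the underlying CUB-style C++ interface is two-phase: you first call with a null temporary-storage pointer to query the required scratch size, then allocate and call again to execute. A 1:1-style Python binding would mirror that shape. The sketch below is illustrative only; `build_reduce_plan` and its call signature are made-up names, not the actual cuda.parallel API.

```python
# Illustrative sketch of a two-phase, CUB-style low-level interface.
# build_reduce_plan and its call signature are hypothetical names.
import cupy as cp
import numpy as np

d_in = cp.arange(1024, dtype=np.int32)
d_out = cp.empty(1, dtype=np.int32)
h_init = np.int32(0)

# Build a reusable plan once (kernel compilation, policy selection, ...).
plan = build_reduce_plan(d_in.dtype, d_out.dtype, op=lambda a, b: a + b)

# Phase 1: pass a null temp-storage pointer to query the required scratch size.
temp_bytes = plan(None, d_in, d_out, d_in.size, h_init)

# Phase 2: allocate scratch space and run the reduction.
temp_storage = cp.empty(temp_bytes, dtype=np.uint8)
plan(temp_storage, d_in, d_out, d_in.size, h_init)
```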
In the function

```python
def is_contiguous(arr: DeviceArrayLike) -> bool:
    shape, strides = get_shape(arr), get_strides(arr)
    if strides is None:
        return True
    if any(dim == 0 for dim in shape):
        # array has no elements
        return True
    [---SNIPPED--]
```

`shape` is computed up front, but we do not use it when `strides is None`. Deferring the `get_shape` call avoids that extra lookup on the early-return path:

```python
def is_contiguous(arr: DeviceArrayLike) -> bool:
    strides = get_strides(arr)
    if strides is None:
        return True
    shape = get_shape(arr)
    if any(dim == 0 for dim in shape):
        # array has no elements
        return True
    [---SNIPPED--]
```
Looks good to me @shwina
Description
This PR addresses some of the performance issues found by @oleksandr-pavlyk in #3213.
Changes introduced in this PR
Mainly, the performance improvement comes from the following:
- Removing type validation between the calls to `Reduce.__init__` and `Reduce.__call__`: while this removes several guardrails, I think it's appropriate. Higher-level APIs can hide the `Reduce` object from the user altogether and ensure that there is no way to pass objects of different dtype between the calls to `__init__` and `__call__`.
- Adding fast paths for `protocols.get_data_ptr` and `protocols.get_dtype`: introspecting `__cuda_array_interface__` for the data pointer and dtype is slow. Until we can figure out a faster, more general way to get that information for different array types, this PR adds a fast path that works for CuPy (and Numba) arrays specifically. For other array types (like torch tensors, for example), it will fall back to the regular (slower) path. (A rough sketch of the idea appears after this list.)
- Using CuPy to query the current device's compute capability: as described in "Querying current device is slow compared to CuPy" (cuda-python#439), querying the CC is quite slow (using both Numba and CUDA Python) compared to CuPy.
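To make the fast-path idea concrete, here is a simplified sketch (not the exact code in this PR): CuPy and Numba device arrays expose the pointer and dtype as cheap attributes, so we only fall back to building and parsing the `__cuda_array_interface__` dict for other array types.

```python
# Simplified sketch of the fast-path idea; names and structure are illustrative
# and may not match the PR's actual implementation.
import numpy as np


def get_data_ptr(arr) -> int:
    """Return the device pointer of a device array."""
    try:
        # Fast path: CuPy arrays expose the raw pointer directly.
        return arr.data.ptr
    except AttributeError:
        pass
    try:
        # Fast path: Numba device arrays.
        return arr.device_ctypes_pointer.value
    except AttributeError:
        # Slow path: generic protocol lookup (e.g. torch tensors).
        return arr.__cuda_array_interface__["data"][0]


def get_dtype(arr) -> np.dtype:
    """Return the dtype of a device array."""
    try:
        # Fast path: CuPy and Numba device arrays carry a .dtype attribute.
        return np.dtype(arr.dtype)
    except AttributeError:
        # Slow path: parse the typestr from the CUDA Array Interface.
        return np.dtype(arr.__cuda_array_interface__["typestr"])
```

The compute-capability query can similarly go through CuPy (again, just a sketch of the approach):

```python
import cupy as cp

# CuPy reports the compute capability as a string, e.g. "86" for sm_86.
cc = cp.cuda.Device().compute_capability
major, minor = int(cc[:-1]), int(cc[-1])
```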
Results
The plot below shows the performance improvement that this PR brings to `reduce()` vs. the main branch (the plot itself is not reproduced here). I used Sasha's benchmarking scripts to generate these results.
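As a point of reference, constant per-call overhead of this kind is usually estimated by timing many back-to-back calls on a tiny input and dividing by the iteration count. The snippet below is a minimal sketch of such a measurement against a CuPy baseline; it is not the benchmarking script referenced above.

```python
# Minimal sketch of measuring per-call (constant) overhead for a tiny
# reduction. A cuda.parallel reduce call would be timed the same way in
# place of cp.sum.
import time

import cupy as cp

d_in = cp.arange(16, dtype=cp.int32)  # tiny input: launch overhead dominates

# Warm up caches (compiled kernels, memory pool, plans, ...).
for _ in range(10):
    cp.sum(d_in)
cp.cuda.Device().synchronize()

n_iters = 1000
start = time.perf_counter()
for _ in range(n_iters):
    cp.sum(d_in)
cp.cuda.Device().synchronize()
elapsed = time.perf_counter() - start

print(f"~{elapsed / n_iters * 1e6:.1f} us per call")
```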
Alternatives
One idea that came up in a conversation with @leofang: we could consider changing the API to not accept `__cuda_array_interface__` objects, and instead have the user pass in the required information (pointer, size, dtype, etc.) directly. This allows each library/user to compute that information in the most efficient way possible rather than making it our responsibility.
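To make that idea concrete, here is a purely hypothetical sketch of such an entry point; none of these names exist in cuda.parallel, and the signature is illustrative only.

```python
# Hypothetical sketch only: this function does not exist in cuda.parallel.
# The caller supplies pointers and metadata directly, so the library never
# has to introspect __cuda_array_interface__ on the hot path.
import numpy as np


def reduce_into_raw(
    d_in_ptr: int,      # device pointer to the input
    d_out_ptr: int,     # device pointer to the (single-element) output
    num_items: int,     # number of input elements
    dtype: np.dtype,    # element dtype shared by input and output
    op,                 # binary reduction operator
    h_init,             # host-side initial value
    stream: int = 0,    # CUDA stream handle
) -> None:
    """Run a device reduction given raw pointers and explicit metadata."""
    ...


# A caller that already has this information (e.g. CuPy) could pass it along
# without any per-call introspection:
#   reduce_into_raw(d_in.data.ptr, d_out.data.ptr, d_in.size,
#                   d_in.dtype, op, h_init)
```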