Quantized Flux not working #2511
This error is most likely not due to the model itself but rather to the CUDA setup. The bf16 kernels are guarded by the following:

```cuda
#if __CUDA_ARCH__ >= 800
...
#endif
```

This makes the kernels available only when the CUDA arch targeted by the nvcc compiler is 8.0 or above, so it's likely that this condition doesn't hold in your setup. It would be interesting to see which value `__CUDA_ARCH__` ends up with on your machine.
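For illustration, a minimal sketch of what that kind of guard looks like when kernels are compiled straight to PTX; the kernel name and body here are made up, only the guard pattern matches:

```cuda
// Hypothetical bf16 kernel, emitted only when nvcc targets sm_80 or
// newer (Ampere+). For example:
//   nvcc --ptx -arch=sm_89 gated.cu   -> PTX contains add_bf16
//   nvcc --ptx -arch=sm_75 gated.cu   -> PTX does not
#include <cuda_bf16.h>

#if __CUDA_ARCH__ >= 800
extern "C" __global__ void add_bf16(const __nv_bfloat16 *a,
                                    const __nv_bfloat16 *b,
                                    __nv_bfloat16 *out, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) out[i] = __hadd(a[i], b[i]);
}
#endif
```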
This machine has 2 GPUs. When I run the Stable Diffusion examples it uses the ADA 4000 with an 8.9 compute cap.
How do I see the value of `__CUDA_ARCH__`?
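`__CUDA_ARCH__` itself is a compile-time macro set from the arch flags passed to nvcc, but you can list what each physical card supports with the plain CUDA runtime API. A small sketch (nothing candle-specific; the file name is made up):

```cuda
// list_caps.cu: print each GPU's compute capability; the bf16 kernels
// need 8.0 or newer. Build with: nvcc list_caps.cu -o list_caps
#include <cstdio>
#include <cuda_runtime.h>

int main() {
  int count = 0;
  cudaGetDeviceCount(&count);
  for (int i = 0; i < count; ++i) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, i);
    printf("device %d: %s, compute cap %d.%d\n",
           i, prop.name, prop.major, prop.minor);
  }
  return 0;
}
```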
That first GPU is most likely creating the issue. Did you try using CUDA_VISIBLE_DEVICES to select the newer card?
When CUDA_VISIBLE_DEVICES is set to the correct device, I can see that the correct GPU is used.
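As a side note on how that masking behaves: with CUDA_VISIBLE_DEVICES set, the runtime renumbers the remaining GPUs, so device 0 as seen by the process is the selected card. A quick sketch to confirm which card a process actually sees:

```cuda
// which_gpu.cu: run with e.g. CUDA_VISIBLE_DEVICES=1 ./which_gpu to
// confirm that device 0, from the process's point of view, is the
// card you meant to select.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
  cudaDeviceProp prop;
  if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
    printf("no visible CUDA device\n");
    return 1;
  }
  printf("visible device 0: %s (cc %d.%d)\n",
         prop.name, prop.major, prop.minor);
  return 0;
}
```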
Probably good to clean your build directory and rebuild so the kernels get recompiled.
I did a clean and it's the same error. Not sure where to look next.
Hum, seems weird that candle can use the older card if CUDA_VISIBLE_DEVICES is pointing at the right one.
Actually it is pointing to the right card. CUDA_VISIBLE_DEVICES works as it should; there is no problem there. It's using the correct card and crashing.
Just checking back in. No idea how to troubleshoot this.
And on Mac M1 we get an error.
Hi, I'm getting an error on my ADA RTX 4000 machine, which supports BF16 and runs Stable Diffusion just fine. The error shows up with the quantized FLUX update.
Running with no model specified, or with dev or schnell:
error
Any ideas?