[Wan] Potential bug. #306
Comments
Hey, sorry for the inconvenience. I just fixed the issue. It wasn't caught by the unit tests because of a bug in the tests that caused everything to pass by default. I'll make some more improvements to address the actual root cause soon. BTW, please hold off a bit longer on training with Wan. A change was made upstream in diffusers that breaks current training: huggingface/diffusers#10998. I'll work on the fix ASAP.
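For context, a common way a test suite can "pass by default" is when the test body swallows exceptions instead of letting them fail the test. The sketch below is purely illustrative of that anti-pattern, not the actual test code from this repository; the run_training_step helper is hypothetical.

```python
import unittest


def run_training_step():
    # Hypothetical stand-in for the code under test; pretend it hits the bug.
    raise RuntimeError("simulated training failure")


class TrainerSmokeTest(unittest.TestCase):
    def test_passes_by_default(self):
        # Anti-pattern: catching every exception means nothing can fail this
        # test, so a broken training step still shows up as green.
        try:
            run_training_step()
        except Exception:
            pass  # failure silently ignored, test "passes by default"

    def test_catches_regressions(self):
        # Fixed pattern: let the exception propagate so the suite goes red
        # when the training step breaks (this one fails with the stand-in above).
        run_training_step()


if __name__ == "__main__":
    unittest.main()
```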
Opened #308 to fix the scaling-related changes from upstream. I've queued a run to verify it's correct and matches the previous behavior, and I'll update here once that's done.
No problem, I understand perfectly. Take your time. I want to try training on the Wan model, but I'm also very interested in SkyReels training, if the I2V fits under 16GB ;)
I've updated to the latest main branch and started a Wan LoRA training on the same corpus as a previous test. I don't remember when, or with which version of the repo, but I had managed to start training on the Wan 1.3B model. I stopped that training after a few hundred steps; it was running at about twenty seconds per iteration, but it did run on my 16GB GPU.
By the way, I made a request to kijai about loading the LoRA with his ComfyUI node. You can download one of the previously trained checkpoints from one of my earlier messages.
kijai/ComfyUI-WanVideoWrapper#176
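While waiting for ComfyUI-side support, a LoRA trained with this repo can usually also be sanity-checked directly in diffusers. Below is a minimal sketch, assuming the diffusers WanPipeline LoRA loader; the model ID, generation settings, and the local checkpoint path are assumptions for illustration, not taken from this thread.

```python
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

# Assumed Diffusers-format Wan 2.1 1.3B repo.
model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"

# The Wan VAE is typically kept in float32 for numerical stability.
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)

# Hypothetical path to the LoRA checkpoint produced by training.
pipe.load_lora_weights("path/to/pytorch_lora_weights.safetensors")
pipe.enable_model_cpu_offload()  # helps fit inference on a 16GB GPU

video = pipe(
    prompt="a short test prompt in the style of the training corpus",
    height=480,
    width=832,
    num_frames=49,
).frames[0]
export_to_video(video, "wan_lora_test.mp4", fps=16)
```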
Here, I use the --enable_precomputation flag and, as you can see, it takes ~30 minutes on my GPU. But right after the precomputation finishes, I hit this bug.
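For readers unfamiliar with the flag: precomputation generally means encoding the text prompts and video latents once, caching them to disk, and then training only on the cached tensors, which is why it runs as a separate ~30-minute phase before the first optimizer step. The sketch below illustrates that caching pattern in general terms; the encode calls and cache layout are hypothetical, not the repo's actual implementation.

```python
from pathlib import Path

import torch


def precompute_cache(dataset, text_encoder, vae, cache_dir="precomputed"):
    """Illustrative one-off pass: encode every sample once and cache it to disk.

    `dataset` is assumed to yield dicts with "prompt" and "video" entries;
    `text_encoder` and `vae` are stand-ins for the real encoders.
    """
    cache = Path(cache_dir)
    cache.mkdir(exist_ok=True)
    for idx, sample in enumerate(dataset):
        with torch.no_grad():
            prompt_embeds = text_encoder(sample["prompt"])  # hypothetical call
            latents = vae.encode(sample["video"])           # hypothetical call
        torch.save(
            {"prompt_embeds": prompt_embeds, "latents": latents},
            cache / f"{idx:06d}.pt",
        )


def load_cached(cache_dir="precomputed"):
    """The training loop then streams cached tensors instead of re-encoding each step."""
    for path in sorted(Path(cache_dir).glob("*.pt")):
        yield torch.load(path)
```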