[LoRA] Support Wan #10943
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Image-to-Video works as expected after the changes to the pipeline. Most of the changes are to address consistency across implementations and fixes to default values.

```python
import torch
from diffusers import AutoencoderKLWan, WanImageToVideoPipeline, UniPCMultistepScheduler
from diffusers.utils import export_to_video, load_image

# Available models: Wan-AI/Wan2.1-I2V-14B-480P-Diffusers, Wan-AI/Wan2.1-I2V-14B-720P-Diffusers
model_id = "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers"

# Load the VAE in float32 for better numerical stability; the rest of the pipeline runs in bfloat16.
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
scheduler = UniPCMultistepScheduler(prediction_type="flow_prediction", use_flow_sigmas=True, flow_shift=6.0)
pipe = WanImageToVideoPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
pipe.scheduler = scheduler
pipe.to("cuda")

# height, width = 480, 832
height, width = 480, 704

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/astronaut.jpg")
prompt = (
    "An astronaut hatching from an egg, on the surface of the moon, the darkness and depth of space realised in "
    "the background. High quality, ultrarealistic detail and breath-taking movie-like camera shot."
)
negative_prompt = "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards"

output = pipe(
    image=image,
    prompt=prompt,
    negative_prompt=negative_prompt,
    height=height,
    width=width,
    num_frames=81,
    num_inference_steps=30,
    guidance_scale=3.0,
).frames[0]
export_to_video(output, "output2.mp4", fps=15)
```

output2.mp4
```diff
@@ -114,9 +115,9 @@ def __init__(self, in_features: int, out_features: int):
         self.norm2 = nn.LayerNorm(out_features)

     def forward(self, encoder_hidden_states_image: torch.Tensor) -> torch.Tensor:
-        hidden_states = self.norm1(encoder_hidden_states_image)
+        hidden_states = self.norm1(encoder_hidden_states_image.float()).type_as(encoder_hidden_states_image)
```
We cannot always assume `self.norm1` is in float32 here. I.e., even for layers in `_keep_in_fp32_modules`, the user can still cast them to a different dtype by doing something like `model.to(torch.float16)`. Maybe change to `FP32LayerNorm`?
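For reference, a minimal sketch of what an `FP32LayerNorm`-style module does (diffusers ships one in `diffusers.models.normalization`; this sketch is illustrative rather than the exact implementation). The normalization math always runs in float32 and the result is cast back to the input dtype, so correctness no longer depends on what dtype the user casts the module to:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FP32LayerNorm(nn.LayerNorm):
    """LayerNorm that always computes in float32, then casts back to the input dtype."""

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        origin_dtype = inputs.dtype
        return F.layer_norm(
            inputs.float(),
            self.normalized_shape,
            self.weight.float() if self.weight is not None else None,
            self.bias.float() if self.bias is not None else None,
            self.eps,
        ).to(origin_dtype)
```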
```diff
-        num_channels_latents: 16,
-        height: int = 720,
-        width: int = 1280,
-        num_latent_frames: int = 21,
+        num_channels_latents: int = 16,
+        height: int = 480,
+        width: int = 832,
+        num_frames: int = 81,
```
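For reference, the new pipeline-level `num_frames` default and the old latent-space `num_latent_frames` default are consistent with each other; a small sketch of the arithmetic, assuming the temporal compression factor of 4 used by the Wan VAE:

```python
# The Wan2.1 VAE compresses time by a factor of 4 (assumption based on its config).
vae_scale_factor_temporal = 4
num_frames = 81  # new pipeline-level default
num_latent_frames = (num_frames - 1) // vae_scale_factor_temporal + 1
assert num_latent_frames == 21  # matches the old latent-space default
```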
I will trust that these are safe enough changes for this PR.
Yes, should be safe imo. These are the defaults used for inference and training, and the existing values were copied from the Hunyuan pipeline, I think.
```diff
-            height (`int`, defaults to `720`):
+            height (`int`, defaults to `480`):
                 The height in pixels of the generated image.
-            width (`int`, defaults to `1280`):
+            width (`int`, defaults to `832`):
                 The width in pixels of the generated image.
-            num_frames (`int`, defaults to `129`):
+            num_frames (`int`, defaults to `81`):
```
Okay, the above changes in `prepare_latents()` make sense to me.
```diff
+        tower_name = (
+            "transformer_blocks"
+            if any(name == "transformer_blocks" for name in named_modules)
+            else "blocks"
+        )
+        transformer_tower = getattr(pipe.transformer, tower_name)
         has_attn1 = any("attn1" in name for name in named_modules)
         if has_attn1:
-            pipe.transformer.transformer_blocks[0].attn1.to_q.lora_A["adapter-1"].weight += float("inf")
+            transformer_tower[0].attn1.to_q.lora_A["adapter-1"].weight += float("inf")
         else:
-            pipe.transformer.transformer_blocks[0].attn.to_q.lora_A["adapter-1"].weight += float("inf")
+            transformer_tower[0].attn.to_q.lora_A["adapter-1"].weight += float("inf")
```
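For context, a minimal sketch of the lookup pattern used above: the block container is resolved by name so the same NaN-fuse test works both for transformers that expose `transformer_blocks` and for models whose container is named `blocks` (an illustration of the pattern, not the test's exact code):

```python
def get_transformer_tower(transformer):
    # Collect module names once; membership checks mirror the test's `any(...)` logic.
    named_modules = [name for name, _ in transformer.named_modules()]
    tower_name = "transformer_blocks" if "transformer_blocks" in named_modules else "blocks"
    return getattr(transformer, tower_name)
```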
Okay for me. Will remove the test here: `def test_lora_fuse_nan(self):`
Results look juicy! Thanks for working on this!
Another example on this dataset: https://huggingface.co/datasets/finetrainers/3dgs-dissolve

output.mp4

This is the 1.3B T2V model. Can't yet train the 14B T2V/I2V models because of how slow it would be, so I'm going to try and find time for the flash attention backend and context parallel in the trainer this weekend!
Fantastic! Time for an aesthetic FT run.
Looks amazing! Are you planning to release the LoRA for us? :)
Will be made available through https://huggingface.co/finetrainers.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Thanks!
@nitinmukesh Of course! Here you go: https://huggingface.co/finetrainers/Wan2.1-T2V-1.3B-3dgs-v0. It's a bit overtrained, so it needs a lower LoRA strength to work. More amazing things on the way with these learnings in mind!
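For anyone trying it, a hedged sketch of loading the LoRA at a reduced strength with the standard diffusers LoRA API this PR adds (the 0.5 scale is an illustrative guess, not a tuned value):

```python
import torch
from diffusers import WanPipeline

pipe = WanPipeline.from_pretrained("Wan-AI/Wan2.1-T2V-1.3B-Diffusers", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("finetrainers/Wan2.1-T2V-1.3B-3dgs-v0", adapter_name="3dgs")
pipe.set_adapters(["3dgs"], [0.5])  # lower strength since the LoRA is a bit overtrained
pipe.to("cuda")
```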
Awesome. Thank you.
Training: a-r-r-o-w/finetrainers#281
Dummy model: https://huggingface.co/finetrainers/Wan2.1-T2V-1.3B-crush-smol-v0
cc @yiyixuxu for pipeline related changes