
[LoRA] Support Wan #10943

Merged: a-r-r-o-w merged 6 commits into main from lora/wan on Mar 4, 2025
Conversation

a-r-r-o-w (Member) commented Mar 3, 2025

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

a-r-r-o-w (Member Author)

Image-to-Video works as expected after the changes to the pipeline. Most of the changes address consistency across implementations and fix default values.

import torch
from diffusers import AutoencoderKLWan, WanImageToVideoPipeline, UniPCMultistepScheduler
from diffusers.utils import export_to_video, load_image

# Available models: Wan-AI/Wan2.1-I2V-14B-480P, Wan-AI/Wan2.1-I2V-14B-720P
model_id = "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers"
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
scheduler = UniPCMultistepScheduler(prediction_type="flow_prediction", use_flow_sigmas=True, flow_shift=6.0)
pipe = WanImageToVideoPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
pipe.scheduler = scheduler
pipe.to("cuda")

# height, width = 480, 832
height, width = 480, 704
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/astronaut.jpg")
prompt = (
    "An astronaut hatching from an egg, on the surface of the moon, the darkness and depth of space realised in "
    "the background. High quality, ultrarealistic detail and breath-taking movie-like camera shot."
)
negative_prompt = "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards"

output = pipe(
    image=image,
    prompt=prompt,
    negative_prompt=negative_prompt,
    height=height,
    width=width,
    num_frames=81,
    num_inference_steps=30,
    guidance_scale=3.0
).frames[0]
export_to_video(output, "output2.mp4", fps=15)
(attached video: output2.mp4)

@a-r-r-o-w a-r-r-o-w added the roadmap Add to current release roadmap label Mar 3, 2025
@a-r-r-o-w a-r-r-o-w requested review from yiyixuxu and sayakpaul March 3, 2025 21:08
@a-r-r-o-w a-r-r-o-w marked this pull request as ready for review March 3, 2025 21:09
@@ -114,9 +115,9 @@ def __init__(self, in_features: int, out_features: int):
         self.norm2 = nn.LayerNorm(out_features)

     def forward(self, encoder_hidden_states_image: torch.Tensor) -> torch.Tensor:
-        hidden_states = self.norm1(encoder_hidden_states_image)
+        hidden_states = self.norm1(encoder_hidden_states_image.float()).type_as(encoder_hidden_states_image)
Collaborator

We cannot always assume self.norm1 is in float32 here, i.e. even for layers in _keep_in_fp32_modules, the user can still cast it to a different dtype if they do something like model.to(torch.float16).

Maybe change it to FP32LayerNorm?
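For context, the point of an FP32LayerNorm is that the normalization math always runs in float32 and the result is cast back to the input dtype, regardless of what dtype the module's weights have been cast to. A minimal sketch of such a layer (diffusers ships its own FP32LayerNorm in diffusers.models.normalization; this is an illustration of the idea, not the exact implementation):

import torch
import torch.nn as nn
import torch.nn.functional as F

class FP32LayerNorm(nn.LayerNorm):
    # Normalize in float32 for numerical stability, then cast back to the input dtype.
    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        original_dtype = inputs.dtype
        return F.layer_norm(
            inputs.float(),
            self.normalized_shape,
            self.weight.float() if self.weight is not None else None,
            self.bias.float() if self.bias is not None else None,
            self.eps,
        ).to(original_dtype)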

Comment on lines -302 to +306
-        num_channels_latents: 16,
-        height: int = 720,
-        width: int = 1280,
-        num_latent_frames: int = 21,
+        num_channels_latents: int = 16,
+        height: int = 480,
+        width: int = 832,
+        num_frames: int = 81,
Member

I will trust that these are safe enough changes for this PR.

Member Author

Yes, should be safe imo. These are the defaults used for inference and training, and the existing values were copied from the Hunyuan pipeline, I think.
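As a quick sanity check on the new defaults (assuming the usual (num_frames - 1) // temporal_compression + 1 relation used by video pipelines and Wan's 4x temporal VAE compression), the default num_frames=81 reproduces the 21 latent frames that the old signature hard-coded:

# Illustrative arithmetic only; these names are not from the diff itself.
vae_scale_factor_temporal = 4          # Wan VAE compresses time by 4x
num_frames = 81                        # new pipeline default
num_latent_frames = (num_frames - 1) // vae_scale_factor_temporal + 1
assert num_latent_frames == 21         # the value previously hard-coded as num_latent_frames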

Comment on lines -381 to +392
-            height (`int`, defaults to `720`):
+            height (`int`, defaults to `480`):
                 The height in pixels of the generated image.
-            width (`int`, defaults to `1280`):
+            width (`int`, defaults to `832`):
                 The width in pixels of the generated image.
-            num_frames (`int`, defaults to `129`):
+            num_frames (`int`, defaults to `81`):
Member

Okay the above changes in prepare_latents() make sense to me.

Comment on lines +1597 to +1607
+        tower_name = (
+            "transformer_blocks"
+            if any(name == "transformer_blocks" for name in named_modules)
+            else "blocks"
+        )
+        transformer_tower = getattr(pipe.transformer, tower_name)
         has_attn1 = any("attn1" in name for name in named_modules)
         if has_attn1:
-            pipe.transformer.transformer_blocks[0].attn1.to_q.lora_A["adapter-1"].weight += float("inf")
+            transformer_tower[0].attn1.to_q.lora_A["adapter-1"].weight += float("inf")
         else:
-            pipe.transformer.transformer_blocks[0].attn.to_q.lora_A["adapter-1"].weight += float("inf")
+            transformer_tower[0].attn.to_q.lora_A["adapter-1"].weight += float("inf")
Member

Okay for me. Will remove the test here, then:

def test_lora_fuse_nan(self):

(I will take care of it after this PR is merged)

sayakpaul (Member) left a comment

Results look juicy! Thanks for working on this!

a-r-r-o-w (Member Author) commented Mar 4, 2025

Another example on this dataset: https://huggingface.co/datasets/finetrainers/3dgs-dissolve!

(attached video: output.mp4)

This is the 1.3B T2V model. Can't yet train the 14B T2V/I2V models because of how slow it would be, so I'm going to try to find time for the flash attention backend and context parallelism in the trainer this weekend!

sayakpaul (Member)

Fantastic! Time for an aesthetic FT run

nitinmukesh commented Mar 4, 2025

@a-r-r-o-w

Looks amazing, are you planning to release the LoRA for us? :)

sayakpaul (Member)

Will be made available through https://huggingface.co/finetrainers.

yiyixuxu (Collaborator) left a comment

thanks!

a-r-r-o-w (Member Author)

@nitinmukesh Of course! Here you go: https://huggingface.co/finetrainers/Wan2.1-T2V-1.3B-3dgs-v0

It's a bit overtrained, so it needs a lower LoRA strength to work. More amazing things are on the way with these learnings in mind!
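For anyone wanting to try it, loading the LoRA should follow the usual diffusers pattern. A rough sketch (the base T2V model id, prompt, and the 0.75 strength are assumptions for illustration; only the LoRA repo id above is from this thread):

import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

# Base model id is an assumption; adjust to the checkpoint you actually use.
pipe = WanPipeline.from_pretrained("Wan-AI/Wan2.1-T2V-1.3B-Diffusers", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("finetrainers/Wan2.1-T2V-1.3B-3dgs-v0", adapter_name="3dgs")
pipe.set_adapters(["3dgs"], [0.75])  # lower LoRA strength, since the adapter is a bit overtrained
pipe.to("cuda")

video = pipe(prompt="<your prompt>", num_frames=81, num_inference_steps=30).frames[0]
export_to_video(video, "wan_lora_output.mp4", fps=15)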

@a-r-r-o-w a-r-r-o-w merged commit 3ee899f into main Mar 4, 2025
28 of 30 checks passed
@a-r-r-o-w a-r-r-o-w deleted the lora/wan branch March 4, 2025 19:57
nitinmukesh

Awesome. Thank you.
