
Question about Spatial loss #44

Open · DanahYatim opened this issue Sep 4, 2024 · 1 comment

@DanahYatim

Hi, thank you so much for sharing your amazing work!

In the paper it is mentioned that the spatial LoRAs are trained on a single frame randomly sampled from the training video, so that they fit its appearance while ignoring its motion. This is based on the spatial loss, which is reformulated as:
$$\mathcal{L}_{\text{spatial}} = \mathbb{E}_{z_0,\, c,\, \epsilon,\, t,\, i}\Big[\big\lVert \epsilon_i - \epsilon_\theta(z_{t,i},\, c,\, t)\big\rVert_2^2\Big]$$

where $z_{t,i}$ is the noised latent of a single frame $i$ sampled uniformly from the training video.
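For reference, here is how I currently understand the single-frame sampling for this loss. This is just a minimal PyTorch sketch following diffusers conventions; `spatial_loss`, the frame-sampling details, and the scheduler/U-Net calls are my own assumptions, not code from this repo:

```python
import torch
import torch.nn.functional as F

def spatial_loss(unet, noise_scheduler, latents, text_emb):
    # latents: (B, C, F, H, W) video latents; only ONE frame per video is used.
    b, c, f, h, w = latents.shape

    # Sample one frame index per video and keep a length-1 frame axis.
    idx = torch.randint(0, f, (b,), device=latents.device)
    frame = latents[torch.arange(b, device=latents.device), :, idx]  # (B, C, H, W)
    frame = frame.unsqueeze(2)                                       # (B, C, 1, H, W)

    # Standard denoising objective on that single frame.
    noise = torch.randn_like(frame)
    t = torch.randint(0, noise_scheduler.config.num_train_timesteps, (b,),
                      device=latents.device)
    noisy = noise_scheduler.add_noise(frame, noise, t)

    # The 3D U-Net still runs, just on a "video" of length 1; during this step
    # gradients would be applied only to the spatial-LoRA parameters.
    pred = unet(noisy, t, encoder_hidden_states=text_emb).sample
    return F.mse_loss(pred, noise)
```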

I wanted to ask about the reasoning behind passing a single frame to the 3D U-Net, rather than just passing all frames, when training the spatial LoRAs. Given that the pretrained T2V model was trained on videos, why does it make sense to pass a single frame for this loss? Is the model even capable of generating a single frame?

Thanks

@ruizhaocv
Collaborator

Hi. The spatial LoRAs are injected only into the spatial layers, which process each frame independently and are therefore independent of the number of frames.
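To make that concrete: a spatial layer folds the frame axis into the batch axis before attending within each frame, so the same weights apply whether the input has 16 frames or 1. A toy sketch (my illustration, not the actual model code):

```python
import torch
import torch.nn as nn

class SpatialAttentionBlock(nn.Module):
    """Toy spatial block: attends over pixels WITHIN each frame only."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):  # x: (B, C, F, H, W)
        b, c, f, h, w = x.shape
        # Fold frames into the batch axis: every frame is processed independently,
        # so the layer (and any LoRA injected into it) never sees the frame count.
        x = x.permute(0, 2, 3, 4, 1).reshape(b * f, h * w, c)
        n = self.norm(x)
        x = x + self.attn(n, n, n)[0]
        return x.reshape(b, f, h, w, c).permute(0, 4, 1, 2, 3)

block = SpatialAttentionBlock(dim=64)
print(block(torch.randn(2, 64, 16, 8, 8)).shape)  # 16-frame video: OK
print(block(torch.randn(2, 64, 1, 8, 8)).shape)   # single frame: also OK
```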
