Hi, thank you so much for sharing your amazing work!
The paper mentions that the spatial LoRAs are trained on a single frame randomly sampled from the training video, so that they fit its appearance while ignoring its motion, using the reformulated spatial loss.
I wanted to ask about the reasoning behind passing a single frame to the 3D U-Net, rather than all frames, when training the spatial LoRAs. Given that the pretrained T2V model was trained on videos, why does it make sense to pass a single frame for this loss? Is the model even capable of generating a single frame?
Thanks
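For concreteness, here is a minimal sketch of how I understand the single-frame spatial objective (this is not the repository's code; the tensor layout `(F, C, H, W)`, the toy noising step, and `denoise_fn` standing in for the 3D U-Net with spatial LoRAs are all my assumptions):

```python
# Hypothetical sketch of the single-frame spatial loss (not the authors' code).
# Assumed shapes: video is (F, C, H, W); a slice keeps the frames dimension
# so the 3D U-Net sees a "video" of length 1 and its temporal layers get no
# motion signal to fit.
import numpy as np

rng = np.random.default_rng(0)

def spatial_loss_single_frame(video, denoise_fn):
    """Sample one frame, add noise, and score the model's noise prediction.

    `denoise_fn` stands in for the 3D U-Net with spatial LoRAs enabled.
    The noising below is a toy stand-in, not the full diffusion schedule.
    """
    f = rng.integers(video.shape[0])            # random frame index
    frame = video[f : f + 1]                    # keep frames dim -> (1, C, H, W)
    noise = rng.standard_normal(frame.shape)
    noised = frame + noise                      # toy forward-diffusion step
    pred = denoise_fn(noised)                   # model predicts the added noise
    return float(np.mean((pred - noise) ** 2))  # MSE spatial loss

# Toy usage with a dummy "model" that predicts zero noise:
video = rng.standard_normal((16, 3, 8, 8))      # 16-frame toy video
loss = spatial_loss_single_frame(video, lambda x: np.zeros_like(x))
```

If this reading is right, my question is why a length-1 clip is preferable to feeding all F frames for this loss.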