v1 #270
Conversation
Co-authored-by: Will Lin <[email protected]>
Co-authored-by: Peiyuan Zhang <[email protected]>
Co-authored-by: Ubuntu <ubuntu@awesome-gpu-name-8-inst-2tbsnfodvpomxv4tukw2dkfgyvz.c.nv-brev-20240723.internal>
Co-authored-by: Ubuntu <ubuntu@awesome-gpu-name-9-inst-2tpydiudxfu1jg9xvpflm7oexie.c.nv-brev-20240723.internal>
…/FastVideo into rebased-refactor
Add wan dit
Co-authored-by: SolitaryThinker <[email protected]>
force-pushed from c076f8a to baa5b0e
Co-authored-by: Peiyuan Zhang <[email protected]>
Co-authored-by: Will Lin <[email protected]>
return torch.stack((o1, o2), dim=-1).flatten(-2)

@CustomOp.register("rotary_embedding")
A little surprised that we need to customize a CustomOp for the rotary embedding class?
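For context, a minimal sketch of the interleaved rotary application that the stack/flatten line implements; the `cos`/`sin` arguments and the even/odd split are assumptions for illustration, not the PR's actual signature:

```python
import torch

def apply_rotary(x: torch.Tensor, cos: torch.Tensor,
                 sin: torch.Tensor) -> torch.Tensor:
    # Split the head dim into interleaved even/odd halves (assumed layout;
    # cos/sin broadcast against the half-sized last dim).
    x1, x2 = x[..., ::2], x[..., 1::2]
    # Rotate each pair: (x1, x2) -> (x1*cos - x2*sin, x1*sin + x2*cos).
    o1 = x1 * cos - x2 * sin
    o2 = x1 * sin + x2 * cos
    # Re-interleave the rotated halves: stack pairs on a new last dim,
    # then flatten, exactly the torch.stack(...).flatten(-2) in the diff.
    return torch.stack((o1, o2), dim=-1).flatten(-2)
```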
uses_last_layer = feature_sample_layers[-1] in (len(hs_pool) - 1, -1)
if post_layer_norm is not None and uses_last_layer:
    hs_pool[-1] = post_layer_norm(encoder_outputs)
return torch.cat(hs_pool, dim=-1)
space at EOF
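For context on this hunk, a minimal sketch of the layer-pooling logic it belongs to; the function and variable names here are simplified assumptions, not the PR's exact code:

```python
from typing import Optional

import torch
from torch import nn

def pool_encoder_layers(hidden_states: list[torch.Tensor],
                        feature_sample_layers: list[int],
                        post_layer_norm: Optional[nn.Module] = None
                        ) -> torch.Tensor:
    # Gather the requested encoder layers; indices may be absolute or
    # negative (counted from the end).
    hs_pool = [hidden_states[i] for i in feature_sample_layers]
    # The final LayerNorm is only valid on the encoder's last layer,
    # hence the uses_last_layer guard in the diff above.
    uses_last_layer = feature_sample_layers[-1] in (len(hidden_states) - 1, -1)
    if post_layer_norm is not None and uses_last_layer:
        hs_pool[-1] = post_layer_norm(hs_pool[-1])
    # Concatenate along the feature dim so downstream code sees one tensor.
    return torch.cat(hs_pool, dim=-1)
```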
from fastvideo.v1.inference_args import InferenceArgs
from fastvideo.v1.logger import init_logger

from ..composed_pipeline_base import ComposedPipelineBase
I saw a lot of relative imports throughout; can we avoid them?
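For example, the relative import in this hunk could become an absolute one; the exact package path below is a guess at where composed_pipeline_base lives, so adjust it to the real module location:

```python
# Relative form (as in the diff):
from ..composed_pipeline_base import ComposedPipelineBase

# Absolute form (assumed path):
from fastvideo.v1.pipelines.composed_pipeline_base import ComposedPipelineBase
```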
def __init__(self, vae) -> None:
    self.vae = vae

def forward(
One odd thing I noticed: you only use the forward context in text_encoding.py, not anywhere else. Is that correct? If yes, why?
we also use it for DiTs (set in the denoising stage).
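For readers following along, a minimal sketch of the forward-context pattern under discussion; the `set_forward_context`/`get_forward_context` names follow vLLM-style conventions and are assumptions about FastVideo's actual API:

```python
import contextlib
from typing import Any, Optional

_forward_context: Optional[Any] = None

@contextlib.contextmanager
def set_forward_context(ctx: Any):
    # Install per-forward-pass state (e.g. attention metadata) so deeply
    # nested modules can read it without threading extra arguments through.
    global _forward_context
    prev, _forward_context = _forward_context, ctx
    try:
        yield
    finally:
        _forward_context = prev

def get_forward_context() -> Any:
    return _forward_context
```

Under this pattern, both the text-encoding and denoising stages would wrap their model calls in `with set_forward_context(...)`, which matches the reply above.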
Input validation being a stage sounds strange, but I guess it's OK for now.
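For illustration, a minimal sketch of the kind of stage interface being discussed; the class names and batch fields are hypothetical, not FastVideo's actual API:

```python
from abc import ABC, abstractmethod

class PipelineStage(ABC):
    # Each stage transforms a shared batch object and hands it to the next
    # stage; under this design, input validation is simply the first stage.
    @abstractmethod
    def forward(self, batch, inference_args):
        ...

class InputValidationStage(PipelineStage):
    def forward(self, batch, inference_args):
        # Fail fast on malformed requests before any heavy model work
        # (batch.prompt / batch.prompt_embeds are hypothetical fields).
        if batch.prompt is None and batch.prompt_embeds is None:
            raise ValueError("Either prompt or prompt_embeds must be set.")
        return batch
```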
- [ ] Attn backend (PY)
- [ ] Wan text encoder (PY)
- [ ] Wan VAE (Wei)
- [ ] Wan pipeline (PY & Wei)
- [ ] Merge Wan DiT code to refactor (Will)
- [ ] Clean up code & loader directory (Will)
- [ ] Hunyuan text encoder (Will)