
Commit

featured projects (#270)
* update

* update

* Apply suggestions from code review

Co-authored-by: Sayak Paul <[email protected]>

---------

Co-authored-by: Sayak Paul <[email protected]>
a-r-r-o-w and sayakpaul authored Feb 24, 2025
1 parent 61d14a7 commit 41dd338
Showing 2 changed files with 19 additions and 3 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -168,8 +168,10 @@ cython_debug/
wandb/
*.txt
dump*
*dummy*
outputs*
*.slurm
.vscode/
*.json

!requirements.txt
20 changes: 17 additions & 3 deletions README.md
@@ -1,9 +1,11 @@
# finetrainers 🧪

`cogvideox-factory` was renamed to `finetrainers`. If you're looking to train CogVideoX or Mochi with the legacy training scripts, please refer to [this](./training/README.md) README instead. Everything in the `training/` directory will be eventually moved and supported under `finetrainers`.

FineTrainers is a work-in-progress library to support (accessible) training of video models. Our first priority is to support LoRA training for all popular video models in [Diffusers](https://github.com/huggingface/diffusers), and eventually other methods like controlnets, control-loras, distillation, etc.

> [!NOTE]
>
> `cogvideox-factory` was renamed to `finetrainers`. If you're looking to train CogVideoX or Mochi with the legacy training scripts, please refer to [this](./examples/_legacy/) README instead.
<table align="center">
<tr>
<td align="center"><video src="https://github.com/user-attachments/assets/aad07161-87cb-4784-9e6b-16d06581e3e5">Your browser does not support the video tag.</video></td>
@@ -153,7 +155,19 @@ For inference, refer [here](./docs/training/ltx_video.md#inference). For docs re

If you would like to use a custom dataset, refer to the dataset preparation guide [here](./docs/dataset/README.md).
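For a sense of what inference with a trained LoRA can look like, here is a minimal sketch using the Diffusers API, assuming CogVideoX as the base model; the checkpoint path, adapter name, and prompt are hypothetical placeholders, so adapt them to your own training run.

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

# Load the base model in bfloat16 (assumes a CUDA GPU with enough memory).
pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)

# Attach LoRA weights produced by a finetrainers run; the path is a placeholder.
pipe.load_lora_weights("/path/to/my-lora-checkpoint", adapter_name="my-lora")
pipe.to("cuda")

# Generate a short clip and save it to disk.
video = pipe(prompt="a cat playing piano on a rooftop at sunset", num_frames=49).frames[0]
export_to_video(video, "output.mp4", fps=8)
```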

## Featured Projects 🔥

Check out some amazing projects citing `finetrainers`:
- [SkyworkAI's SkyReels-A1](https://github.com/SkyworkAI/SkyReels-A1)
- [eisneim's LTX Image-to-Video](https://github.com/eisneim/ltx_lora_training_i2v_t2v/)
- [wileewang's TransPixar](https://github.com/wileewang/TransPixar)
- [Feizc's Video-In-Context](https://github.com/feizc/Video-In-Context)

Check out the following UIs built for `finetrainers`:
- [jbilcke's VideoModelStudio](https://github.com/jbilcke-hf/VideoModelStudio)
- [neph1's finetrainers-ui](https://github.com/neph1/finetrainers-ui)

## Acknowledgements

* `finetrainers` builds on top of a body of great open-source libraries: `transformers`, `accelerate`, `peft`, `diffusers`, `bitsandbytes`, `torchao`, `deepspeed` -- to name a few.
* Some of the design choices of `finetrainers` were inspired by [`SimpleTuner`](https://github.com/bghira/SimpleTuner).
* Some of the design choices were inspired by [`SimpleTuner`](https://github.com/bghira/SimpleTuner).
