Better support of multi-modalities in input data transformations and augmentation #1

Open
tlorieul opened this issue Oct 13, 2022 · 0 comments
Labels: enhancement (New feature or request)
tlorieul commented Oct 13, 2022

It is currently not clear how to adapt the input data transformation and augmentation system to multi-modal data when the modalities cannot be stacked into a single tensor (e.g., when the patches have different sizes). This case occurs quite often in practice and should be handled properly.

If some transforms are randomized (as is typically the case with data augmentation), the modalities cannot be processed completely independently; otherwise, different random transforms would be applied to each of them.

To solve this, we might need to use functional transforms and handle the random parameters (or the randomizers' seeds) manually, sampling them once and applying them to every modality.
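
A minimal sketch of what this could look like using torchvision's functional API; the `PairedRandomTransform` class and the modality shapes below are hypothetical and only meant to illustrate sampling the random parameters once and applying them to each modality, even when the tensors have different sizes:

```python
import random

import torch
import torchvision.transforms.functional as F


class PairedRandomTransform:
    """Hypothetical transform that applies the same random augmentation
    parameters to every modality, without stacking them into one tensor."""

    def __init__(self, hflip_prob=0.5, max_angle=30.0):
        self.hflip_prob = hflip_prob
        self.max_angle = max_angle

    def __call__(self, modalities):
        # Sample the random decisions once for all modalities
        do_hflip = random.random() < self.hflip_prob
        angle = random.uniform(-self.max_angle, self.max_angle)

        outputs = []
        for x in modalities:
            if do_hflip:
                x = F.hflip(x)
            x = F.rotate(x, angle)
            outputs.append(x)
        return outputs


# Example with two modalities of different spatial sizes (hypothetical data)
rgb = torch.rand(3, 256, 256)
nir = torch.rand(1, 64, 64)
rgb_aug, nir_aug = PairedRandomTransform()([rgb, nir])
```

Another option along the same lines would be to re-seed the generator with the same value before transforming each modality, but sampling the parameters explicitly (as above) is probably easier to reason about and to test.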
