
Training model on multi-contrast (7T MP2RAGE and 3T T2w) images #84

KaterinaKrejci231054 opened this issue Feb 10, 2025 · 5 comments


KaterinaKrejci231054 (Contributor) commented Feb 10, 2025

This issue builds on issue #65, where different model training approaches were tested on MP2RAGE data. That discussion highlighted the positive impact of increasing the patch size in the superior-inferior (SI) direction on training. As a follow-up, an unbiased data split was introduced using a custom train-validation split, and the models were retrained based on the previous experiments.

Multi-contrast (MP2RAGE and T2w data) model training

Training with default vs. increased patch size

The focus of this issue is extending training to T2w data. As with the MP2RAGE models, increasing the patch size significantly improved both training and model performance, as shown in the training plots and figures below.

Multi-contrast model training with default patch size ([192, 96, 128])

[image]

Multi-contrast model training log with increased patch size in SI ([352, 96, 128])

[image]

Statistical results of the Dice coefficient on the test data for the multi-contrast model with the increased patch size:

[image]

Visual examples of results on the test data at two levels:

[image]

Training with pretrained weights

Next, the model was trained using pretrained weights from this model (following these instructions).
Specifically, the following adjustments were made to make the rootlets model compatible with the pretrained weights (a sketch of how these could be applied to the plans file follows the list):

  • "patch_size": [160, 224, 64]
  • "strides": [1,1,1],[2,2,2],[2,2,2],[2,2,2],[2,2,2],[2,2,1].

However, this attempt was not very successful. With pretrained weights, the model only began to learn level 4 at epoch 220, whereas the previous model (with the increased patch size but without pretrained weights) began learning this level as early as epoch 100. Moreover, the previous model also achieved better performance on the other levels.

Multi-contrast model training with pretrained weights

[image]

valosekj (Member) commented:

> Training with pretrained weights
> ...
> "patch_size": [160, 224, 64]

Based on a discussion on Slack I had with @naga-karthik, it should be possible to modify the patch size to be divisible by:

  • 16 in the R-L direction
  • 32 in the A-P direction
  • 32 in the S-I direction

By doing so, you could get closer to the "increased patch size in SI" of [352, 96, 128], which seems to have a positive effect on the training.
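
As a quick sanity check, here is a small sketch of verifying a candidate patch size against these divisibility constraints (the axis order [S-I, A-P, R-L] is assumed to match the plans file):

```python
# Divisors from the discussion above; axis order assumed [S-I, A-P, R-L].
DIVISORS = [("S-I", 32), ("A-P", 32), ("R-L", 16)]

def check_patch_size(patch_size):
    """Print whether each axis of the patch size is divisible by its divisor."""
    for (axis, div), size in zip(DIVISORS, patch_size):
        status = "OK" if size % div == 0 else f"NOT divisible by {div}"
        print(f"{axis}: {size} -> {status}")

check_patch_size([352, 96, 128])  # the increased-patch-size configuration
```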

valosekj (Member) commented Feb 10, 2025

Just for the record, contrast-agnostic r20250204's patch size corresponds to the following axes:

[160, 224, 64] (S-I, A-P, R-L)

KaterinaKrejci231054 (Contributor, Author) commented:

Thank you @valosekj, it seems that patch size [352, 96, 128] also corresponds to (S-I, A-P, R-L).

valosekj (Member) commented:

> it seems that patch size [352, 96, 128] also corresponds to (S-I, A-P, R-L)

It would be great to be sure about this. Can we reliably say this based on the nnUNet JSON plans file?

KaterinaKrejci231054 (Contributor, Author) commented:

If the axes are preserved according to this in the nnUNet JSON plans file:

  • "median_image_size_in_voxels": [367.0, 192.0, 241.0] (S-I, A-P, R-L)
  • "patch_size": [352, 96, 128] --> (S-I, A-P, R-L)
