This issue builds on issue #65, where different model training approaches were tested on MP2RAGE data. That discussion highlighted the positive impact of increasing the patch size in the SI direction on training. As a follow-up, an unbiased, custom train/validation split was introduced, and the models were retrained based on the previous experiments.
Multi-contrast (MP2RAGE and T2w data) model training
Training with default vs. increased patch size
The focus of this issue is extending training to T2w data. Similar to the MP2RAGE models, increasing the patch size significantly improved both training and model performance, as shown in the training plots and figures below.
Multi-contrast model training with default patch size ([192, 96, 128])
Multi-contrast model training log with increased patch size in SI ([352, 96, 128])
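As a side note, here is a minimal sketch of how the patch size can be increased in the SI direction by editing the nnU-Net v2 plans file, assuming nnU-Net v2 is the training framework; the plans path and dataset name are placeholders, and in practice the plans are often copied under a new plans identifier rather than edited in place:

```python
import json

# Hypothetical location of the plans file produced by nnUNetv2_plan_and_preprocess;
# the dataset name is a placeholder.
plans_path = "nnUNet_preprocessed/Dataset301_rootlets/nnUNetPlans.json"

with open(plans_path) as f:
    plans = json.load(f)

# Default 3d_fullres patch size was [192, 96, 128]; extend it in the SI direction.
plans["configurations"]["3d_fullres"]["patch_size"] = [352, 96, 128]

with open(plans_path, "w") as f:
    json.dump(plans, f, indent=4)
```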
Here are the Dice coefficient statistics for the multi-contrast model with increased patch size on the test data.
Here are visual examples of results on the test data at two levels.
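For reference, a per-level Dice can be computed from the predicted and ground-truth label images along these lines (a sketch only; the file names are hypothetical, and the labels are assumed to encode spinal levels as integer values):

```python
import numpy as np
import nibabel as nib

def dice(gt: np.ndarray, pred: np.ndarray) -> float:
    """Dice = 2 * |GT ∩ PRED| / (|GT| + |PRED|); NaN if both masks are empty."""
    denominator = gt.sum() + pred.sum()
    return 2.0 * np.sum(gt & pred) / denominator if denominator > 0 else float("nan")

# Hypothetical file names for one test subject
gt = np.asanyarray(nib.load("sub-001_label-rootlets.nii.gz").dataobj).astype(int)
pred = np.asanyarray(nib.load("sub-001_pred-rootlets.nii.gz").dataobj).astype(int)

# One Dice value per spinal level present in the ground truth (background label 0 excluded)
for level in sorted(set(np.unique(gt)) - {0}):
    print(f"level {level}: Dice = {dice(gt == level, pred == level):.3f}")
```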
Training with pretrained weights
Next, the model was trained using pretrained weights from this model (according to these instructions).
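As a rough illustration, such a fine-tuning run could be launched as sketched below, assuming nnU-Net v2 is used for training; the dataset ID, fold, and checkpoint path are placeholders:

```python
import subprocess

# Launch nnU-Net v2 training initialized from pretrained weights.
# "301" (dataset ID), the fold "0", and the checkpoint path are hypothetical placeholders.
subprocess.run(
    [
        "nnUNetv2_train",
        "301",
        "3d_fullres",
        "0",
        "-pretrained_weights", "path/to/pretrained/checkpoint_final.pth",
    ],
    check=True,
)
```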
Specifically, the following adjustments were made to make the rootlets model compatible with the pretrained weights:
However, this attempt was not very successful. With pretrained weights, the model only started learning level 4 at epoch 220, whereas the previous model (with increased patch size but without pretrained weights) already started learning this level at epoch 100. Moreover, the previous model also performed better on the other levels.
Multi-contrast model training with pretrained weights