This repository investigates the influence of different data augmentation strategies on training performance for MRI segmentation.
This repository contains:
- An nnUNet trainer with extensive data augmentations
- A basic MONAI segmentation script incorporating data augmentations
- A script generating augmentations from input images and segmentations
- Open a bash terminal in the directory where you want to work.
- Create and activate a virtual environment using Python >= 3.10 (highly recommended):
  - venv:
    ```bash
    python3 -m venv venv
    source venv/bin/activate
    ```
  - conda:
    ```bash
    conda create -n myenv python=3.10
    conda activate myenv
    ```
- Clone this repository:
  ```bash
  git clone git@github.com:neuropoly/AugLab.git
  cd AugLab
  ```
- Install AugLab using one of the following commands.
  Note: If you pull a new version from GitHub, make sure to rerun this command with the `--upgrade` flag.
  - nnunetv2-only usage:
    ```bash
    python3 -m pip install -e .[nnunetv2]
    ```
  - Full usage (with MONAI and other dependencies):
    ```bash
    python3 -m pip install -e .[all]
    ```
- Install PyTorch following the instructions on their website. Be sure to add the `--upgrade` flag to your installation command to replace any existing PyTorch installation. Example:
  ```bash
  python3 -m pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu118 --upgrade
  ```

To use the AugLab trainer with nnUNet, first add the trainer to your nnUNet installation by running:
```bash
auglab_add_nnunettrainer --trainer nnUNetTrainerDAExt
```

Then run nnUNet training as usual, specifying the AugLab trainer, for example:

```bash
nnUNetv2_train 100 3d_fullres 0 -tr nnUNetTrainerDAExtGPU -p nnUNetPlans
```

You can also specify your data augmentation parameters by providing a JSON file via the environment variable AUGLAB_PARAMS_GPU_JSON:
Note: By default, auglab/configs/transform_params_gpu.json is used if no file is specified.

```bash
AUGLAB_PARAMS_GPU_JSON=/path/to/your/params.json nnUNetv2_train 100 3d_fullres 0 -tr nnUNetTrainerDAExtGPU -p nnUNetPlans
```

⚠️ Warning: To avoid any path issues, specify an absolute path to your JSON file.
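As a rough illustration of how this environment variable behaves (the function below is a hypothetical sketch, not part of AugLab's actual code), the trainer can be thought of as resolving the parameters file like this:

```python
import os

# Default used when AUGLAB_PARAMS_GPU_JSON is not set (path as documented above).
DEFAULT_PARAMS = "auglab/configs/transform_params_gpu.json"

def resolve_params_path() -> str:
    """Return the augmentation-parameters JSON path: the value of the
    AUGLAB_PARAMS_GPU_JSON environment variable if set, else the default."""
    return os.environ.get("AUGLAB_PARAMS_GPU_JSON", DEFAULT_PARAMS)
```

This is why an absolute path is safer: a relative path would be resolved against whatever directory the training process happens to run in.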
To use AugLab augmentations in a MONAI training pipeline, refer to the example training script. Key implementation lines required for proper integration are marked with a 🐞 emoji in the comments.
To run the MONAI training script directly, provide a config JSON file (config.json) with paths to the images and labels (ground truth) for the TRAINING, VALIDATION and TESTING sets, like this:
```json
{
    "TYPE": "LABEL",
    "TRAINING": [
        {
            "IMAGE": "/path/to/image1.nii.gz",
            "LABEL": "/path/to/label1.nii.gz"
        },
        {
            "IMAGE": "/path/to/image2.nii.gz",
            "LABEL": "/path/to/label2.nii.gz"
        }
    ],
    "VALIDATION": [
        {
            "IMAGE": "/path/to/image3.nii.gz",
            "LABEL": "/path/to/label3.nii.gz"
        },
        {
            "IMAGE": "/path/to/image4.nii.gz",
            "LABEL": "/path/to/label4.nii.gz"
        }
    ],
    "TESTING": [
        {
            "IMAGE": "/path/to/image5.nii.gz",
            "LABEL": "/path/to/label5.nii.gz"
        }
    ]
}
```

Then run the training script with the following command, specifying the path to your config JSON file and, optionally, the path to your data augmentation parameters JSON file (otherwise the default transform_params_gpu.json is used):
```bash
python scripts/train_monai.py --config <your_path>/config.json --transforms <your_path>/transform_params_gpu.json
```

Additional parameters can be specified; see `python scripts/train_monai.py -h` for details. If anything is unclear, feel free to open an issue.
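To make the config format above concrete, here is a minimal sketch of how such a file could be parsed into per-split lists of image/label pairs before building datasets. The helper below is illustrative only and is not part of the repository's scripts:

```python
import json

def load_split(config_path: str, split: str) -> list[dict]:
    """Read the config JSON and return the entries for one split
    ("TRAINING", "VALIDATION" or "TESTING") as a list of
    {"IMAGE": ..., "LABEL": ...} dictionaries."""
    with open(config_path) as f:
        config = json.load(f)
    entries = config.get(split, [])
    # Sanity-check that every entry carries both an image and a label path.
    for entry in entries:
        if "IMAGE" not in entry or "LABEL" not in entry:
            raise ValueError(f"Malformed entry in {split}: {entry}")
    return entries
```

Dictionaries of this shape are the natural input for MONAI's dictionary-based transforms (e.g. LoadImaged with keys=["IMAGE", "LABEL"]).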
Scripts developed in this repository use JSON files to specify image and segmentation paths: see this example.
To track the parameters used during data augmentation, JSON files are also used: see this example.
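As a rough sketch of this parameter-tracking idea (the function names and file layout below are assumptions for illustration, not AugLab's actual format), the parameters applied during a run could be recorded to, and read back from, JSON like this:

```python
import json

def save_augmentation_params(params: dict, out_path: str) -> None:
    """Write the augmentation parameters used for a run to a JSON file,
    so the exact configuration can be inspected or reproduced later."""
    with open(out_path, "w") as f:
        json.dump(params, f, indent=4, sort_keys=True)

def load_augmentation_params(in_path: str) -> dict:
    """Read back a previously saved parameters file."""
    with open(in_path) as f:
        return json.load(f)
```

Keeping these files alongside training outputs makes it straightforward to compare runs that differ only in their augmentation settings.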