This workflow segments T2-hyperintense lesions in FLAIR images and includes a manual correction step to further refine the segmentation. Given a T2 FLAIR image, it produces class-confidence maps (softmaps), a segmentation map, and a PDF report with a visual overview of the segmented lesions, the number of lesions, and the volume of each lesion found.
The segmentation map uses the following value-to-class mapping:
- 0: non-lesion tissue and background
- 1: lesion
The white matter hyperintensity lesion segmentation tool included in the workflow consists primarily of a Convolutional Neural Network trained on a combination of proprietary and public databases of T2-FLAIR scans of patients with Multiple Sclerosis.
It includes an image pre-processing pipeline that resamples, skull-strips, and denoises (using curvature flow) the input images to prepare them for inference with the aforementioned model. The model generates confidence-score maps, and the final prediction map is obtained by assigning to each voxel the class with the highest confidence.
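The final step above can be sketched as a per-voxel argmax over the confidence maps. This is an illustrative sketch, not the workflow's actual code; the stacking of one confidence volume per class along the last axis is an assumption.

```python
import numpy as np

def softmaps_to_segmentation(softmaps: np.ndarray) -> np.ndarray:
    """Assign each voxel the class with the highest confidence.

    softmaps: assumed shape (X, Y, Z, n_classes), one confidence
    volume per class. Returns an integer label map of shape (X, Y, Z).
    """
    return np.argmax(softmaps, axis=-1).astype(np.uint8)

# Toy two-voxel example with two classes (0: non-lesion, 1: lesion).
softmaps = np.array([[[[0.9, 0.1], [0.3, 0.7]]]])  # shape (1, 1, 2, 2)
seg = softmaps_to_segmentation(softmaps)
print(seg.tolist())  # [[[0, 1]]]
```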
The output lesion mask, along with the preprocessed T2-FLAIR image, is passed to a manual step in which the user can correct the prediction by removing false positives and adding lesions missed by the model (false negatives).
Afterwards, the segmentation map is used to compute the volume of each lesion. This information is included in the PDF report.
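Per-lesion volume can be computed by counting the voxels of each lesion label and multiplying by the voxel volume. This is a minimal sketch under the assumption of a labeled lesion map (as in the lesion-labels output, with 0 as background); the function name and default voxel volume are illustrative.

```python
import numpy as np

def lesion_volumes(labels: np.ndarray, voxel_volume_mm3: float = 1.0) -> dict:
    """Per-lesion volume in mm^3 from a labeled lesion map (0 = background).

    At 1 mm isotropic resolution each voxel is 1 mm^3, so the volume of a
    lesion is simply its voxel count times voxel_volume_mm3.
    """
    ids, counts = np.unique(labels, return_counts=True)
    return {int(i): float(c * voxel_volume_mm3)
            for i, c in zip(ids, counts) if i != 0}

# Toy label map with two lesions of two voxels each.
labels = np.array([[0, 1, 1],
                   [2, 2, 0]])
print(lesion_volumes(labels))  # {1: 2.0, 2: 2.0}
```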
- FLAIR: T2 FLAIR image. Must be labeled with the 'T2' modality and the 'flair' tag.
Minimum input requirements
- Whole-brain FOV is recommended.
- A 3D acquisition is highly recommended. The preferred resolution is 1 mm isotropic in all cases.
Isotropic resampling to 1mm (checkbox). Controls whether isotropic resampling to 1 mm is executed. Since the model was trained on images at this resolution, the inputs passed to the model must be at 1 mm isotropic. This option should therefore be unchecked only when the input image already has 1 mm isotropic resolution.
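Resampling to 1 mm isotropic amounts to scaling each axis by its current voxel spacing. The sketch below uses nearest-neighbour interpolation for simplicity; the workflow's actual interpolator is not specified here, and the function name is an assumption.

```python
import numpy as np

def resample_isotropic_1mm(volume: np.ndarray, spacing_mm) -> np.ndarray:
    """Nearest-neighbour resampling to 1 mm isotropic voxels.

    spacing_mm: current voxel spacing per axis, e.g. (2.0, 2.0, 1.0).
    Each axis is stretched by its spacing so the output grid is 1 mm.
    """
    new_shape = [int(round(n * s)) for n, s in zip(volume.shape, spacing_mm)]
    # For each output coordinate, pick the nearest source voxel.
    idx = [np.minimum((np.arange(m) / s).astype(int), n - 1)
           for m, n, s in zip(new_shape, volume.shape, spacing_mm)]
    return volume[np.ix_(*idx)]

# A 10x10x10 volume with 2x2x1 mm voxels becomes 20x20x10 at 1 mm isotropic.
vol = np.zeros((10, 10, 10))
out = resample_isotropic_1mm(vol, spacing_mm=(2.0, 2.0, 1.0))
print(out.shape)  # (20, 20, 10)
```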
Skull stripping (checkbox). Controls whether the skull-stripping step should be run. The segmentation network was trained exclusively on skull-stripped data, so input images that still include the skull would introduce confounding information that might worsen the results provided by the tool.
Denoising (checkbox). Controls whether the image is denoised using curvature-driven flow. The segmentation network was trained on data denoised with this method, so skipping this step might worsen the results provided by the tool.
Output container files
- report.pdf: PDF report.
- NIfTI files:
- T2 FLAIR t2_flair.nii.gz: Pre-processed T2-FLAIR used to perform inference on the DL model.
- LESION MASK lesion_mask.nii.gz: Segmentation mask, with non-lesion tissue and background (0) and lesion (1).
- LESION LABELS lesion_labels.nii.gz: Labeled lesions, in which each unique lesion is assigned a unique identifier (number).
- online_summary_report.html: Online report, visible in the second tab of "Show results".
- hist_volumes_lesions.png: Histogram of the lesions' volumes.
- T2_overlay_lesions.png: T2 slices with lesions overlaid.
- T2_sclices.png: T2 slices.
- VOLUMES detail_lesions_info.csv: CSV with the volume quantification of the different lesions.
- VOLUMES global_lesions_info.csv: CSV with the total number of lesions and the total lesion volume.
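The lesion-labels output can be understood as a connected-component labeling of the binary lesion mask: each connected group of lesion voxels receives its own identifier. The sketch below uses face (6-) connectivity, which is an assumption; the workflow's actual connectivity rule may differ.

```python
import numpy as np
from collections import deque

def label_lesions(mask: np.ndarray):
    """Label face-connected components of a binary lesion mask.

    Returns (labels, n_lesions); background voxels keep label 0.
    Works for 2-D or 3-D masks via a breadth-first flood fill.
    """
    labels = np.zeros(mask.shape, dtype=np.int32)
    # Face-neighbour offsets, e.g. (+/-1, 0, 0) etc. in 3-D.
    offsets = []
    for axis in range(mask.ndim):
        for d in (-1, 1):
            off = [0] * mask.ndim
            off[axis] = d
            offsets.append(tuple(off))
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue  # already part of a labeled lesion
        current += 1
        labels[start] = current
        queue = deque([start])
        while queue:
            voxel = queue.popleft()
            for off in offsets:
                nb = tuple(v + o for v, o in zip(voxel, off))
                if all(0 <= c < s for c, s in zip(nb, mask.shape)) \
                        and mask[nb] and not labels[nb]:
                    labels[nb] = current
                    queue.append(nb)
    return labels, current

# Toy mask with two separate lesions.
mask = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 1]])
lesion_labels, n_lesions = label_lesions(mask)
print(n_lesions)  # 2
```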
- ANTs registration: Avants et al. 2008
- ROBEX skull-stripping: Iglesias et al. 2011
- CNN architecture: Bui, T. D. et al. 2019
- Loss function for training: Hashemi et al. 2017