The Beauty of Seeing Beyond the Visible: Automating the DCE-MRI Analysis of Brain Tumors


Magnetic resonance imaging (MRI) plays a key role in modern cancer care because it allows us to non-invasively diagnose a patient, determine the cancer stage, monitor the treatment, assess and quantify its results, and understand its potential side effects in virtually all organs. MRI may be exploited to better understand both the structural and functional characteristics of the tissue. Such detailed and clinically relevant analysis of an imaged tumor can help design more personalized treatment pathways, and ultimately lead to better patient care. Additionally, MRI does not use damaging ionizing radiation, and may be utilized to acquire images in different planes and orientations. MRI (with and without contrast) is the investigative tool of choice for neurological cancers – for brain tumors, we commonly acquire multi-modal MRI, including T1-weighted (contrast and non-contrast), T2-weighted, and Fluid Attenuation Inversion Recovery (FLAIR) sequences, alongside diffusion and perfusion images [1].

Why automate the analysis process?

Detection and segmentation of brain lesions from MRI are critical steps in analyzing an input MRI study, as they significantly influence the subsequent steps of the lesion analysis, e.g., extracting its quantifiable characteristics, such as bidimensional or volumetric measurements. Incorrect delineation may easily lead to improper interpretation of the captured scan and adversely affect the treatment pathway [2]. The tremendous amount of MRI data generated every day drives the development of machine learning systems for brain lesion segmentation. However, such data is extremely imbalanced (only a small minority of all pixels, or voxels in 3D, present tumorous tissue), very large, and heterogeneous. This heterogeneity may stem not only from the different scanners and/or protocols utilized to capture MRI, but also from the various tumor characteristics and intrinsic features captured in the image. Therefore, manual delineation of MRI scans is time-consuming and tedious, and generating high-quality manually segmented brain lesions (that could be exploited for supervised training) is challenging in practice. Also, there may be discrepancies and disagreements between readers delineating the very same scan (or even between two segmentations provided by the same reader at different time points), and – in general – the quality of manual segmentations varies significantly [3]. This questionable quality of manually generated ground-truth information is an important problem in the case of supervised methods, because it directly affects the quality of a trained model.

Fully automated medical-image segmentation pipelines, e.g., exploiting image analysis and machine learning, are thus of great interest, as they can accelerate diagnosis, ensure reproducibility, and make comparisons much easier (e.g., comparing the manual delineations performed by two readers at two different oncology centers alongside the extracted brain-tumor numerical features is extremely challenging if they did not follow the same segmentation protocol). Also, we aim to decrease the overall analysis time. Finally, we want to reduce false negatives – the tumorous pixels/voxels incorrectly classified as healthy by a machine learning model.

The beauty of seeing in four dimensions

Dynamic contrast-enhanced MRI (DCE-MRI) captures the voxel intensity changes within a volume of interest (e.g., a brain tumor) once the contrast bolus is injected, and allows for quantifying the dynamic processes within a tissue based on such temporal changes of voxel intensities (hence, our fourth dimension is time). Biomarkers extracted from DCE-MRI can be used in patient prognosis, risk assessment, and quantification of tumor characteristics and stage [4]. In DCE-MRI, a contrast agent flows through the vascular system to the analyzed volume, which is reflected in the observed signal. The analysis of such imaging involves manual (or semi-automated) delineation of a volume of interest, alongside the vascular input region that is used to calculate the vascular input function (VIF). Although it is possible to determine the perfusion parameters without calculating the contrast concentration in the VIF, doing so may result in inaccurate perfusion parameters.
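Each voxel's time-intensity curve is the raw material for everything downstream. As a minimal, simplified sketch (not the actual pipeline), relative enhancement, (S(t) − S₀)/S₀, is a common proxy for the contrast-related signal change; the function name and the baseline-frame count below are illustrative assumptions:

```python
import numpy as np

def relative_enhancement(signal, n_baseline=5):
    """Convert a voxel's DCE-MRI time-intensity curve into relative
    enhancement, a simple proxy for contrast-agent uptake.

    signal: 1D array of voxel intensities over time (one value per frame).
    n_baseline: number of pre-contrast frames used to estimate S0.
    """
    signal = np.asarray(signal, dtype=float)
    s0 = signal[:n_baseline].mean()   # pre-contrast baseline signal S0
    return (signal - s0) / s0         # (S(t) - S0) / S0

# toy voxel curve: flat baseline, then contrast wash-in and wash-out
curve = [100, 100, 100, 100, 100, 150, 180, 170, 160, 150]
enh = relative_enhancement(curve)
print(enh[:5])   # baseline frames -> ~0 enhancement
print(enh[6])    # peak frame -> 0.8
```

In practice, converting signal to an absolute contrast concentration additionally requires the sequence parameters and a pre-contrast T1 estimate; the sketch above only illustrates the temporal nature of the data.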

Do we have to suffer from all drawbacks of the manual analysis though?


Figure 1: Automated Dynamic Contrast-Enhanced Magnetic Resonance Imaging analysis – Superior sagittal sinus (SSS) segmentation (T1 VIBE): a) Original T1 VIBE image, b) candidate SSS regions, c) 3D rendering of SSS after pruning false candidates, d) 3D rendering of SSS segmented using our approach; Brain tumor detection & segmentation (T2): e) Original T2 image, f) Original T2 image with our segmentation – the green area represents agreement with a reader, the red area represents false negatives, while the blue area represents false positives (DICE=0.954). Generation of parameter maps, curves, and DCE biomarker extraction: g) Values of Ktrans and h) ve are the results of pixelwise fits of the Tofts model, superimposed onto the original image – the orange contour represents the tumor, the pink one – the superior sagittal sinus (SSS). i) Orange points represent the average contrast concentration in the area of the tumor, and the orange curve represents the fit of the model. j) Pink points represent the average contrast concentration within the SSS, and the pink curve is the fit of the arterial input function. The pharmacokinetic parameters obtained from the fits are Ktrans = 0.0135 [min-1] and ve = 0.0084. Note that the SSS segmentation and brain tumor detection & segmentation steps are executed in parallel to speed up processing (we rendered the step which exploits deep learning & GPU processing in orange).

To accelerate the analysis process, and to make it fully reproducible, we proposed an automated deep learning-powered system for extracting biomarkers from DCE-MRI [5] – for each pixel (voxel), we analyze its temporal contrast-flow characteristics. Additionally, we generate the parametric maps that visualize the perfusion parameters in the compartment model (see the example artifacts gathered in Figure 1). Here, we extracted Ktrans and ve, being the influx volume transfer constant (or the permeability surface area product per unit volume of tissue between plasma and the extravascular extracellular space, EES), and the volume of EES per unit volume of tissue, respectively. Two algorithms were proposed in the system – an algorithm for determining the vascular input region which benefits from fast image-processing techniques (in both 2D and 3D), alongside an algorithm for segmenting brain tumors (from one or more MR sequences). In the latter case, we exploited U-Net-based architectures, in which an ensemble of such models was trained over non-overlapping training sets. Also, we introduced a new VIF model, being an extension of the linear model – it was used in the pharmacokinetic modeling. All the algorithms were thoroughly verified over benchmark and clinical data (for brain-tumor segmentation). The multi-faceted experimental analysis included a mean opinion score experiment (for more details on mean opinion score experiments, see the blog post by Marek Pitura) – 12 experienced readers (medical physicists and radiologists, with 3 to 11 years of experience) quantitatively evaluated the segmentations, both the ground truth and those generated using the proposed technique.
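To give a flavor of the pharmacokinetic-modeling step, below is a minimal sketch of fitting the standard Tofts model to a synthetic tissue curve with SciPy. The mono-exponential VIF and all numerical values here are toy assumptions for demonstration – not the VIF model proposed in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 5, 60)            # acquisition times [min]
cp = 5.0 * np.exp(-1.2 * t)          # toy mono-exponential VIF (assumption)

def tofts(t, ktrans, ve):
    """Standard Tofts model:
    Ct(t) = Ktrans * integral_0^t Cp(tau) * exp(-kep * (t - tau)) dtau,
    with kep = Ktrans / ve, evaluated via a discrete convolution."""
    kep = ktrans / ve
    dt = t[1] - t[0]
    irf = np.exp(-kep * t)           # impulse response of the tissue
    return ktrans * np.convolve(cp, irf)[: len(t)] * dt

# simulate a noisy tissue curve and recover the parameters
true_ktrans, true_ve = 0.08, 0.2
rng = np.random.default_rng(0)
ct = tofts(t, true_ktrans, true_ve) + rng.normal(0, 0.002, t.size)
(ktrans_fit, ve_fit), _ = curve_fit(
    tofts, t, ct, p0=(0.05, 0.1), bounds=(1e-4, 2))
print(ktrans_fit, ve_fit)            # close to the true 0.08 and 0.2
```

In the real system, the fit is run pixelwise over the tumor region to produce the Ktrans and ve parametric maps shown in Figure 1.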

Figure 2: Inscribed (left) and circumscribed (right) spheres approximating the manual brain-tumor segmentation.
Also, we verified the influence of the segmentation accuracy on the extracted perfusion parameters of the analyzed tumor. Here, an obvious question may pop up: does segmentation accuracy affect the extraction of DCE-MRI biomarkers? To verify this impact, we simulated inscribed (left panel in Figure 2) and circumscribed (right panel in Figure 2) spheres which approximate the manual segmentation, and statistically compared the Ktrans and ve values. Note that the volume of the corresponding spheres is significantly different from the volume of the annotated brain tumor (we report the volume ratios above the spheres). Interestingly, the Bland-Altman analysis showed a high level of agreement between the perfusion parameters (Ktrans & ve) obtained for the ground truth (GT) and for the circumscribed and inscribed spheres approximating it (Figure 3).
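Bland-Altman agreement between two paired series of measurements boils down to the mean difference (bias) and the 95% limits of agreement, bias ± 1.96·SD. A small illustrative sketch (all Ktrans values below are made up for demonstration):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman statistics for two paired measurement series:
    returns the mean difference (bias) and the 95% limits of agreement."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)            # sample standard deviation of differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# toy example: Ktrans from GT masks vs. sphere approximations of them
ktrans_gt     = [0.013, 0.021, 0.034, 0.017, 0.026]
ktrans_sphere = [0.014, 0.020, 0.033, 0.018, 0.027]
bias, (lo, hi) = bland_altman(ktrans_gt, ktrans_sphere)
print(bias, lo, hi)
```

A bias close to zero with narrow limits of agreement (relative to the parameter range) indicates that the two segmentations yield practically interchangeable biomarker values.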

The experimental results were fundamental in the clinical validation report of our product, Sens.AI. A clinical validation report is a document that is pivotal in the process of applying for the CE mark for medical devices. The component for detecting and segmenting brain tumors from MRI (FLAIR) has been CE-certified and is now a medical device (class IIa, no: TNP/MDD/0308/4651/2020). Such thorough and evidence-based verification and validation is always key in medical image analysis – we need to precisely understand what works and what does not (and why).

Concluding remarks

Dynamic contrast-enhanced magnetic resonance imaging plays an important role in the diagnosis and grading of brain tumors. Quantitative DCE biomarkers provide information on tumor prognosis & characteristics. However, their manual extraction is time-consuming and prone to human errors. In this study, we introduced an automated approach to extract DCE-MRI biomarkers from brain-tumor MRI without user intervention, and verified all of its critical components over benchmark and clinical data. For vascular input region determination, we used 3D (connected-component) analysis of thresholded T1 sequences, whereas brain tumors are detected and segmented using a voting ensemble of U-Net-based deep neural networks. Piecewise continuous regression was applied to estimate the bolus-arrival time, and a new cubic model was employed for pharmacokinetic modeling. As a result, we fully automated the end-to-end DCE-MRI analysis process (which is faster than pouring a cup of coffee, as it takes less than 3 minutes for an input MRI scan) and made it independent of human errors and bias. Can the process be further enhanced? Sure it can – how about extracting radiomic features from the original MRI scan, and perhaps from the parametric maps, to get the full picture?
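The thresholding plus 3D connected-component idea behind the vascular-input-region step can be sketched as follows. This is a simplified stand-in for the actual algorithm (which additionally prunes false candidates, as in Figure 1); the function name, threshold, and blob sizes are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def largest_bright_component(volume, threshold):
    """Threshold a T1 volume, label its 3D connected components,
    and keep only the largest bright component as the candidate
    vascular input region."""
    mask = volume > threshold
    labels, n = ndimage.label(mask)  # default 6-connectivity in 3D
    if n == 0:
        return np.zeros_like(mask)
    # voxel count of each labeled component (labels are 1..n)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

# toy 3D volume with two bright blobs of different sizes
vol = np.zeros((10, 10, 10))
vol[2:5, 2:5, 2:5] = 100   # 27-voxel blob
vol[7:9, 7:9, 7:9] = 100   # 8-voxel blob
comp = largest_bright_component(vol, 50)
print(comp.sum())          # 27 -> only the larger blob survives
```

Connected-component labeling is cheap compared to the deep-learning segmentation branch, which is one reason the two steps can run in parallel, as noted in Figure 1.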

Isn’t it beautiful to focus on important clinical decisions and to leave things that can be automated to the machine? Let’s see beyond the visible for the better future of brain tumor patients.


Figure 3: Bland-Altman analysis showed a high level of agreement between the perfusion parameters (Ktrans & ve) obtained for the GT and for the circumscribed and inscribed spheres approximating it.


[1] J. E. Villanueva-Meyer, M. C. Mabray, and S. Cha, Current Clinical Brain Tumor Imaging, Neurosurgery, vol. 81, no. 3, pp. 397–415, 2017.

[2] R. Meier, U. Knecht, T. Loosli, S. Bauer, J. Slotboom, R. Wiest, and M. Reyes, Clinical evaluation of a fully-automatic segmentation method for longitudinal brain tumor volumetry, Scientific Reports, vol. 6, no. 1, pp. 23376, 2016.

[3] M. Visser, D. Müller, R. van Duijn, M. Smits, N. Verburg, E. Hendriks, R. Nabuurs, J. Bot, R. Eijgelaar, M. Witte, M. van Herk, F. Barkhof, P. de Witt Hamer, and J. de Munck, Inter-rater agreement in glioma segmentations on longitudinal MRI, NeuroImage: Clinical, vol. 22, pp. 101727, 2019.

[4] C. Cuenod and D. Balvay, Perfusion and vascular permeability: Basic concepts and measurement in DCE-CT and DCE-MRI, Diagnostic and Interventional Imaging, vol. 94, no. 12, pp. 1187–1204, 2013.

[5] J. Nalepa, P. Ribalta Lorenzo, M. Marcinkiewicz, B. Bobek-Billewicz, P. Wawrzyniak, M. Walczak, M. Kawulok, W. Dudzik, K. Kotowski, I. Burda, B. Machura, G. Mrukwa, P. Ulrych, M. P. Hayball, Fully-automated deep learning-powered system for DCE-MRI analysis of brain tumors, Artificial Intelligence in Medicine, vol. 102, pp. 101769, 2020.

Contact us if you have any questions!