<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Graylight Imaging</title>
	<atom:link href="https://graylight-imaging.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://graylight-imaging.com/</link>
	<description>Medical Imaging Software</description>
	<lastBuildDate>Tue, 17 Mar 2026 15:13:23 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>AutoRECIST project – progress on the groundbreaking software</title>
		<link>https://graylight-imaging.com/blog/autorecist-project-progress-on-the-groundbreaking-software-project/</link>
					<comments>https://graylight-imaging.com/blog/autorecist-project-progress-on-the-groundbreaking-software-project/#respond</comments>
		
		<dc:creator><![CDATA[Agnieszka Klich-Dubik]]></dc:creator>
		<pubDate>Tue, 17 Mar 2026 14:05:57 +0000</pubDate>
				<category><![CDATA[AI in Healthcare]]></category>
		<category><![CDATA[All]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Custom algorithm development]]></category>
		<category><![CDATA[Medical image analysis]]></category>
		<category><![CDATA[Machine learning in healthcare]]></category>
		<category><![CDATA[Medical AI]]></category>
		<category><![CDATA[Medical algorithms]]></category>
		<category><![CDATA[R&D in medical field]]></category>
		<guid isPermaLink="false">https://origin.graylight-imaging.com/?p=285190</guid>

					<description><![CDATA[<p>AutoRECIST is an innovative AI-based software solution that automates the assessment of treatment response in metastatic breast cancer in accordance with RECIST. </p>
<p>The post <a href="https://graylight-imaging.com/blog/autorecist-project-progress-on-the-groundbreaking-software-project/">AutoRECIST project – progress on the groundbreaking software</a> appeared first on <a href="https://graylight-imaging.com">Graylight Imaging</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="et-l et-l--post">
			<div class="et_builder_inner_content et_pb_gutters3">
		
<div class="et_pb_section et_pb_section_0 et_section_regular et_section_transparent" >
				
				
				
				
				
				
				<div class="et_pb_row et_pb_row_0">
				<div class="et_pb_column et_pb_column_4_4 et_pb_column_0  et_pb_css_mix_blend_mode_passthrough et-last-child">
				
				
				
				
				<div class="et_pb_module et_pb_text et_pb_text_0  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>We are developing the project “AutoRECIST – software to assist a radiologist in assessing the effectiveness of oncological treatment for female patients with breast cancer tumours metastatic to the lungs, liver, brain and lymph nodes in the RECIST system”.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_1  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h2>The aim of the AutoRECIST project</h2></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_2  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>The AutoRECIST project aims to develop advanced medical software to significantly improve the assessment of treatment effectiveness in patients with metastatic breast cancer. The system will assist radiologists in analysing changes in the lungs, liver, brain, and lymph nodes in accordance with international RECIST standards. The core value of the solution will be the sophisticated use of artificial intelligence algorithms. These will allow the programme to automatically identify, segment, and analyse tumours visible on CT and MRI images. It will perform measurements in line with RECIST guidelines and calculate the volume of each lesion. A crucial feature of the project is the automatic tumour-tracking function during successive examinations of the same patient, enabling the monitoring of disease progression or regression over time. Currently, there is no similar tool available on the European or Polish market, highlighting the project&#8217;s unique nature.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_3  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h3>What are the primary tasks of the project?</h3></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_4  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>The project encompasses the following activities:</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_5  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><ul>
<li>Selection of suitable scans from the resources of the Institute of Oncology in Gliwice,</li>
<li>Preparation of manual contours of cancerous lesions,</li>
<li>Development and testing of artificial intelligence algorithms for the detection, segmentation, and analysis of metastatic tumours,</li>
<li>Creation of a user interface tailored to clinical work,</li>
<li>Validation of the final solution in real medical practice.</li>
</ul></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_6  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>All algorithms developed in the project focus on the segmentation of metastases originating from breast cancer.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_7  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h4>Current status of the project &#8211; what has already been completed</h4></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_8  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>The AutoRECIST project is entering its final phase. The team of specialists working on the project has already completed work on algorithms responsible for detecting metastatic lesions in the brain and lungs, and their operation has been successfully verified. Progress is also evident in liver metastasis detection: the algorithm has been developed and is currently undergoing detailed verification. Development of the algorithm for diagnosing metastases to the lymph nodes is still ongoing. The pace of progress is high: a significant part of the project activities has already been completed, and the remaining elements are being systematically finalised. As a result, the AutoRECIST project is steadily approaching the full achievement of its goals.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_9  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h5>Added value, or the reason behind the AutoRECIST project</h5></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_10  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>The outcome of the AutoRECIST project, namely the development of the medical device described above and its subsequent incorporation into clinical practice, will positively influence radiologists&#8217; work and, consequently, the treatment of patients with breast cancer. The software will enhance the accuracy of assessing oncological treatment effectiveness using the RECIST system, ensure greater objectivity, and significantly accelerate the diagnostic process. This will enable radiologists to monitor therapy responses more swiftly and precisely, directly improving the quality of clinical decisions. The consistent efforts of the Graylight Imaging team bring us closer to a tool that could become a vital element of oncological diagnostics.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_11  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h6>Co-financing from European funds</h6></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_12  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>The project is being implemented thanks to co-financing from the European Union. Funding was provided by the Medical Research Agency. Financial support is provided under the National Recovery and Resilience Plan (NRRP) and the EU&#8217;s NextGenerationEU (NGEU) instrument. We are undertaking this project in collaboration with the National Institute of Oncology – Maria Skłodowska-Curie Memorial – State Research Institute in Gliwice.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_13  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>References:</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_14  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p><a href="https://graylight-imaging.com/blog/autorecist-project-supported-by-medical-research-agency/">https://graylight-imaging.com/blog/autorecist-project-supported-by-medical-research-agency/</a></p></div>
			</div><div class="et_pb_module et_pb_image et_pb_image_0">
				
				
				
				
				<span class="et_pb_image_wrap "><img fetchpriority="high" decoding="async" width="2028" height="441" src="https://origin.graylight-imaging.com/wp-content/uploads/2026/03/ABM-belka.png" alt="" title="ABM belka" srcset="https://www.graylight-imaging.com/wp-content/uploads/2026/03/ABM-belka.png 2028w, https://graylight-imaging.com/wp-content/uploads/2026/03/ABM-belka-300x65.png 300w, https://graylight-imaging.com/wp-content/uploads/2026/03/ABM-belka-1024x223.png 1024w, https://www.graylight-imaging.com/wp-content/uploads/2026/03/ABM-belka-768x167.png 768w, https://www.graylight-imaging.com/wp-content/uploads/2026/03/ABM-belka-1536x334.png 1536w, https://www.graylight-imaging.com/wp-content/uploads/2026/03/ABM-belka-120x26.png 120w, https://graylight-imaging.com/wp-content/uploads/2026/03/ABM-belka-1920x418.png 1920w, https://graylight-imaging.com/wp-content/uploads/2026/03/ABM-belka-1080x235.png 1080w" sizes="(max-width: 2028px) 100vw, 2028px" class="wp-image-285181" /></span>
			</div>
			</div>
				
				
				
				
			</div>
				
				
			</div>

		</div>
	</div>
	<p>The post <a href="https://graylight-imaging.com/blog/autorecist-project-progress-on-the-groundbreaking-software-project/">AutoRECIST project – progress on the groundbreaking software</a> appeared first on <a href="https://graylight-imaging.com">Graylight Imaging</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://graylight-imaging.com/blog/autorecist-project-progress-on-the-groundbreaking-software-project/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What Are the Stages of Medical Software Development?</title>
		<link>https://graylight-imaging.com/blog/what-are-the-stages-of-medical-software-development/</link>
		
		<dc:creator><![CDATA[Norbert Podgorski]]></dc:creator>
		<pubDate>Mon, 02 Mar 2026 14:26:13 +0000</pubDate>
				<category><![CDATA[All]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Custom algorithm development]]></category>
		<category><![CDATA[Medical software development]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Custom medical software development]]></category>
		<category><![CDATA[Medical image segmentation]]></category>
		<guid isPermaLink="false">https://origin.graylight-imaging.com/?p=285136</guid>

					<description><![CDATA[<p>Medical software development follows a structured life cycle, which includes planning, requirements analysis, design, implementation, verification, validation, and ongoing maintenance. </p>
<p>The post <a href="https://graylight-imaging.com/blog/what-are-the-stages-of-medical-software-development/">What Are the Stages of Medical Software Development?</a> appeared first on <a href="https://graylight-imaging.com">Graylight Imaging</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="et-l et-l--post">
			<div class="et_builder_inner_content et_pb_gutters3">
		
<div class="et_pb_section et_pb_section_1 et_section_regular et_section_transparent" >
				
				
				
				
				
				
				<div class="et_pb_row et_pb_row_1">
				<div class="et_pb_column et_pb_column_4_4 et_pb_column_1  et_pb_css_mix_blend_mode_passthrough et-last-child">
				
				
				
				
				<div class="et_pb_module et_pb_text et_pb_text_15  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>The <a href="https://graylight-imaging.com/services/medical-software-development/">development of medical software</a> involves several fundamental stages. According to the IEC 62304 standard, the software life cycle includes planning, requirements analysis, design, implementation, verification, validation, and system maintenance. From a developer&#8217;s perspective, the implementation phase can be further divided into preprocessing, neural network implementation, and postprocessing.</p>
<p>In addition, the standard defines processes for risk management, configuration management, and problem resolution. This means that medical software development is not limited to just coding and testing. Every stage must be thoroughly documented and aligned with safety and regulatory requirements.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_16  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h2>Requirements Analysis and System Architecture Design</h2></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_17  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>In medical software development, requirements analysis should first clearly define the purpose of the system. In the case of medical image segmentation, this means specifying which diagnostic problem the system is intended to solve. Examples include automatic segmentation of tumors in MRI scans (as shown in Figure 1) or delineation of anatomical structures in CT images.</p>
<p>It is also important to define the expected level of accuracy and acceptable error tolerance. Furthermore, the requirements must consider how the results will be presented to the system’s end users, such as clinicians. This ensures that subsequent design decisions are aligned with the system’s real clinical application.</p></div>
			</div><div class="et_pb_module et_pb_image et_pb_image_1">
				
				
				
				
				<span class="et_pb_image_wrap "><img decoding="async" src="https://www.graylight-imaging.com/wp-content/uploads/2026/03/Examples-of-segmentation-results.png" alt="" title="Examples of segmentation results" /></span>
			</div><div class="et_pb_module et_pb_text et_pb_text_18  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p style="text-align: center;"><em>Figure 1. Examples of segmentation results for meningioma, glioma, and pituitary tumors, respectively. Figure sourced from [1].</em></p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_19  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h3>Preprocessing</h3></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_20  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Before training a model, medical data must be properly prepared. Preprocessing includes operations that enhance data quality and standardize input, making it suitable for subsequent stages. A common operation in semantic segmentation preprocessing is z-score normalization. This involves subtracting the mean intensity of the image voxels and dividing by the standard deviation, transforming the data to have a mean of 0 and a standard deviation of 1. This approach reduces the impact of differences between training datasets and facilitates model generalization to data from new sources.</p>
<p>Another key component of preprocessing is data augmentation, which increases the diversity of the training set. In medical segmentation, common augmentations include geometric transformations such as rotations, scaling, and flipping. Intensity augmentations are also frequently applied, including contrast adjustment, brightness modification, and noise simulation.</p></div>
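As a concrete illustration, the z-score normalization described above takes only a few lines of NumPy. This is a minimal sketch, not code from any particular pipeline; the optional foreground mask is an assumption, reflecting the common practice of computing statistics only over body or brain voxels rather than empty background:

```python
import numpy as np

def zscore_normalize(volume, mask=None):
    """Z-score normalization: shift a scan to zero mean and unit
    standard deviation. If a foreground mask is given, the mean and
    standard deviation are computed only inside it."""
    voxels = volume[mask] if mask is not None else volume
    mean, std = voxels.mean(), voxels.std()
    return (volume - mean) / max(std, 1e-8)  # guard against a constant image

# Toy 3D "scan" with arbitrary intensity statistics.
rng = np.random.default_rng(0)
scan = rng.normal(loc=300.0, scale=40.0, size=(8, 64, 64))
normed = zscore_normalize(scan)  # mean is now ~0, std ~1
```

Geometric and intensity augmentations are usually delegated to dedicated libraries rather than written by hand, but the normalization step itself is often this simple.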
			</div><div class="et_pb_module et_pb_image et_pb_image_2">
				
				
				
				
				<span class="et_pb_image_wrap "><img decoding="async" src="https://www.graylight-imaging.com/wp-content/uploads/2026/03/Examples-of-augmented-medical-images.png" alt="" title="Examples of augmented medical images" /></span>
			</div><div class="et_pb_module et_pb_text et_pb_text_21  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p style="text-align: center;"><em>Figure 2. Examples of augmented medical images. Figure sourced from [2].</em></p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_22  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h4>Neural Network Implementation and Training</h4></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_23  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>During the implementation phase, the architecture of the neural network is coded and the model training procedure is defined. In practice, libraries such as PyTorch or TensorFlow are used, providing flexible tools to build deep learning models. Increasingly, architectures are not designed from scratch; instead, predefined and validated frameworks are applied. A good example is nnU-Net [3], a framework specifically developed for medical image segmentation. nnU-Net automatically adapts the network architecture, training parameters, and preprocessing steps to a given dataset. This reduces the need for manual hyperparameter tuning and lowers the risk of implementation errors. A competitive alternative to nnU-Net is U-Mamba [4], which has been described in one of our previous posts: <a href="https://graylight-imaging.com/blog/u-mamba-rising/">Mamba Rising: Are State Space Models like U-Mamba Going to Replace Ordinary U-Net?</a></p>
<p>In medical software development, neural network training often relies on cloud infrastructure, such as AWS, to meet high computational and memory demands. The cloud allows flexible allocation of GPU resources on demand, scaling resources according to dataset size, and parallel training of multiple models. This makes the training process faster and more reproducible.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_24  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h5>Postprocessing</h5></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_25  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>After obtaining the raw outputs from a segmentation model, postprocessing is performed to improve the quality and clinical reliability of the masks. Models such as nnU-Net typically return class probability maps for each voxel in float format (e.g., 0–1), where the value indicates the likelihood of belonging to the target class. To obtain the final binary mask, these maps are subjected to thresholding, which converts the probability map into a discrete mask. A common threshold is 0.5, meaning a voxel is assigned to the positive class if its probability exceeds this value. In practice, the threshold can be optimized for a specific task.</p>
<p>After thresholding, morphological spatial operations are often applied to improve the continuity of anatomical structures and remove artifacts. Examples include removing isolated small regions and filling holes within larger segments. Operations such as opening and closing help smooth edges and stabilize the masks.</p>
<p>More advanced approaches use unsupervised postprocessing networks, such as autoencoder models trained to reconstruct anatomically plausible masks from raw predictions. These methods learn the space of valid masks and project the segmentations back into that space, improving both visual quality and anatomical consistency. An example of such a method is described in [5].</p></div>
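The thresholding and morphological clean-up steps described above can be sketched with SciPy as follows. The 0.5 threshold and the minimum component size are illustrative defaults, not values from any specific product; because `scipy.ndimage` operations are n-dimensional, the same code works for 2D slices and 3D volumes:

```python
import numpy as np
from scipy import ndimage

def postprocess(prob_map, threshold=0.5, min_voxels=10):
    """Turn a soft probability map into a cleaned binary mask:
    threshold, drop isolated small components, fill holes."""
    mask = prob_map > threshold                  # probability -> binary
    labeled, n = ndimage.label(mask)             # connected components
    for i in range(1, n + 1):
        component = labeled == i
        if component.sum() < min_voxels:         # remove small artifacts
            mask[component] = False
    return ndimage.binary_fill_holes(mask)       # fill holes in segments

# Toy probability map: one large blob with an interior hole,
# plus an isolated one-voxel speck.
prob = np.zeros((20, 20))
prob[5:15, 5:15] = 0.9   # blob
prob[9, 9] = 0.1         # hole inside the blob
prob[0, 0] = 0.95        # isolated speck
clean = postprocess(prob)  # speck removed, hole filled
```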
			</div><div class="et_pb_module et_pb_image et_pb_image_3">
				
				
				
				
				<span class="et_pb_image_wrap "><img decoding="async" src="https://www.graylight-imaging.com/wp-content/uploads/2026/03/Example-of-using-an-autoencoder-network-in-postprocessing.png" alt="" title="Example of using an autoencoder network in postprocessing." /></span>
			</div><div class="et_pb_module et_pb_text et_pb_text_26  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p style="text-align: center;"><em>Figure 3. Example of using an autoencoder network in postprocessing. Figure sourced from [5].</em></p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_27  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h5>Deployment and System Maintenance</h5></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_28  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>The final deployment of medical software involves installing the application for end users, integrating it with existing systems, and monitoring its performance in the production environment. It is essential to provide a scalable and secure environment and implement mechanisms for automated updates. After deployment, the system requires continuous maintenance – technical support includes tracking bugs, responding to failures, and collecting user feedback. In summary, deployment is not the end of the process. The system must be continuously maintained and monitored for both effectiveness and regulatory compliance. In practice, this means constantly adapting and improving the software to meet clinical and technical requirements over the long term.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_29  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h6>Resources</h6></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_30  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p><strong>[1]</strong> Francisco Javier Díaz-Pernas, Mario Martínez-Zarzuela, Míriam Antón-Rodríguez, David González-Ortega. (2021). A Deep Learning Approach for Brain Tumor Classification and Segmentation Using a Multiscale Convolutional Neural Network.</p>
<p><strong>[2]</strong> Zhaoshan Liu, Qiujie Lv, Yifan Li, Ziduo Yang, Lei Shen. (2024). MedAugment: Universal Automatic Data Augmentation Plug-in for Medical Image Analysis.</p>
<p><strong>[3]</strong> Fabian Isensee, Jens Petersen, Andre Klein, David Zimmerer, Paul F. Jaeger, Simon Kohl, Jakob Wasserthal, Gregor Koehler, Tobias Norajitra, Sebastian Wirkert, Klaus H. Maier-Hein. (2018). nnU-Net: Self-adapting Framework for U-Net-Based Medical Image Segmentation.</p>
<p><strong>[4]</strong> Jun Ma, Feifei Li, Bo Wang. (2024). U-Mamba: Enhancing Long-range Dependency for Biomedical Image Segmentation.</p>
<p><strong>[5]</strong> Agostina J. Larrazabal, Cesar Martinez, Enzo Ferrante. (2019). Anatomical Priors for Image Segmentation via Post-Processing with Denoising Autoencoders.</p></div>
			</div>
			</div>
				
				
				
				
			</div>
				
				
			</div>

		</div>
	</div>
	<p>The post <a href="https://graylight-imaging.com/blog/what-are-the-stages-of-medical-software-development/">What Are the Stages of Medical Software Development?</a> appeared first on <a href="https://graylight-imaging.com">Graylight Imaging</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>From Voxels to Performance: Understanding Semantic Segmentation Metrics</title>
		<link>https://graylight-imaging.com/blog/from-voxels-to-performance-understanding-semantic-segmentation-metrics/</link>
		
		<dc:creator><![CDATA[Maria Bancerek]]></dc:creator>
		<pubDate>Mon, 02 Feb 2026 07:52:46 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Custom algorithm development]]></category>
		<category><![CDATA[Medical image analysis]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Machine learning in healthcare]]></category>
		<category><![CDATA[Medical AI]]></category>
		<category><![CDATA[Medical algorithms]]></category>
		<guid isPermaLink="false">https://origin.graylight-imaging.com/?p=285040</guid>

					<description><![CDATA[<p>Semantic segmentation of medical images is a key AI application in healthcare, requiring careful evaluation to ensure patient safety.</p>
<p>The post <a href="https://graylight-imaging.com/blog/from-voxels-to-performance-understanding-semantic-segmentation-metrics/">From Voxels to Performance: Understanding Semantic Segmentation Metrics</a> appeared first on <a href="https://graylight-imaging.com">Graylight Imaging</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="et-l et-l--post">
			<div class="et_builder_inner_content et_pb_gutters3">
		
<div class="et_pb_section et_pb_section_2 et_section_regular et_section_transparent" >
				
				
				
				
				
				
				<div class="et_pb_row et_pb_row_2">
				<div class="et_pb_column et_pb_column_4_4 et_pb_column_2  et_pb_css_mix_blend_mode_passthrough et-last-child">
				
				
				
				
				<div class="et_pb_module et_pb_text et_pb_text_31  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Artificial intelligence tools are becoming increasingly integrated into healthcare, with the semantic segmentation of medical images emerging as one of the most promising applications. A wide range of metrics can be used to evaluate the performance of semantic segmentation models, each highlighting different aspects of a model&#8217;s behaviour. The proper choice of complementary metrics is crucial for optimal model selection and for understanding a model&#8217;s limitations. This is especially important in the healthcare sector, where errors can directly impact patient outcomes.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_32  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h2>Confusion Matrix-Based Methods</h2></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_33  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Classical metrics derived from the confusion matrix, with its four components (true positives, TP; false positives, FP; false negatives, FN; true negatives, TN), remain applicable for evaluating semantic segmentation models. Each voxel falls into one of these four categories, and metrics such as:</p></div>
			</div><div class="et_pb_module et_pb_image et_pb_image_4">
				
				
				
				
				<span class="et_pb_image_wrap "><img loading="lazy" decoding="async" width="529" height="313" src="https://origin.graylight-imaging.com/wp-content/uploads/2026/01/equation-accuracy-precision-recall.png" alt="" title="equation - accuracy, precision, recall" srcset="https://graylight-imaging.com/wp-content/uploads/2026/01/equation-accuracy-precision-recall.png 529w, https://graylight-imaging.com/wp-content/uploads/2026/01/equation-accuracy-precision-recall-300x178.png 300w, https://www.graylight-imaging.com/wp-content/uploads/2026/01/equation-accuracy-precision-recall-120x71.png 120w" sizes="(max-width: 529px) 100vw, 529px" class="wp-image-285045" /></span>
			</div><div class="et_pb_module et_pb_text et_pb_text_34  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>can be computed. The most straightforward one, accuracy, measures the fraction of voxels that are classified correctly. In medical imaging, however, lesions or regions of interest often occupy only a small fraction of the image, making accuracy a misleading measure [1]. Precision and recall, on the other hand, focus on the foreground voxels: precision measures the fraction of predicted foreground voxels that are truly relevant, while recall measures the fraction of relevant voxels that are successfully detected.</p></div>
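For binary masks, these voxel-wise metrics reduce to a few NumPy reductions. The sketch below (illustrative code, not from any particular toolkit) also reproduces the class-imbalance effect just described: a prediction that misses half of a small lesion still achieves near-perfect accuracy:

```python
import numpy as np

def confusion_metrics(pred, truth):
    """Voxel-wise accuracy, precision and recall for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    return {
        "accuracy": (tp + tn) / pred.size,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

# A tiny lesion in a mostly empty image: the prediction finds only
# half of it, yet accuracy is dominated by the background voxels.
truth = np.zeros((100, 100)); truth[:2, :2] = 1   # 4 foreground voxels
pred  = np.zeros((100, 100)); pred[:1, :2] = 1    # detects only 2 of them
m = confusion_metrics(pred, truth)
# m["accuracy"] = 0.9998, m["precision"] = 1.0, m["recall"] = 0.5
```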
			</div><div class="et_pb_module et_pb_text et_pb_text_35  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h3>Dice Similarity Coefficient (DSC)</h3></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_36  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>One of the most commonly reported metrics for semantic segmentation is the Dice Similarity Coefficient (DSC). It can be derived from the confusion matrix as well and is defined as:</p></div>
			</div><div class="et_pb_module et_pb_image et_pb_image_5">
				
				
				
				
				<span class="et_pb_image_wrap "><img loading="lazy" decoding="async" width="729" height="117" src="https://origin.graylight-imaging.com/wp-content/uploads/2026/01/equotation-DSC.png" alt="" title="equotation - DSC" srcset="https://www.graylight-imaging.com/wp-content/uploads/2026/01/equotation-DSC.png 729w, https://www.graylight-imaging.com/wp-content/uploads/2026/01/equotation-DSC-300x48.png 300w, https://www.graylight-imaging.com/wp-content/uploads/2026/01/equotation-DSC-120x19.png 120w" sizes="(max-width: 729px) 100vw, 729px" class="wp-image-285050" /></span>
			</div><div class="et_pb_module et_pb_text et_pb_text_37  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>The DSC measures the overlap between the predicted and ground truth voxels. It accounts for both under- and over-segmentation errors (Fig. 1) and can be expressed as the harmonic mean of precision and recall, maximized when both are high:</p></div>
			</div><div class="et_pb_module et_pb_image et_pb_image_6">
				
				
				
				
				<span class="et_pb_image_wrap "><img loading="lazy" decoding="async" width="504" height="142" src="https://origin.graylight-imaging.com/wp-content/uploads/2026/01/equatation-DSC-2-precision-and-recall.png" alt="" title="equatation - DSC - 2 precision and recall" srcset="https://www.graylight-imaging.com/wp-content/uploads/2026/01/equatation-DSC-2-precision-and-recall.png 504w, https://graylight-imaging.com/wp-content/uploads/2026/01/equatation-DSC-2-precision-and-recall-300x85.png 300w, https://graylight-imaging.com/wp-content/uploads/2026/01/equatation-DSC-2-precision-and-recall-120x34.png 120w" sizes="(max-width: 504px) 100vw, 504px" class="wp-image-285054" /></span>
			</div><div class="et_pb_module et_pb_text et_pb_text_38  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>DSC is sensitive to the volume of the foreground, penalizing similar errors more heavily in smaller structures (Fig. 1). DSC is an example of an overlap-based metric, alongside others such as the Intersection over Union (IoU) and Volume Overlap Error (VOE).</p></div>
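A minimal DSC implementation makes this size sensitivity easy to demonstrate: the same one-row segmentation error lowers the score far more for a small structure than for a large one. This is a sketch; the convention of returning 1.0 when both masks are empty is one common choice, not the only one:

```python
import numpy as np

def dice(pred, truth):
    """Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks are a perfect (trivial) match.
    return 2.0 * np.sum(pred & truth) / denom if denom else 1.0

# Identical absolute error (one missed row) on structures of
# different sizes.
big_truth = np.zeros((64, 64)); big_truth[10:50, 10:50] = 1   # 40x40
big_pred  = np.zeros((64, 64)); big_pred[11:50, 10:50] = 1    # one row missed
small_truth = np.zeros((64, 64)); small_truth[10:14, 10:14] = 1  # 4x4
small_pred  = np.zeros((64, 64)); small_pred[11:14, 10:14] = 1   # one row missed

d_big = dice(big_pred, big_truth)      # ~0.987
d_small = dice(small_pred, small_truth)  # ~0.857
```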
			</div><div class="et_pb_module et_pb_image et_pb_image_7">
				
				
				
				
				<span class="et_pb_image_wrap "><img loading="lazy" decoding="async" width="605" height="140" src="https://origin.graylight-imaging.com/wp-content/uploads/2026/01/Fig.-1-Dice-Similarity-Coefficient-DSC-for-over-and-under-segmentation-cases-and-varying-ground-truth-size.png" alt="" title="Fig. 1 Dice Similarity Coefficient (DSC) for over- and under-segmentation cases and varying ground truth size." srcset="https://graylight-imaging.com/wp-content/uploads/2026/01/Fig.-1-Dice-Similarity-Coefficient-DSC-for-over-and-under-segmentation-cases-and-varying-ground-truth-size.png 605w, https://www.graylight-imaging.com/wp-content/uploads/2026/01/Fig.-1-Dice-Similarity-Coefficient-DSC-for-over-and-under-segmentation-cases-and-varying-ground-truth-size-300x69.png 300w, https://graylight-imaging.com/wp-content/uploads/2026/01/Fig.-1-Dice-Similarity-Coefficient-DSC-for-over-and-under-segmentation-cases-and-varying-ground-truth-size-120x28.png 120w" sizes="(max-width: 605px) 100vw, 605px" class="wp-image-285059" /></span>
			</div><div class="et_pb_module et_pb_text et_pb_text_39  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p style="text-align: center;"><em>Fig. 1: Dice Similarity Coefficient (DSC) for over- and under-segmentation cases and varying ground truth size. </em></p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_40  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h4>Voxel-level vs object-level analysis</h4></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_41  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Voxel-level metrics are intuitive and straightforward to compute. However, they can introduce dataset-specific biases: for instance, when segmenting multiple objects of varying sizes, larger structures may dominate the results [2]. In such cases, object-level metrics can allow for a fine-grained analysis of segmentation performance.</p>
<p>Per-object metrics are calculated by comparing each ground truth object with its corresponding predicted object, typically selecting the predicted object with the highest overlap when multiple predictions match the same ground truth object. These pairwise metrics can then be used to derive binary detection metrics, in which each whole object is classified as a true positive, false positive, or false negative, commonly by treating predictions that exceed a predefined overlap threshold as true positives [4].</p>
<p>As illustrated by an example from last year’s BraTS-METs challenge (Fig. 2), object-level analysis can reveal missed lesions in an otherwise accurate brain metastases segmentation (voxel-wise recall: 0.81, lesion-wise recall: 0.46). To find out more about our contribution to the challenge, see our blog post <a href="https://graylight-imaging.com/blog/brats-2025-another-challenge-success-for-the-graylight-imaging-team/">BraTS 2025 – Another Challenge Success for the Graylight Imaging Team</a>.</p></div>
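The matching procedure above can be sketched as follows (a minimal NumPy version; the 0.1 DSC threshold is an illustrative assumption, the label maps would typically come from a connected-components routine such as `scipy.ndimage.label`, and challenge-specific matching rules vary):

```python
import numpy as np

def lesion_wise_detection(pred_labels, gt_labels, dsc_threshold=0.1):
    """Given label maps (0 = background, 1..N = connected components),
    classify each ground-truth lesion as detected (TP) or missed (FN),
    and each unmatched predicted component as a false positive (FP)."""
    gt_ids = [i for i in np.unique(gt_labels) if i != 0]
    pred_ids = [i for i in np.unique(pred_labels) if i != 0]
    tp, matched = 0, set()
    for g in gt_ids:
        g_mask = gt_labels == g
        # Match to the predicted component with the highest DSC overlap.
        best_dsc, best_p = 0.0, None
        for p in pred_ids:
            p_mask = pred_labels == p
            inter = np.logical_and(g_mask, p_mask).sum()
            dsc = 2 * inter / (g_mask.sum() + p_mask.sum())
            if dsc > best_dsc:
                best_dsc, best_p = dsc, p
        if best_dsc >= dsc_threshold:
            tp += 1
            matched.add(best_p)
    fn = len(gt_ids) - tp              # ground-truth lesions never matched
    fp = len(pred_ids) - len(matched)  # predictions matching nothing
    return tp, fp, fn

# Two ground-truth lesions; the prediction finds the first and misses the second.
gt = np.array([[1, 1, 0, 0, 2],
               [1, 1, 0, 0, 0]])
pred = np.array([[1, 1, 0, 0, 0],
                 [1, 0, 0, 0, 0]])
print(lesion_wise_detection(pred, gt))  # (1, 0, 1)
```

Lesion-wise recall then follows as tp / (tp + fn), which is how an 0.81 voxel-wise recall can coexist with a much lower lesion-wise recall.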
			</div><div class="et_pb_module et_pb_image et_pb_image_8">
				
				
				
				
				<span class="et_pb_image_wrap "><img loading="lazy" decoding="async" width="605" height="474" src="https://origin.graylight-imaging.com/wp-content/uploads/2026/01/Fig.-2-Exemplary-segmentation-of-brain-metastases-using-a-model-trained-for-BraTS-METs-2025-challenge-with-corresponding-metrics-data-from-challenge-training.png" alt="" title="Fig. 2 Exemplary segmentation of brain metastases using a model trained for BraTS-METs 2025 challenge, with corresponding metrics; data from challenge training" srcset="https://graylight-imaging.com/wp-content/uploads/2026/01/Fig.-2-Exemplary-segmentation-of-brain-metastases-using-a-model-trained-for-BraTS-METs-2025-challenge-with-corresponding-metrics-data-from-challenge-training.png 605w, https://www.graylight-imaging.com/wp-content/uploads/2026/01/Fig.-2-Exemplary-segmentation-of-brain-metastases-using-a-model-trained-for-BraTS-METs-2025-challenge-with-corresponding-metrics-data-from-challenge-training-300x235.png 300w, https://www.graylight-imaging.com/wp-content/uploads/2026/01/Fig.-2-Exemplary-segmentation-of-brain-metastases-using-a-model-trained-for-BraTS-METs-2025-challenge-with-corresponding-metrics-data-from-challenge-training-120x94.png 120w" sizes="(max-width: 605px) 100vw, 605px" class="wp-image-285067" /></span>
			</div><div class="et_pb_module et_pb_text et_pb_text_42  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p style="text-align: center;"><em>Fig. 2 Exemplary segmentation of brain metastases using a model trained for BraTS-METs 2025 challenge, with corresponding metrics; data from challenge training dataset [5].</em></p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_43  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h5>Boundary-based metrics</h5></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_44  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>The metrics discussed above all depend on the number of voxels overlapping between the prediction and the ground truth, but are blind to the shape and smoothness of the segmentations. It is therefore recommended to pair overlap-based metrics with complementary boundary-based metrics [3], which focus on spatial alignment and the distances between predicted and reference object contours. Examples include the Normalized Surface Distance (the overlap of boundary voxels within an accepted distance) and the Hausdorff Distance (the maximum distance between points on the prediction and reference boundaries) [2].</p></div>
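As a sketch, the Hausdorff Distance between two sets of boundary points can be computed as follows (assuming the boundary coordinates have already been extracted; dedicated libraries handle large contours far more efficiently than this brute-force version):

```python
import numpy as np

def hausdorff_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Hausdorff Distance between point sets a, b of shape (n, dim):
    the largest distance from any point in one set to its nearest
    neighbour in the other set (the symmetric maximum)."""
    # Pairwise Euclidean distances, shape (len(a), len(b)).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(),   # farthest a-point from b
               d.min(axis=0).max())   # farthest b-point from a

# Boundary points of two toy 2D contours: one outlier point in b
# dominates the result, showing the metric's sensitivity to outliers.
a = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [0.0, 1.0], [4.0, 0.0]])
print(hausdorff_distance(a, b))  # 3.0
```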
			</div><div class="et_pb_module et_pb_text et_pb_text_45  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h6>References</h6></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_46  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>[1] Müller D, et al. Towards a guideline for evaluation metrics in medical image segmentation. BMC Res Notes. 2022;15:210.</p>
<p>[2] Reinke A, et al. Understanding metric-related pitfalls in image analysis validation. Nat Methods. 2024 Feb;21(2):182–194.</p>
<p>[3] Kocak B, et al. Evaluation metrics in medical imaging AI: fundamentals, pitfalls, misapplications, and recommendations. Eur J Radiol Artif Intell. 2025;3.</p>
<p>[4] Machura B, et al. Deep learning ensembles for detecting brain metastases in longitudinal multi-modal MRI studies. Comput Med Imaging Graph. 2024;116.</p>
<p>[5] Maleki N, et al. Analysis of the MICCAI Brain Tumor Segmentation &#8212; Metastases (BraTS-METS) 2025 Lighthouse Challenge: brain metastasis segmentation on pre- and post-treatment MRI. arXiv. 2025. doi:10.48550/arXiv.2504.12527.</p></div>
			</div>
			</div>
				
				
				
				
			</div>
				
				
			</div>

		</div>
	</div>
	<p>The post <a href="https://graylight-imaging.com/blog/from-voxels-to-performance-understanding-semantic-segmentation-metrics/">From Voxels to Performance: Understanding Semantic Segmentation Metrics</a> appeared first on <a href="https://graylight-imaging.com">Graylight Imaging</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Graylight Imaging’s relationships and shared values: preparing for the holidays</title>
		<link>https://graylight-imaging.com/blog/graylight-imagings-relationships-and-shared-values-preparing-for-the-holidays/</link>
		
		<dc:creator><![CDATA[Agnieszka Klich-Dubik]]></dc:creator>
		<pubDate>Fri, 19 Dec 2025 14:22:59 +0000</pubDate>
				<category><![CDATA[All]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Corporate]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Graylight Imaging's team]]></category>
		<guid isPermaLink="false">https://origin.graylight-imaging.com/?p=284874</guid>

					<description><![CDATA[<p>At Graylight Imaging, the pre-Christmas period is a special time. Read about our charity campaign, customer relations, and team-building.</p>
<p>The post <a href="https://graylight-imaging.com/blog/graylight-imagings-relationships-and-shared-values-preparing-for-the-holidays/">Graylight Imaging’s relationships and shared values: preparing for the holidays</a> appeared first on <a href="https://graylight-imaging.com">Graylight Imaging</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="et-l et-l--post">
			<div class="et_builder_inner_content et_pb_gutters3">
		
<div class="et_pb_section et_pb_section_3 et_section_regular et_section_transparent" >
				
				
				
				
				
				
				<div class="et_pb_row et_pb_row_3">
				<div class="et_pb_column et_pb_column_4_4 et_pb_column_3  et_pb_css_mix_blend_mode_passthrough et-last-child">
				
				
				
				
				<div class="et_pb_module et_pb_text et_pb_text_47  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>The festive season naturally encourages us to slow down, reflect, and look beyond our daily routine. It is a time for sharing, giving, and strengthening the relationships <a href="https://graylight-imaging.com/">Graylight Imaging</a> has built throughout the year. For us, it is a time to reflect and to make a positive impact on the world through small gestures.</p>
<p>In this post, we want to share our pre-Christmas initiatives with you, remembering that real value is created when people work together and build meaningful relationships beyond their professional duties.</p></div>
			</div><div class="et_pb_module et_pb_image et_pb_image_9">
				
				
				
				
				<span class="et_pb_image_wrap "><img loading="lazy" decoding="async" width="1481" height="1022" src="https://origin.graylight-imaging.com/wp-content/uploads/2025/12/Xparty-2025.png" alt="" title="Xparty 2025" srcset="https://www.graylight-imaging.com/wp-content/uploads/2025/12/Xparty-2025.png 1481w, https://graylight-imaging.com/wp-content/uploads/2025/12/Xparty-2025-300x207.png 300w, https://graylight-imaging.com/wp-content/uploads/2025/12/Xparty-2025-1024x707.png 1024w, https://www.graylight-imaging.com/wp-content/uploads/2025/12/Xparty-2025-768x530.png 768w, https://graylight-imaging.com/wp-content/uploads/2025/12/Xparty-2025-120x83.png 120w, https://www.graylight-imaging.com/wp-content/uploads/2025/12/Xparty-2025-1080x745.png 1080w" sizes="(max-width: 1481px) 100vw, 1481px" class="wp-image-284877" /></span>
			</div><div class="et_pb_module et_pb_text et_pb_text_48  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h2>Graylight Imaging&#8217;s relationships and sharing the spirit of giving</h2></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_49  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Every year, the Graylight Imaging team participates in charitable initiatives that support those most in need. This holiday season, our philanthropic activities were directed towards <a href="https://www.domydziecka.org/placowka,2564.html" target="_blank" rel="noopener">the ‘Sośnickie Słoneczka’ Family Children&#8217;s Home</a>.</p>
<p>The campaign was coordinated by Ewelina Działek and Patrycja Rewa, who led the initiative from start to finish, engaging employees, organising an internal collection, and ensuring that each gift was carefully prepared. They were supported throughout the process by Damian Sowa and Tomasz Korab.</p></div>
			</div><div class="et_pb_module et_pb_image et_pb_image_10">
				
				
				
				
				<span class="et_pb_image_wrap "><img loading="lazy" decoding="async" width="2560" height="1928" src="https://origin.graylight-imaging.com/wp-content/uploads/2025/12/charity-activities-scaled.jpg" alt="" title="charity activities" srcset="https://graylight-imaging.com/wp-content/uploads/2025/12/charity-activities-scaled.jpg 2560w, https://www.graylight-imaging.com/wp-content/uploads/2025/12/charity-activities-300x226.jpg 300w, https://graylight-imaging.com/wp-content/uploads/2025/12/charity-activities-1024x771.jpg 1024w, https://graylight-imaging.com/wp-content/uploads/2025/12/charity-activities-768x578.jpg 768w, https://graylight-imaging.com/wp-content/uploads/2025/12/charity-activities-1536x1157.jpg 1536w, https://graylight-imaging.com/wp-content/uploads/2025/12/charity-activities-2048x1542.jpg 2048w, https://graylight-imaging.com/wp-content/uploads/2025/12/charity-activities-120x90.jpg 120w, https://graylight-imaging.com/wp-content/uploads/2025/12/charity-activities-1920x1446.jpg 1920w, https://www.graylight-imaging.com/wp-content/uploads/2025/12/charity-activities-1080x813.jpg 1080w" sizes="(max-width: 2560px) 100vw, 2560px" class="wp-image-284878" /></span>
			</div><div class="et_pb_module et_pb_text et_pb_text_50  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h3>An annual initiative – now a tradition</h3></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_51  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>This annual initiative is more than just a one-off event – it reflects our shared values. It brings us closer together as a team, strengthens our cooperation and also reminds us how powerful collective work can be. We genuinely believe that even small acts of kindness can have a significant impact on the environment in which we live. By supporting local initiatives, we strive to cultivate a culture where generosity, empathy, and relationship-building are integral to our daily work.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_52  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h4>The importance of small gestures in business</h4></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_53  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Just before Christmas, we had the pleasure of welcoming a client from Norway to our office. A client visit is always a big event for us. However, this visit became even more special thanks to a nice and unexpected gift. The delicious Norwegian sweets our guests brought us put us in a festive mood and brought many smiles to the team.</p>
<p>Such gestures are significant. They remind us that cooperation is not just about projects and achieving goals. Above all, it is about building trust, relationships and mutual respect. We are grateful for this kind gesture and greatly value our cooperation. We look forward to the future with enthusiasm.</p></div>
			</div><div class="et_pb_module et_pb_image et_pb_image_11">
				
				
				
				
				<span class="et_pb_image_wrap "><img loading="lazy" decoding="async" width="2560" height="1183" src="https://origin.graylight-imaging.com/wp-content/uploads/2025/12/Norway-scaled.jpg" alt="" title="Norway" srcset="https://graylight-imaging.com/wp-content/uploads/2025/12/Norway-scaled.jpg 2560w, https://www.graylight-imaging.com/wp-content/uploads/2025/12/Norway-300x139.jpg 300w, https://graylight-imaging.com/wp-content/uploads/2025/12/Norway-1024x473.jpg 1024w, https://www.graylight-imaging.com/wp-content/uploads/2025/12/Norway-768x355.jpg 768w, https://graylight-imaging.com/wp-content/uploads/2025/12/Norway-1536x710.jpg 1536w, https://www.graylight-imaging.com/wp-content/uploads/2025/12/Norway-2048x946.jpg 2048w, https://www.graylight-imaging.com/wp-content/uploads/2025/12/Norway-120x55.jpg 120w, https://graylight-imaging.com/wp-content/uploads/2025/12/Norway-1920x887.jpg 1920w, https://graylight-imaging.com/wp-content/uploads/2025/12/Norway-1080x499.jpg 1080w" sizes="(max-width: 2560px) 100vw, 2560px" class="wp-image-284881" /></span>
			</div><div class="et_pb_module et_pb_text et_pb_text_54  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h5>Our December meeting</h5></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_55  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>We believe that preparations for Christmas would not be complete without our annual company meeting. It is the perfect moment to pause and reflect on the past year from a broader perspective. The Graylight Imaging team&#8217;s Christmas party was an opportunity for us to reflect on the intense months and spend time together in a warm, festive atmosphere — away from projects and daily responsibilities.</p>
<p>Moreover, such meetings remind us that the company and the projects we carry out are primarily about people. We want to thank everyone for their commitment, energy and atmosphere of cooperation, which allows us to develop together and look forward to new challenges with optimism.</p></div>
			</div><div class="et_pb_module et_pb_image et_pb_image_12">
				
				
				
				
				<span class="et_pb_image_wrap "><img loading="lazy" decoding="async" width="2000" height="1500" src="https://origin.graylight-imaging.com/wp-content/uploads/2025/12/Graylight-Imagings-Holiday-time.jpg" alt="" title="Graylight Imaging&#039;s Holiday time" srcset="https://www.graylight-imaging.com/wp-content/uploads/2025/12/Graylight-Imagings-Holiday-time.jpg 2000w, https://www.graylight-imaging.com/wp-content/uploads/2025/12/Graylight-Imagings-Holiday-time-300x225.jpg 300w, https://www.graylight-imaging.com/wp-content/uploads/2025/12/Graylight-Imagings-Holiday-time-1024x768.jpg 1024w, https://www.graylight-imaging.com/wp-content/uploads/2025/12/Graylight-Imagings-Holiday-time-768x576.jpg 768w, https://graylight-imaging.com/wp-content/uploads/2025/12/Graylight-Imagings-Holiday-time-1536x1152.jpg 1536w, https://graylight-imaging.com/wp-content/uploads/2025/12/Graylight-Imagings-Holiday-time-120x90.jpg 120w, https://graylight-imaging.com/wp-content/uploads/2025/12/Graylight-Imagings-Holiday-time-1920x1440.jpg 1920w, https://graylight-imaging.com/wp-content/uploads/2025/12/Graylight-Imagings-Holiday-time-510x382.jpg 510w, https://graylight-imaging.com/wp-content/uploads/2025/12/Graylight-Imagings-Holiday-time-1080x810.jpg 1080w" sizes="(max-width: 2000px) 100vw, 2000px" class="wp-image-284879" /></span>
			</div><div class="et_pb_module et_pb_text et_pb_text_56  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h6>Building our values</h6></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_57  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>We work extremely hard year-round, but before the holidays we intensify our activities even further. The Christmas activities described in this post show that, for our team, the end of the year is more than a summary of completed projects and initiatives.</p>
<p>Above all, it is a time to strengthen relationships and share goodness through specific charitable activities. We believe that cooperation and small gestures build lasting values. With this approach, we close out the year and look forward to the challenges, projects, and activities that await us in the new year with optimism.</p></div>
			</div>
			</div>
				
				
				
				
			</div>
				
				
			</div>

		</div>
	</div>
	<p>The post <a href="https://graylight-imaging.com/blog/graylight-imagings-relationships-and-shared-values-preparing-for-the-holidays/">Graylight Imaging’s relationships and shared values: preparing for the holidays</a> appeared first on <a href="https://graylight-imaging.com">Graylight Imaging</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Mamba Rising: Are State Space Models like U-Mamba Going to Replace Ordinary U-Net?</title>
		<link>https://graylight-imaging.com/blog/u-mamba-rising/</link>
		
		<dc:creator><![CDATA[Jacek Karolczak]]></dc:creator>
		<pubDate>Fri, 05 Dec 2025 13:55:53 +0000</pubDate>
				<category><![CDATA[AI in Healthcare]]></category>
		<category><![CDATA[All]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Custom algorithm development]]></category>
		<category><![CDATA[Medical image analysis]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Medical AI]]></category>
		<category><![CDATA[Medical algorithms]]></category>
		<category><![CDATA[Medical image segmentation]]></category>
		<guid isPermaLink="false">https://origin.graylight-imaging.com/?p=284766</guid>

					<description><![CDATA[<p>U-Mamba challenges U-Net’s long reign by adding efficient long-range reasoning through State Space Models — without the heavy cost of transformers.</p>
<p>The post <a href="https://graylight-imaging.com/blog/u-mamba-rising/">Mamba Rising: Are State Space Models like U-Mamba Going to Replace Ordinary U-Net?</a> appeared first on <a href="https://graylight-imaging.com">Graylight Imaging</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="et-l et-l--post">
<div class="et_builder_inner_content et_pb_gutters3">
		<div class="et_pb_section et_pb_section_4 et_section_regular et_section_transparent" >
				
				
				
				
				
				
				<div class="et_pb_row et_pb_row_4">
				<div class="et_pb_column et_pb_column_4_4 et_pb_column_4  et_pb_css_mix_blend_mode_passthrough et-last-child">
				
				
				
				
				<div class="et_pb_module et_pb_text et_pb_text_58  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>The U-Net has been the gold standard for <a href="https://graylight-imaging.com/technology/medical-image-analysis/">medical image semantic segmentation</a> for years. While its elegant architecture remains effective, its success has led to an observable plateau. In fact, the nnU-Net framework [1], which represents the current state-of-the-art approach for training the U-Net, consistently achieves extremely high performance, setting a difficult benchmark for competitors.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_59  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h2>The Problems with U-Net Successors</h2></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_60  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Many new models are merely incremental variations of the classic U-Net, like UNet++ [2] or Attention U-Net [3], offering only marginal gains on specific tasks. Furthermore, new architectures often fail to hold up when re-evaluated in later works: simple enhancements to the U-Net&#8217;s training regime frequently yield the same improvements claimed by these supposedly superior models. Transformer-based approaches like UNETR++ [4], on the other hand, are notoriously resource-intensive, requiring large data volumes and expensive GPUs with substantial VRAM. Critically, they also share the same fundamental issue: they often fail to demonstrate lasting superiority over improved U-Net baselines. This highlights a clear need: the field requires a novel architecture that offers a significant leap in performance and is not dependent on large data volumes. This is precisely the gap that U-Mamba [5] aims to fill.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_61  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h3>What are State Space Models (SSMs)?</h3></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_62  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>The search for efficiency in deep learning has led to the exploration of new architectural paradigms like State Space Models (SSMs) [6]. The motivation for SSMs comes from a key weakness in transformers: their self-attention mechanism scales quadratically. This means that as an image or sequence gets larger, the computation required explodes. SSMs are an entirely different approach. The promise of SSMs is linear scaling. They are designed to process long sequences of data with incredible efficiency. To put this in perspective: Imagine you want to double your patch size to capture more context. For a transformer, this 2x increase in input size causes the memory and compute cost to surge by 4x (2<sup>2</sup>). In contrast, for U-Mamba, the cost simply doubles.</p>
<p>SSMs originate from control theory and the mathematical modelling of physical systems, often using linear time-invariant (LTI) concepts. Unlike transformers, which calculate global interactions through parallel attention weights, SSMs process information sequentially. At their core, they maintain a compressed &#8220;state&#8221; of all the information seen so far, updating it with each new piece of data. The key innovation in Mamba is a selection mechanism. This &#8220;gate&#8221; intelligently decides what information to keep in its state and what to forget. This reliance on a hidden state, rather than a global attention matrix, is the fundamental difference from the transformer architecture. Crucially, unlike recurrent neural networks, SSMs are structured such that the state computation can be transformed into a convolution, enabling massive parallel processing during training.</p>
<p>In a medical context, this is powerful. We can treat an image as a long sequence of pixels. An SSM can &#8220;see&#8221; a pixel in the top-left corner and, thousands of pixels later, remember its context when analysing the bottom-right corner. This allows it to model long-range dependencies. For a radiologist, this is like understanding the subtle relationship between a small lesion in one lobe of the lung and a faint pattern in another. This global understanding is something traditional U-Nets can struggle with.</p></div>
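The recurrence described above can be sketched in a few lines (a toy linear time-invariant SSM in NumPy; real Mamba layers use learned, input-dependent parameters and a hardware-aware parallel scan rather than a Python loop):

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Toy LTI state space scan: h_t = A @ h_{t-1} + B @ x_t, y_t = C @ h_t.
    Cost grows linearly with sequence length; the hidden state h is a
    fixed-size summary of everything seen so far."""
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:                      # one step per sequence element
        h = A @ h + B @ np.atleast_1d(x_t)
        ys.append(C @ h)
    return np.array(ys)

rng = np.random.default_rng(0)
A = 0.9 * np.eye(4)                    # slowly decaying memory of the past
B = rng.normal(size=(4, 1))            # how each input enters the state
C = rng.normal(size=(1, 4))            # how the state is read out
y = ssm_scan(rng.normal(size=100), A, B, C)
print(y.shape)  # (100, 1)
```

However long the input sequence, the state `h` stays the same size, which is the structural reason for the linear (rather than quadratic) scaling.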
			</div><div class="et_pb_module et_pb_text et_pb_text_63  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h4>U-Mamba architecture</h4></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_64  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>The U-Mamba architecture is a hybrid model, and its motivation is very clever: don&#8217;t reinvent the wheel, just improve it. The U-Net is brilliant at learning local features – edges, textures, and small shapes. Its skip-connection structure is unmatched for preserving fine-grained spatial detail. U-Mamba does not throw this away. Instead, it augments the U-Net by replacing some standard convolutional blocks with the new U-Mamba Block.</p>
<p>The promise here is to get the best of both worlds. As seen in the architecture diagram, the model uses the familiar U-Net encoder-decoder skeleton. The convolutional layers capture the &#8220;what&#8221;, e.g., &#8220;this texture looks like a cell&#8221;. The U-Mamba blocks, integrated within, provide the &#8220;where&#8221; and &#8220;why&#8221;, e.g., this cell&#8217;s relationship to the entire tissue structure suggests it&#8217;s anomalous. It is a synergistic design where convolutions handle the local details, and Mamba handles the global context. This allows the model to &#8220;think&#8221; more like an expert, using broad context to inform local decisions.</p></div>
			</div><div class="et_pb_module et_pb_image et_pb_image_13">
				
				
				
				
				<span class="et_pb_image_wrap "><img loading="lazy" decoding="async" width="1379" height="798" src="https://origin.graylight-imaging.com/wp-content/uploads/2025/12/mamba-rising.png" alt="" title="mamba rising" srcset="https://www.graylight-imaging.com/wp-content/uploads/2025/12/mamba-rising.png 1379w, https://graylight-imaging.com/wp-content/uploads/2025/12/mamba-rising-300x174.png 300w, https://www.graylight-imaging.com/wp-content/uploads/2025/12/mamba-rising-1024x593.png 1024w, https://graylight-imaging.com/wp-content/uploads/2025/12/mamba-rising-768x444.png 768w, https://graylight-imaging.com/wp-content/uploads/2025/12/mamba-rising-120x69.png 120w, https://graylight-imaging.com/wp-content/uploads/2025/12/mamba-rising-1080x625.png 1080w" sizes="(max-width: 1379px) 100vw, 1379px" class="wp-image-284779" /></span>
			</div><div class="et_pb_module et_pb_text et_pb_text_65  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p style="text-align: center;"><em>Figure 1. Overview of the key features of the U-Mamba architecture. (a) U-Mamba Building Block: This block is composed of two successive residual blocks followed by the Mamba block, which is included to enhance long-range dependencies. (b) U-Mamba Encoder Architecture: This illustrates the overall architecture of the U-Mamba Encoder configuration, where the U-Mamba block is included in all blocks building the encoder. Figure sourced from [5].</em></p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_66  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>To achieve this, the U-Mamba focuses on three key functional components:</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_67  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><ul class="list-bottom">
<li>Sequential modelling: The core State Space Model within the U-Mamba Block is responsible for processing data as a sequence, efficiently modelling long-range dependencies within the feature map.</li>
<li>Dynamic gating: The block utilizes a complex gating mechanism (including a linear layer, Sigmoid Linear Unit (SiLU) activation, and multiplication) to dynamically modulate the flow of information based on the input.</li>
<li>Spatial integrity: The overall U-Net structure retains its defining features: strided convolutions in the encoder for efficient downsampling and transposed convolution layers in the decoder for precise upsampling. Crucially, the skip connections are retained to ensure that fine, high-resolution details are passed directly across the model.</li>
</ul></div>
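The dynamic gating component on its own can be sketched like this (a toy NumPy version; the weight shapes, and `np.tanh` standing in for the SSM branch, are placeholders rather than the actual U-Mamba implementation):

```python
import numpy as np

def silu(z):
    """SiLU (swish) activation: z * sigmoid(z)."""
    return z / (1.0 + np.exp(-z))

def gated_branch(x, W_main, W_gate, ssm_branch):
    """Mamba-style gating: the gate branch decides, per feature, how much
    of the sequence-modelling branch's output to pass through."""
    main = ssm_branch(x @ W_main)      # sequence-modelling path (SSM stand-in)
    gate = silu(x @ W_gate)            # data-dependent gate
    return main * gate                 # elementwise modulation

rng = np.random.default_rng(1)
x = rng.normal(size=(16, 8))           # 16 tokens, 8 features
W_main = rng.normal(size=(8, 8))
W_gate = rng.normal(size=(8, 8))
out = gated_branch(x, W_main, W_gate, ssm_branch=np.tanh)
print(out.shape)  # (16, 8)
```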
			</div><div class="et_pb_module et_pb_text et_pb_text_68  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>U-Mamba can be run in two configurations. The first, where the U-Mamba block is used solely in the bottleneck, is called U-Mamba Bottleneck. The second, where the U-Mamba block is used in all encoder blocks, is called U-Mamba Encoder. While U-Mamba Bottleneck performs slightly worse on some tasks, the benefit of including the U-Mamba block in the bottleneck only is that it is significantly less memory intensive.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_69  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h5>U-Mamba: A Plug-and-Play Solution</h5></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_70  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Beyond its novel architecture, one of U-Mamba&#8217;s greatest strengths is its practicality. It is not just a theoretical paper; it is designed to be plug-and-play. Installation is straightforward, involving only a few simple `pip install` commands. The decision to build it directly on the nnU-Net framework was a critical one: it ensures a fair comparison against the reigning standard, which is invaluable for researchers, because it means you are testing the architecture itself rather than a specific, lucky training configuration. Because U-Mamba uses the same data format and preprocessing as nnU-Net, teams can reuse their existing nnU-Net pipelines and skip the time-consuming preprocessing step if it has already been performed, which drastically lowers the barrier to adoption. In the worst case, you may only need to modify the `plans.json` file.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_71  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h6>Hands-On Experience</h6></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_72  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Our internal experiments provide a practical, hands-on verdict. The answer to the question in the title is clear: U-Mamba is not yet going to replace U-Net, but it is the most viable challenger we have seen to date. We found that U-Mamba does not prove superior to U-Net on every task. However, it demonstrated significant advantages in specific, challenging areas. For instance, it performed exceptionally well when segmenting very small objects, likely because it could use global context to find them. In abdominal organ segmentation (see Fig. 2), U-Mamba produced more accurate segmentation masks for the liver and stomach in CT scans, and the gallbladder in MRI scans, relative to established U-Net and Transformer baselines. Furthermore, U-Mamba&#8217;s training dynamics were impressive: it often reached good, stable segmentation earlier in training and did not require as much heavy-handed tuning (such as custom loss functions or massive patch sizes) as nnU-Net sometimes needs to reach its peak. The promises are largely fulfilled. There is a clear trade-off: U-Mamba requires more VRAM than U-Net (U-Mamba Bottleneck about 2x), but it is still far more accessible than a full transformer model. It represents a powerful new tool in our developer toolbox.</p></div>
			</div><div class="et_pb_module et_pb_image et_pb_image_14">
				
				
				
				
				<span class="et_pb_image_wrap "><img loading="lazy" decoding="async" width="1379" height="813" src="https://origin.graylight-imaging.com/wp-content/uploads/2025/12/semparison-of-segmentations-examples-for-U-Mamba.png" alt="" title="semparison of segmentations examples for U-Mamba" srcset="https://www.graylight-imaging.com/wp-content/uploads/2025/12/semparison-of-segmentations-examples-for-U-Mamba.png 1379w, https://graylight-imaging.com/wp-content/uploads/2025/12/semparison-of-segmentations-examples-for-U-Mamba-300x177.png 300w, https://www.graylight-imaging.com/wp-content/uploads/2025/12/semparison-of-segmentations-examples-for-U-Mamba-1024x604.png 1024w, https://www.graylight-imaging.com/wp-content/uploads/2025/12/semparison-of-segmentations-examples-for-U-Mamba-768x453.png 768w, https://graylight-imaging.com/wp-content/uploads/2025/12/semparison-of-segmentations-examples-for-U-Mamba-120x71.png 120w, https://graylight-imaging.com/wp-content/uploads/2025/12/semparison-of-segmentations-examples-for-U-Mamba-1080x637.png 1080w" sizes="(max-width: 1379px) 100vw, 1379px" class="wp-image-284809" /></span>
			</div><div class="et_pb_module et_pb_text et_pb_text_73  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p style="text-align: center;"><em>Figure 2. Comparison of segmentation examples for U-Mamba and predecessor models. This figure provides visual results for abdominal organ segmentation on CT (1st and 2nd rows) and MRI scans (3rd and 4th rows). Figure sourced from [5].</em></p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_74  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Resources:</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_75  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p><strong>[1]</strong> Isensee, F. et al. (2024). nnU-Net Revisited: A Call for Rigorous Validation in 3D Medical Image Segmentation. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2024 (pp. 488–498). Springer Nature Switzerland.</p>
<p><strong>[2]</strong> Zhou, Z., Siddiquee, M., Tajbakhsh, N., &amp; Liang, J. (2019). UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation. IEEE Transactions on Medical Imaging.</p>
<p><strong>[3]</strong> Oktay, O. et al. (2018). Attention U-Net: Learning Where to Look for the Pancreas. arXiv preprint arXiv:1804.03999.</p>
<p><strong>[4]</strong> Shaker, A., Maaz, M., Rasheed, H., Khan, S., Yang, M.H., &amp; Shahbaz Khan, F. (2024). UNETR++: Delving Into Efficient and Accurate 3D Medical Image Segmentation. IEEE Transactions on Medical Imaging, 43(9), 3377-3390.</p>
<p><strong>[5]</strong> Ma, J., Li, F., &amp; Wang, B. (2024). U-Mamba: Enhancing Long-range Dependency for Biomedical Image Segmentation. arXiv preprint arXiv:2401.04722.</p>
<p><strong>[6]</strong> Gu, A., Johnson, I., Goel, K., Saab, K., Dao, T., Rudra, A., &amp; Re, C. (2021). Combining Recurrent, Convolutional, and Continuous-time Models with Linear State-Space Layers. Advances in Neural Information Processing Systems 34 (NeurIPS 2021).</p></div>
			</div>
			</div>
				
				
				
				
			</div>
				
				
			</div>
		</p></div>
</p></div>
<p>The post <a href="https://graylight-imaging.com/blog/u-mamba-rising/">Mamba Rising: Are State Space Models like U-Mamba Going to Replace Ordinary U-Net?</a> appeared first on <a href="https://graylight-imaging.com">Graylight Imaging</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>BraTS 2025 – Another Challenge Success for the Graylight Imaging Team</title>
		<link>https://graylight-imaging.com/blog/brats-2025-another-challenge-success-for-the-graylight-imaging-team/</link>
		
		<dc:creator><![CDATA[Agnieszka Klich-Dubik]]></dc:creator>
		<pubDate>Fri, 10 Oct 2025 10:27:18 +0000</pubDate>
				<category><![CDATA[All]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Custom algorithm development]]></category>
		<category><![CDATA[Medical image analysis]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Machine learning in healthcare]]></category>
		<category><![CDATA[Medical algorithms]]></category>
		<category><![CDATA[Medical image segmentation]]></category>
		<guid isPermaLink="false">https://graylight-imaging.com/?p=284687</guid>

					<description><![CDATA[<p>BraTS is a prestigious competition focused on AI-based medical image analysis. This year, the Graylight Imaging team took first place in the Brain Metastasis Segmentation. </p>
<p>The post <a href="https://graylight-imaging.com/blog/brats-2025-another-challenge-success-for-the-graylight-imaging-team/">BraTS 2025 – Another Challenge Success for the Graylight Imaging Team</a> appeared first on <a href="https://graylight-imaging.com">Graylight Imaging</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="et-l et-l--post">
			<div class="et_builder_inner_content et_pb_gutters3">
		
<div class="et_pb_section et_pb_section_5 et_section_regular et_section_transparent" >
				
				
				
				
				
				
				<div class="et_pb_row et_pb_row_5">
				<div class="et_pb_column et_pb_column_4_4 et_pb_column_5  et_pb_css_mix_blend_mode_passthrough et-last-child">
				
				
				
				
				<div class="et_pb_module et_pb_text et_pb_text_76  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>The Graylight Imaging team won first place in this year’s prestigious Brain Tumor Segmentation (BraTS 2025) Challenge. The results were announced during the 28th International Conference on Medical Image Computing and Computer-Assisted Intervention, held in South Korea. <strong>Maria Bancerek, Piotr Rudzki, and Jakub Nalepa</strong> represented Graylight Imaging.</p>
<p>Congratulations to the entire team on this outstanding achievement! We are proud and honored to have such top-tier professionals representing us on the international stage.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_77  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h2>BraTS 2025 Challenge – A Global Benchmark in AI for Medical Imaging</h2></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_78  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>The BraTS Challenge is one of the most important and recognizable events in the field of AI-based <a href="https://graylight-imaging.com/technology/medical-image-analysis/">medical image analysis</a>. It has a long-standing tradition, with the first edition taking place in 2012.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_79  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h3>BraTS Lighthouse – An Expanded Challenge Format</h3></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_80  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>This year, the organizers held the competition in a newly expanded format called BraTS Lighthouse, introducing additional tasks beyond the traditional focus on brain tumor segmentation. They also challenged participants with clinically relevant tasks such as assessing lesion progression and segmenting metastatic lesions.</p>
<p>Our team took on the Brain Metastasis Segmentation task and achieved the best result in this category.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_81  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p style="text-align: left;"><em>We have been working on the Graylight Imaging technology for detecting and delineating a variety of tumors and lesions for years already, and BraTS has been always an “ultimate benchmark” for us. Detecting and tracking brain metastases is an extremely difficult and important clinical task, and we’re so happy to push forward what’s possible with AI. We strongly believe that the thoroughly validated AI models will change the way we see &amp; treat the patients</em> &#8211; comments Jakub Nalepa, Ph.D., D.Sc. from Graylight Imaging.</p></div>
			</div><div class="et_pb_module et_pb_image et_pb_image_15">
				
				
				
				
				<span class="et_pb_image_wrap "><img loading="lazy" decoding="async" width="768" height="432" src="https://www.graylight-imaging.com/wp-content/uploads/2025/10/brats-2025.png" alt="The quotation from blog post about BraTS 2025: &quot;We have been working on the Graylight Imaging technology for detecting and delineating a variety of tumors and lesions for years already, and BraTS has been always an “ultimate benchmark” for us.&quot;" title="brats 2025" srcset="https://www.graylight-imaging.com/wp-content/uploads/2025/10/brats-2025.png 768w, https://www.graylight-imaging.com/wp-content/uploads/2025/10/brats-2025-300x169.png 300w, https://graylight-imaging.com/wp-content/uploads/2025/10/brats-2025-120x68.png 120w" sizes="(max-width: 768px) 100vw, 768px" class="wp-image-284720" /></span>
			</div><div class="et_pb_module et_pb_text et_pb_text_82  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h4>This Year&#8217;s Edition</h4></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_83  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>In the 2025 edition of BraTS Lighthouse, participants could select from 11 challenges. Each challenge focused on one of three areas: (sub)region segmentation (SEG), image synthesis (SYN), or classification (CLASS).</p>
<p>The Graylight Imaging team participated in the Segmentation of Pre- and Post-Treatment Brain Metastases (SEG) challenge.</p>
<p>Monitoring brain metastases is both time-consuming and labor-intensive, especially when multiple lesions are involved and manual techniques are used. Typically, brain metastases are assessed by measuring their largest unidimensional diameter. However, accurately estimating the volume of both the lesions and surrounding oedema is crucial for informed clinical decision-making and improved treatment outcomes.</p>
<p>The team’s goal was to develop a robust, machine learning–based algorithm capable of accurately identifying brain metastases of varying sizes in both pre- and post-treatment MRI scans.</p>
<p>Thanks to this AutoML technology, automatic segmentation of brain metastases (with special emphasis on the enhancing tumor) and the associated edema and resection cavity can save physicians valuable time while delivering consistent, reproducible results.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_84  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h5>Dataset for Algorithm Development</h5></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_85  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Participants worked with a dataset comprising pre- and post-treatment brain MRI scans collected from multiple institutions under real-world clinical conditions. Due to variations in equipment and imaging protocols, the dataset featured a wide range of image quality—reflecting the diversity of everyday clinical practice.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_86  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h6>Our History with BraTS – A Quick Recap</h6></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_87  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>This is not our first success in the BraTS competition. With nearly two decades of experience in medical image analysis, we’ve participated in several editions since our debut in 2017.</p>
<p>In 2021, our data scientists placed 6th (and 5th after the validation phase)—a major achievement that confirmed the world-class quality of our work.</p>
<p>We performed even better in 2022, when we entered two competitions: the Brain Tumor Segmentation Challenge 2022 and the Federated Tumor Segmentation Challenge 2022 (FeTS). In the latter, which required evaluating algorithm performance on out-of-sample data in a federated setup, our team secured an impressive second place. </p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_88  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h6>Artificial Intelligence in the Service of Medicine</h6></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_89  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>We strongly believe that innovative technologies play a key role in improving cancer care. AI can assist in detecting even the smallest brain lesions, supporting physicians in diagnosis and treatment planning.</p>
<p>Artificial intelligence in oncology enables the precise measurement of tumors and analysis of numerous tumor characteristics. While there’s still much work to be done, the development of AI-powered algorithms is already having a real, positive impact on treatment outcomes.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_90  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Resource:</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_91  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>[1] BraTS: <a href="https://www.synapse.org/Synapse:syn64153130/wiki/630130" target="_blank" rel="noopener">https://www.synapse.org/Synapse:syn64153130/wiki/630130</a></p></div>
			</div>
			</div>
				
				
				
				
			</div>
				
				
			</div>

		</div>
	</div>
	<p>The post <a href="https://graylight-imaging.com/blog/brats-2025-another-challenge-success-for-the-graylight-imaging-team/">BraTS 2025 – Another Challenge Success for the Graylight Imaging Team</a> appeared first on <a href="https://graylight-imaging.com">Graylight Imaging</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Domain Adaptation</title>
		<link>https://graylight-imaging.com/blog/domain-adaptation/</link>
		
		<dc:creator><![CDATA[Szymon Ligeza]]></dc:creator>
		<pubDate>Tue, 23 Sep 2025 13:21:58 +0000</pubDate>
				<category><![CDATA[All]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Medical image analysis]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Machine learning in healthcare]]></category>
		<category><![CDATA[Medical AI]]></category>
		<category><![CDATA[Medical algorithms]]></category>
		<guid isPermaLink="false">https://graylight-imaging.com/?p=284505</guid>

					<description><![CDATA[<p>Domain adaptation in medical imaging as a solution for improving the robustness and usefulness of AI models. Learn more. </p>
<p>The post <a href="https://graylight-imaging.com/blog/domain-adaptation/">Domain Adaptation</a> appeared first on <a href="https://graylight-imaging.com">Graylight Imaging</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="et-l et-l--post">
<div class="et_builder_inner_content et_pb_gutters3">
		<div class="et_pb_section et_pb_section_6 et_section_regular et_section_transparent" >
				
				
				
				
				
				
				<div class="et_pb_row et_pb_row_6">
				<div class="et_pb_column et_pb_column_4_4 et_pb_column_6  et_pb_css_mix_blend_mode_passthrough et-last-child">
				
				
				
				
				<div class="et_pb_module et_pb_text et_pb_text_92  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p><a href="https://graylight-imaging.com/services/medical-algorithms/">Medical imaging AI models</a> often face challenges when applied to data from different hospitals, devices, or patient populations. Domain adaptation in medical imaging offers solutions to improve model robustness and generalizability.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_93  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h2>What is Domain Adaptation in Medical Imaging?</h2></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_94  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Domain adaptation means teaching an AI model trained on one dataset to perform well on another dataset that looks different. It is about reducing the gap between the &#8216;source domain&#8217; (training data) and the &#8216;target domain&#8217; (new data), so that the model can handle differences like scanner type, imaging protocol, or patient population.</p></div>
			</div><div class="et_pb_module et_pb_image et_pb_image_16">
				
				
				
				
				<span class="et_pb_image_wrap "><img loading="lazy" decoding="async" width="505" height="274" src="https://graylight-imaging.com/wp-content/uploads/2025/09/Domain-Adaptation.png" alt="" title="Domain Adaptation" srcset="https://graylight-imaging.com/wp-content/uploads/2025/09/Domain-Adaptation.png 505w, https://graylight-imaging.com/wp-content/uploads/2025/09/Domain-Adaptation-300x163.png 300w, https://graylight-imaging.com/wp-content/uploads/2025/09/Domain-Adaptation-120x65.png 120w" sizes="(max-width: 505px) 100vw, 505px" class="wp-image-284512" /></span>
			</div><div class="et_pb_module et_pb_text et_pb_text_95  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p style="text-align: center;"><a href="https://www.researchgate.net/figure/llustration-of-the-domain-shift-phenomenon-Quinonero-Candela-et-al-2009-top-row_fig1_366891874" target="_blank" rel="noopener"><em>https://www.researchgate.net/figure/llustration-of-the-domain-shift-phenomenon-Quinonero-Candela-et-al-2009-top-row_fig1_366891874</em></a></p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_96  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h3>Why it matters in medical imaging compared to natural image processing:</h3></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_97  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Medical imaging data is far less standardized than natural images. Scanners, acquisition protocols, and patient demographics introduce variability that is much stronger than differences between everyday photographs. Without domain adaptation, AI models trained on one hospital’s images may not generalize to another, making it a more critical challenge than in typical computer vision tasks.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_98  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h4>Techniques for Domain Adaptation</h4></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_99  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><table border="2" style="width: 70%; border-collapse: collapse; border-style: solid; border-color: #d62857; margin-left: auto; margin-right: auto;" cellpadding="10">
<tbody>
<tr>
<td style="text-align: center; vertical-align: middle;"><strong>Category</strong></td>
<td style="text-align: center; vertical-align: middle;"><strong>Approach</strong></td>
<td style="text-align: center; vertical-align: middle;"><strong>How it Works</strong></td>
<td style="text-align: center; vertical-align: middle;"><strong>Example</strong></td>
</tr>
<tr>
<td style="text-align: center;"><strong>Data-level</strong></td>
<td style="text-align: center;">Normalization &amp; augmentation</td>
<td style="text-align: center;">Adjust images or add variation to mimic new domains</td>
<td style="text-align: center;">MRI intensity normalization; noisy low-dose CT</td>
</tr>
<tr>
<td style="text-align: center;"></td>
<td style="text-align: center;">Synthetic / style transfer</td>
<td style="text-align: center;">Generate or restyle data to match the target</td>
<td style="text-align: center;">CycleGAN for stain adaptation</td>
</tr>
<tr>
<td style="text-align: center;"><strong>Feature-level</strong></td>
<td style="text-align: center;">Domain-invariant features</td>
<td style="text-align: center;">Learn representations common across domains</td>
<td style="text-align: center;">Lung nodule shape independent of the scanner</td>
</tr>
<tr>
<td style="text-align: center;"></td>
<td style="text-align: center;">Adversarial training (DANN)</td>
<td style="text-align: center;">Fool a domain classifier to align feature spaces</td>
<td style="text-align: center;">MRI features aligned across vendors</td>
</tr>
<tr>
<td style="text-align: center;"><strong>Model-level</strong></td>
<td style="text-align: center;">Transfer &amp; self-supervision</td>
<td style="text-align: center;">Fine-tune or pretrain on unlabeled data</td>
<td style="text-align: center;">Pretrained MRI model adapted to a new hospital</td>
</tr>
<tr>
<td style="text-align: center;"><strong>Advanced</strong></td>
<td style="text-align: center;">Federated / test-time adaptation</td>
<td style="text-align: center;">Train across sites or adapt during inference</td>
<td style="text-align: center;">Federated lung nodule detection across hospitals</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p></div>
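As a minimal sketch of the simplest data-level technique in the table, per-scan z-score intensity normalization strips scanner-specific intensity offsets and scales before training or inference. The voxel lists and intensity values below are made up for illustration:

```python
from statistics import mean, pstdev

def zscore_normalize(voxels):
    """Map a scan's intensities to zero mean and unit standard deviation."""
    mu = mean(voxels)
    sigma = pstdev(voxels) or 1.0  # guard against constant images
    return [(v - mu) / sigma for v in voxels]

scan_a = [100.0, 110.0, 120.0]     # anatomy imaged on scanner A
scan_b = [1000.0, 1100.0, 1200.0]  # same anatomy, 10x intensity scale
print(zscore_normalize(scan_a))  # roughly [-1.22, 0.0, 1.22]
print(zscore_normalize(scan_b))  # the same values: the scale shift is gone
```

After normalization, both scans land in the same intensity range, which is exactly the kind of nuisance variation data-level adaptation is meant to remove.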
			</div><div class="et_pb_module et_pb_text et_pb_text_100  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h5>Benefits of Domain Adaptation for Healthcare</h5></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_101  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p style="padding-left: 40px;"><span style="color: #d62857;"><strong>1.</strong></span> Improved diagnostic accuracy across institutions — ensuring consistent results even when scanners or protocols differ.</p>
<p style="padding-left: 40px;"><span style="color: #d62857;"><strong>2.</strong></span> Better generalization to real-world clinical scenarios — models remain reliable in diverse hospital settings.</p>
<p style="padding-left: 40px;"><span style="color: #d62857;"><strong>3.</strong></span> Reduced need for costly re-annotation — less manual labeling required for new datasets.</p>
<p style="padding-left: 40px;"><span style="color: #d62857;"><strong>4.</strong></span> Faster deployment of AI solutions — models can be adapted quickly without complete retraining.</p>
<p style="padding-left: 40px;"><span style="color: #d62857;"><strong>5.</strong></span> Increased trust and adoption by clinicians — consistent performance builds confidence in AI tools.</p>
<p style="padding-left: 40px;"><span style="color: #d62857;"><strong>6.</strong></span> Support for rare or underrepresented cases — domain adaptation allows models to transfer knowledge to smaller or less common datasets.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_102  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h6>Practical Applications and Case Studies</h6></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_103  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Here are some of the most prominent real-world unsupervised domain adaptation applications in medical imaging:</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_104  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p style="padding-left: 40px;"><span style="color: #d62857;"><strong>1.</strong></span> Brain lesion segmentation across MRI scanners — Kamnitsas et al. (2017) showed that adversarial networks can adapt models trained on one MRI scanner to another without using target labels.</p>
<p style="padding-left: 40px;"><span style="color: #d62857;"><strong>2.</strong></span> Histopathology stain adaptation — CycleGAN-based methods successfully adapted slides between different staining protocols, improving cancer detection without target annotations.</p>
<p style="padding-left: 40px;"><span style="color: #d62857;"><strong>3.</strong></span> Chest X-ray analysis — Unsupervised methods aligned datasets from different hospitals, reducing the domain gap caused by varied acquisition protocols.</p>
<p style="padding-left: 40px;"><span style="color: #d62857;"><strong>4.</strong></span> Lung nodule detection in CT — Domain adaptation improved performance on low-dose screening CTs when models were trained only on high-dose scans.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_105  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h6>Example Methodology – Domain-Adversarial Neural Networks (DANN)</h6></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_106  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>One of the most prominent examples of how domain adaptation can be applied is the Domain-Adversarial Neural Network (DANN), a machine learning technique that helps a model work well on data from different sources (domains). This approach becomes especially valuable when we have no labeled data at all in the target domain, which makes adaptation particularly challenging.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_107  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>The key idea is simple:</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_108  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p><span style="text-decoration: underline;"><strong>The model has two jobs at once:</strong></span></p>
<ul>
<li>Learn to solve the main task (e.g., detect a tumor in an MRI).</li>
<li>Hide the “domain identity” of the data (e.g., whether the MRI came from Hospital A or Hospital B).</li>
</ul></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_109  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p><span style="text-decoration: underline;"><strong>To do this, the network is trained with an adversary:</strong></span></p>
<ul>
<li>One part of the model tries to recognize which domain the data came from.</li>
<li>Another part tries to fool this domain classifier by learning features that look the same no matter the source.</li>
</ul></div>
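The mechanism that makes this adversarial game trainable end to end is the gradient-reversal layer. Below is a toy, framework-free sketch of just that layer (not a full DANN): identity on the forward pass, negated and scaled gradient on the backward pass, so the feature extractor ascends the domain loss while still descending the main task loss through the label head:

```python
# Toy gradient-reversal layer (GRL), the core trick in DANN.

def grl_forward(features):
    # Identity: the domain classifier sees the features unchanged.
    return features

def grl_backward(grad_from_domain_head, lam=1.0):
    # The feature extractor receives the negated, scaled gradient, so its
    # update confuses the domain classifier instead of helping it.
    return [-lam * g for g in grad_from_domain_head]

feats = [0.5, -1.0]
assert grl_forward(feats) == feats
print(grl_backward([0.2, -0.4]))  # [-0.2, 0.4]
```

In autograd frameworks this is implemented as a custom layer whose backward pass multiplies the incoming gradient by a negative factor; `lam` is the usual knob for how strongly domain confusion is weighted against the main task.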
			</div><div class="et_pb_module et_pb_image et_pb_image_17">
				
				
				
				
				<span class="et_pb_image_wrap "><img loading="lazy" decoding="async" width="402" height="184" src="https://graylight-imaging.com/wp-content/uploads/2025/09/Blog-post-about-Domain-Adaptation.png" alt="" title="Blog post about Domain Adaptation" srcset="https://graylight-imaging.com/wp-content/uploads/2025/09/Blog-post-about-Domain-Adaptation.png 402w, https://www.graylight-imaging.com/wp-content/uploads/2025/09/Blog-post-about-Domain-Adaptation-300x137.png 300w, https://graylight-imaging.com/wp-content/uploads/2025/09/Blog-post-about-Domain-Adaptation-120x55.png 120w, https://www.graylight-imaging.com/wp-content/uploads/2025/09/Blog-post-about-Domain-Adaptation-400x184.png 400w" sizes="(max-width: 402px) 100vw, 402px" class="wp-image-284580" /></span>
			</div><div class="et_pb_module et_pb_text et_pb_text_110  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p style="text-align: center;"><a href="https://medium.com/@2017csm1006/unsupervised-domain-adaptation-by-backpropagation-da730a190fd2" target="_blank" rel="nofollow noopener">https://medium.com/@2017csm1006/unsupervised-domain-adaptation-by-backpropagation-da730a190fd2</a></p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_111  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>As a result, the model ends up with domain-invariant features — representations that focus only on the medical problem (tumor vs. no tumor) and ignore irrelevant differences (scanner type, hospital, or patient demographics).</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_112  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>References:</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_113  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><ol>
<li>Ghafoorian M. et al. (2017). Transfer Learning for Domain Adaptation in MRI: Application in Brain Lesion Segmentation. IEEE Transactions on Medical Imaging.</li>
<li>Kermany D.S. et al. (2018). Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning. Cell.</li>
<li>Ganin Y., Lempitsky V. (2015). Unsupervised Domain Adaptation by Backpropagation. ICML.</li>
<li>Ganin Y. et al. (2016). Domain-Adversarial Training of Neural Networks. Journal of Machine Learning Research.</li>
<li>Sheller M.J. et al. (2020). Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data. Scientific Reports.</li>
<li>Kamnitsas K. et al. (2017). Unsupervised domain adaptation in brain lesion segmentation with adversarial networks. IPMI.</li>
</ol></div>
			</div>
			</div>
				
				
				
				
			</div>
				
				
			</div>
		</p></div>
</p></div>
<p>The post <a href="https://graylight-imaging.com/blog/domain-adaptation/">Domain Adaptation</a> appeared first on <a href="https://graylight-imaging.com">Graylight Imaging</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Artificial intelligence in medical imaging: from algorithm to diagnosis</title>
		<link>https://graylight-imaging.com/blog/artificial-intelligence-in-medical-imaging-from-algorithm-to-diagnosis/</link>
		
		<dc:creator><![CDATA[Agnieszka Klich-Dubik]]></dc:creator>
		<pubDate>Fri, 08 Aug 2025 12:39:00 +0000</pubDate>
				<category><![CDATA[AI in Healthcare]]></category>
		<category><![CDATA[All]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Machine learning in healthcare]]></category>
		<category><![CDATA[Medical AI]]></category>
		<category><![CDATA[Medical algorithms]]></category>
		<guid isPermaLink="false">https://graylight-imaging.com/?p=284048</guid>

					<description><![CDATA[<p>Artificial intelligence has played an increasingly prominent role in medical imaging. But where did this journey begin? Let's explore its origins.</p>
<p>The post <a href="https://graylight-imaging.com/blog/artificial-intelligence-in-medical-imaging-from-algorithm-to-diagnosis/">Artificial intelligence in medical imaging: from algorithm to diagnosis</a> appeared first on <a href="https://graylight-imaging.com">Graylight Imaging</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="et-l et-l--post">
<div class="et_builder_inner_content et_pb_gutters3">
		<div class="et_pb_section et_pb_section_7 et_section_regular et_section_transparent" >
				
				
				
				
				
				
				<div class="et_pb_row et_pb_row_7">
				<div class="et_pb_column et_pb_column_4_4 et_pb_column_7  et_pb_css_mix_blend_mode_passthrough et-last-child">
				
				
				
				
				<div class="et_pb_module et_pb_text et_pb_text_114  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
<div class="et_pb_text_inner"><p>Once a concept confined to computer science labs and speculative fiction, artificial intelligence is now transforming medicine, especially radiology and medical imaging. But this transformation has been anything but overnight. From theoretical beginnings in the 1940s to today’s deep learning-powered systems capable of interpreting complex scans, AI’s journey into medicine &#8211; together with the <a href="https://graylight-imaging.com/services/medical-algorithms/">development of medical algorithms</a> &#8211; is a story of persistence, innovation, and scientific breakthroughs.</p>
<p>This article traces that journey &#8211; from the earliest models and expert systems to the rise of neural networks, deep learning, and the challenges that lie ahead.</p>
<p>The post is based on a scientific paper: <em>The evolution of artificial intelligence in medical imaging: from computer science to machine and deep learning</em>, 2024.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_115  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h2>The foundations of artificial intelligence in medical imaging: from Turing to the first expert systems</h2></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_116  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>The roots of AI trace back to a simple question posed by Alan Turing in 1950: “Can a machine think?” His famous Turing test introduced the idea that computers could imitate human reasoning. This sparked decades of exploration into symbolic reasoning and artificial neural networks.</p>
<p>Symbolic AI, which relied on logic-based IF-THEN rules, dominated early efforts. It worked well in structured environments like games, but struggled in dynamic settings like healthcare. In contrast, machine learning introduced a data-driven approach, allowing systems to learn from patterns and experience, better suited to complex medical information.</p>
<p>By the 1960s, researchers like Frank Rosenblatt developed the perceptron, an early neural network model capable of basic image classification, seen by many as the first step toward AI in image analysis. Still, early models lacked the sophistication and scale required for practical use.</p>
<p>In the 1970s and 1980s, expert systems emerged as rule-based programs designed to support clinical decisions. While groundbreaking at the time, they had major limitations: they couldn’t adapt to new data, and they struggled with tasks requiring flexibility, like interpreting variable medical images.</p></div>
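The contrast between symbolic IF-THEN rules and the data-driven approach can be made concrete with a toy sketch (invented example, not from the paper): an expert hard-codes a decision boundary, while even the simplest learner recovers a boundary from labelled data.

```python
# Toy contrast between symbolic AI and machine learning (illustrative only).

# Symbolic AI: the expert hard-codes the decision rule.
def rule_based_flag(lesion_diameter_mm):
    if lesion_diameter_mm >= 10:   # fixed IF-THEN expert rule
        return "suspicious"
    return "benign"

# Machine learning, in its most minimal form: estimate the boundary from
# labelled examples by taking the midpoint between the class means.
def learn_threshold(diameters, labels):
    pos = [d for d, y in zip(diameters, labels) if y == 1]
    neg = [d for d, y in zip(diameters, labels) if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

threshold = learn_threshold([4, 6, 5, 14, 18, 16], [0, 0, 0, 1, 1, 1])
assert rule_based_flag(12) == "suspicious"
assert threshold == 10.5   # midpoint of class means (5 and 16), from data alone
```

The rule never changes no matter what data arrives; the learned threshold shifts with the data — exactly the adaptability early expert systems lacked.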
			</div><div class="et_pb_module et_pb_image et_pb_image_18">
				
				
				
				
				<span class="et_pb_image_wrap "><img loading="lazy" decoding="async" width="768" height="432" src="https://graylight-imaging.com/wp-content/uploads/2025/08/Artificial-intelligence-in-medical-imaging-from-algorithm-to-diagnosis.png" alt="The quote related to artificial intelligence in medical imaging area: &quot;AI’s next frontier is multimodal analysis – integrating imaging with genetic data, electronic health records, and even lifestyle information. This would enable personalized diagnostics tailored to individual patients.&quot;" title="Artificial intelligence in medical imaging from algorithm to diagnosis" srcset="https://graylight-imaging.com/wp-content/uploads/2025/08/Artificial-intelligence-in-medical-imaging-from-algorithm-to-diagnosis.png 768w, https://graylight-imaging.com/wp-content/uploads/2025/08/Artificial-intelligence-in-medical-imaging-from-algorithm-to-diagnosis-300x169.png 300w, https://www.graylight-imaging.com/wp-content/uploads/2025/08/Artificial-intelligence-in-medical-imaging-from-algorithm-to-diagnosis-120x68.png 120w" sizes="(max-width: 768px) 100vw, 768px" class="wp-image-284107" /></span>
			</div><div class="et_pb_module et_pb_text et_pb_text_117  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h3>The “AI Winter” and the path to deep learning</h3></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_118  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
<div class="et_pb_text_inner"><p>Enthusiasm for neural networks declined after the late 1960s, when Minsky and Papert showed that single-layer perceptrons couldn’t solve even simple non-linear problems such as XOR. The funding drought that followed became known as the first “AI winter.” For years, funding and interest dwindled.</p>
<p>That began to change in the 1990s. New algorithms, along with the spread of personal computers, helped AI regain momentum. These tools reintroduced AI to medical researchers, but the real breakthrough came with the emergence of deep learning.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_119  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h4>Neural networks reborn: AI meets medical imaging</h4></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_120  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>AI’s entry into radiology began modestly. In the 1980s, researchers used simple neural networks to assist with interpreting radiological data, including mammograms. These early tools needed manually crafted features and offered limited precision, but they planted the seeds of transformation.</p>
<p>By the late 1990s and early 2000s, Computer-Aided Diagnosis (CAD) systems became more common in clinical settings. These programs highlighted suspicious areas in X-rays or MRIs, acting as a second set of eyes for radiologists. However, they often produced too many false positives, lacked transparency, and couldn’t learn or improve over time.</p>
<p>The real revolution began with the rise of deep learning, particularly convolutional neural networks (CNNs). CNNs excel at image analysis, learning to identify features directly from raw data without human-defined rules. Combined with advances in GPU computing, open-source libraries (like TensorFlow and PyTorch), and access to large image datasets, deep learning opened the door to high-performance AI in medical imaging.</p></div>
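The "learning features directly from raw data" that makes CNNs powerful boils down to convolution: sliding a small kernel of weights over the pixel grid. In a trained CNN those weights are learned; the sketch below hand-sets a vertical-edge kernel purely to show what a learned feature detector computes (stdlib-only illustration).

```python
# Minimal 2-D convolution, the core CNN operation. A real CNN learns the
# kernel weights from data; here a hand-set edge kernel stands in for a
# learned feature detector.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0
            for a in range(kh):              # multiply-accumulate over the
                for b in range(kw):          # kernel window at (i, j)
                    acc += image[i + a][j + b] * kernel[a][b]
            row.append(acc)
        out.append(row)
    return out

# 4x4 "scan patch" with a sharp vertical intensity boundary.
patch = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
edge_kernel = [[-1, 1],
               [-1, 1]]   # responds where intensity jumps left-to-right

response = conv2d(patch, edge_kernel)
assert response == [[0, 18, 0]] * 3   # strong activation exactly at the edge
```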
			</div><div class="et_pb_module et_pb_text et_pb_text_121  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p><span style="text-decoration: underline;"><strong>Today’s AI models can:</strong></span></p>
<ul>
<li>Detect cancerous changes in images with accuracy comparable to human experts,</li>
<li>Classify tumors and tissue types,</li>
<li>Segment anatomical structures in CT and MRI scans,</li>
<li>Track disease progression,</li>
<li>Evaluate treatment response over time.</li>
</ul></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_122  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h5>From support tool to diagnostic partner</h5></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_123  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Recent studies show that in some scenarios, such as screening mammography, AI can equal or outperform two human radiologists working together. This shifts AI from a passive assistant to an active diagnostic partner.</p>
<p>Beyond imaging, AI technologies like natural language processing (NLP), recurrent neural networks, and generative models allow for automatic data labeling, medical report analysis, and integration into clinical workflows. With affordable computing power and open-source tools, even smaller research centers and hospitals can deploy custom AI solutions.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_124  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h6>Open challenges: what’s holding AI back?</h6></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_125  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Despite the progress, significant challenges remain before AI can be fully integrated into everyday clinical use:</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_126  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>a. Clinical validation</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_127  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
<div class="et_pb_text_inner"><p>Many AI systems perform well in research but haven&#8217;t been tested in real-world clinical environments. Prospective trials in mammography are crucial steps forward, but more multi-center, randomized studies are needed.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_128  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>b. The black box problem</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_129  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>AI models often lack transparency. Clinicians may hesitate to trust decisions they can&#8217;t interpret. This has driven interest in explainable AI &#8211; methods like heatmaps and saliency maps that highlight what the algorithm focused on when making a decision.</p></div>
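The intuition behind a saliency map can be sketched in a few lines: the magnitude of the output's gradient with respect to each input shows which inputs the decision is most sensitive to. Here the "model" is a toy linear scorer and the gradient is taken numerically (assumed example, not a description of any specific XAI tool):

```python
# Saliency-map sketch: sensitivity of a model's score to each input,
# estimated by finite differences. Large values mark the inputs the
# decision depends on most.

def model_score(pixels):
    weights = [0.0, 2.0, -0.5]   # toy model: one weight per input
    return sum(w * p for w, p in zip(weights, pixels))

def saliency(score_fn, pixels, eps=1e-6):
    base = score_fn(pixels)
    grads = []
    for i in range(len(pixels)):
        bumped = list(pixels)
        bumped[i] += eps               # perturb one input at a time
        grads.append(abs((score_fn(bumped) - base) / eps))
    return grads

sal = saliency(model_score, [1.0, 1.0, 1.0])
# Input 1 dominates the decision; input 0 is irrelevant to it:
assert max(sal) == sal[1] and round(sal[0], 3) == 0.0
```

In imaging, the same idea applied per pixel (via backpropagation rather than finite differences) yields the heatmaps clinicians can inspect.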
			</div><div class="et_pb_module et_pb_text et_pb_text_130  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>c. Ethics and regulation</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_131  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>AI systems must be fair, unbiased, and accountable. Unequal training data can introduce bias. Clear legal frameworks are also needed. Initiatives like the EU’s Artificial Intelligence Act aim to standardize ethical use and ensure safety.</p></div>
			</div><div class="et_pb_module et_pb_image et_pb_image_19">
				
				
				
				
				<span class="et_pb_image_wrap "><img loading="lazy" decoding="async" width="768" height="432" src="https://www.graylight-imaging.com/wp-content/uploads/2025/08/AI-in-medical-imaging-from-algorithm-to-diagnosis.png" alt="The quote related to artificial intelligence in medical imaging field: &quot;AI has evolved from abstract theory to a powerful tool in modern medicine.&lt;br /&gt;
In radiology, deep learning models can now match expert performance in many diagnostic tasks, offering faster, more consistent, and scalable analysis.&quot;" title="AI in medical imaging from algorithm to diagnosis" srcset="https://www.graylight-imaging.com/wp-content/uploads/2025/08/AI-in-medical-imaging-from-algorithm-to-diagnosis.png 768w, https://graylight-imaging.com/wp-content/uploads/2025/08/AI-in-medical-imaging-from-algorithm-to-diagnosis-300x169.png 300w, https://graylight-imaging.com/wp-content/uploads/2025/08/AI-in-medical-imaging-from-algorithm-to-diagnosis-120x68.png 120w" sizes="(max-width: 768px) 100vw, 768px" class="wp-image-284106" /></span>
			</div><div class="et_pb_module et_pb_text et_pb_text_132  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h6>The future: toward multimodal, personalised medicine</h6></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_133  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>AI’s next frontier is multimodal analysis &#8211; integrating imaging with genetic data, electronic health records, and even lifestyle information. This would enable personalized diagnostics tailored to individual patients.</p>
<p>New projects are pushing boundaries by developing systems that can learn across various data types simultaneously, mimicking the human ability to synthesize complex inputs.</p>
<p>Meanwhile, traditional machine learning (like decision trees) still has a place, especially where transparency, low cost, and simplicity are essential.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_134  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><h6>Conclusion: AI in medical imaging – a transformative force</h6></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_135  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>AI has evolved from abstract theory to a powerful tool in modern medicine. In radiology, deep learning models can now match expert performance in many diagnostic tasks, offering faster, more consistent, and scalable analysis.</p>
<p>Yet for AI to reach its full potential, it must be clinically validated, transparent, ethically sound, and regulated. Progress is promising, but continued collaboration between doctors, scientists, engineers, and policymakers is essential.</p>
<p>The future of AI in medical imaging isn’t just coming &#8211; it’s already here. And its potential to reshape diagnostics, improve outcomes, and personalize care is only just beginning to be realized.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_136  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
<div class="et_pb_text_inner"><p>Resources:</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_137  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Avanzo M., Stancanello J., Pirrone G., Drigo A., Retico A., The evolution of artificial intelligence in medical imaging: from computer science to machine and deep learning, Cancers, 2024, <a href="https://www.mdpi.com/2072-6694/16/21/3702" target="_blank" rel="noopener">https://www.mdpi.com/2072-6694/16/21/3702</a>.</p></div>
			</div>
			</div>
				
				
				
				
			</div>
				
				
			</div>
		</p></div>
</p></div>
<p>The post <a href="https://graylight-imaging.com/blog/artificial-intelligence-in-medical-imaging-from-algorithm-to-diagnosis/">Artificial intelligence in medical imaging: from algorithm to diagnosis</a> appeared first on <a href="https://graylight-imaging.com">Graylight Imaging</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Traditional vs AI models to predict hypertension risk</title>
		<link>https://graylight-imaging.com/blog/traditional-vs-ai-models-to-predict-hypertension-risk/</link>
		
		<dc:creator><![CDATA[Ewa Gunter]]></dc:creator>
		<pubDate>Fri, 25 Jul 2025 08:21:09 +0000</pubDate>
				<category><![CDATA[All]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Custom algorithm development]]></category>
		<category><![CDATA[Machine learning in healthcare]]></category>
		<category><![CDATA[Predictive algorithms]]></category>
		<guid isPermaLink="false">https://graylight-imaging.com/?p=283836</guid>

					<description><![CDATA[<p>Let’s take a closer look at the most important aspects related to the use of artificial intelligence in this field.</p>
<p>The post <a href="https://graylight-imaging.com/blog/traditional-vs-ai-models-to-predict-hypertension-risk/">Traditional vs AI models to predict hypertension risk</a> appeared first on <a href="https://graylight-imaging.com">Graylight Imaging</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="et-l et-l--post">
			<div class="et_builder_inner_content et_pb_gutters3">
		
<div class="et_pb_section et_pb_section_8 et_section_regular et_section_transparent" >
				
				
				
				
				
				
				<div class="et_pb_row et_pb_row_8">
				<div class="et_pb_column et_pb_column_4_4 et_pb_column_8  et_pb_css_mix_blend_mode_passthrough et-last-child">
				
				
				
				
				<div class="et_pb_module et_pb_text et_pb_text_138  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
<div class="et_pb_text_inner"><p>High blood pressure is a serious health problem around the world, leading to many heart-related illnesses and deaths. The usual ways of predicting who’s at risk often rely on general stats and common risk factors, but they don’t always consider individual differences.</p>
<p>Jakub Nalepa, <strong>Machine Learning Architect at Graylight Imaging, is a co-author of a publication focused on <em>Artificial Intelligence and Digital Twins for the Personalized Prediction of Hypertension Risk</em>.</strong> The review explores how AI and ML can enhance hypertension risk prediction by integrating diverse data sources, including clinical records, lifestyle factors, and genetic information. It brings together current approaches, identifies key limitations, and highlights the potential of AI-driven, personalized strategies for hypertension prevention and management, while emphasizing the importance of reproducibility and transparency for successful clinical adoption.</p>
<p>Let’s take a closer look at the most important aspects related to the use of artificial intelligence in this field.</p></div>
			</div><div class="et_pb_module et_pb_heading et_pb_heading_0 et_pb_bg_layout_">
				
				
				
				
				<div class="et_pb_heading_container"><h2 class="et_pb_module_heading">Traditional models fall short in providing individualised hypertension risk</h2></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_139  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Traditional statistical approaches involve the use of clinically relevant variables to explain associations. This is intended to aid in understanding underlying biological and pathophysiological mechanisms. However, these variables don’t always reflect the full picture, missing key influences like genetics, environment, and other personal factors. As a result, these approaches can miss important drivers of the disease, offering only a partial view of how hypertension develops and progresses.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_140  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p><strong>Table 1. Common Statistical Techniques Used for Hypertension Risk Estimation</strong></p>
<figure class="wp-block-table">
<table class="has-fixed-layout">
<tbody>
<tr>
<td><strong>Method</strong></td>
<td><strong>Application</strong></td>
</tr>
<tr>
<td><strong>Logistic Regression</strong></td>
<td>Development of a screening tool for hypertension</td>
</tr>
<tr>
<td><strong>Logistic Regression</strong></td>
<td>Creation of a hypertension risk calculator</td>
</tr>
<tr>
<td><strong>Logistic Regression</strong></td>
<td>Prediction of incident hypertension over an 8-year period in women</td>
</tr>
<tr>
<td><strong>Linear &amp; Logistic Regression</strong></td>
<td>Analysis of genetic risk scores in relation to blood pressure changes and hypertension incidence</td>
</tr>
<tr>
<td><strong>Cox Proportional Hazards Model</strong></td>
<td>Estimation of the risk of developing hypertension</td>
</tr>
<tr>
<td><strong>Framingham Hypertension Risk Score</strong></td>
<td>Commonly used tool for hypertension risk assessment</td>
</tr>
<tr>
<td><strong>Recalibration/Validation Studies</strong></td>
<td>Adaptation of the Framingham score for use in diverse populations</td>
</tr>
</tbody>
</table>
</figure></div>
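Most tools in Table 1 share the same core mechanism: a logistic model maps risk factors to a probability through a linear combination and a sigmoid. A minimal sketch — the coefficients below are invented for illustration; real calculators such as the Framingham score publish their own fitted values:

```python
# Hypothetical logistic-regression risk calculator. The coefficients are
# NOT from any published tool; they only illustrate the mechanism: a
# linear combination of risk factors (log-odds) passed through a sigmoid.

import math

def hypertension_risk(age, bmi, systolic_bp, parental_history):
    intercept = -22.0                     # hypothetical fitted values
    logit = (intercept
             + 0.08 * age
             + 0.10 * bmi
             + 0.10 * systolic_bp
             + 0.70 * parental_history)   # 1 if a parent was hypertensive
    return 1.0 / (1.0 + math.exp(-logit))  # sigmoid -> probability in (0, 1)

low  = hypertension_risk(age=30, bmi=22, systolic_bp=110, parental_history=0)
high = hypertension_risk(age=60, bmi=31, systolic_bp=138, parental_history=1)
assert 0.0 < low < high < 1.0   # more risk factors -> higher predicted risk
```

Because each coefficient is a fixed log-odds increment, these models are easy to interpret — and, as the text notes, equally easy to leave blind to interactions and factors they were never given.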
			</div><div class="et_pb_module et_pb_heading et_pb_heading_1 et_pb_bg_layout_">
				
				
				
				
				<div class="et_pb_heading_container"><h2 class="et_pb_module_heading">Advantages of AI/ML models compared to traditional statistical approaches</h2></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_141  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Compared to traditional statistical models, AI and machine learning approaches are more flexible and scalable, and they don’t depend as heavily on assumptions like normality, linear relationships, or equal variances. These models are typically validated using separate datasets during development, and they’re well-suited to identifying which variables play the most important role in predicting risk.</p>
<p>AI and machine learning techniques have also shown superiority over traditional statistical methods by reducing bias, automatically handling missing data, controlling for confounding factors, and managing imbalanced datasets—critical elements for building accurate models. For example, Wu et al. used extreme gradient boosting, an AI/ML technique, to predict clinical outcomes in young hypertensive patients (aged 14–39) and achieved higher concordance scores compared to traditional models like the Cox proportional hazards regression and the recalibrated Framingham risk score. Similarly, a study from Japan used an ensemble AI/ML approach to predict new cases of hypertension, outperforming a regression-based classification model.</p></div>
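The "concordance scores" used to compare these models can be computed with a few lines of code. The sketch below is a deliberately simplified version — it ignores censoring, which a full survival C-index (as used with Cox models) must handle:

```python
# Simplified concordance index (C-index): over all comparable patient
# pairs, how often does the patient with the earlier event get the higher
# predicted risk? 1.0 = perfect ranking, 0.5 = random. Censoring is NOT
# handled here, unlike a full survival-analysis C-index.

def concordance_index(risk_scores, event_times):
    concordant, ties, comparable = 0, 0, 0
    n = len(risk_scores)
    for i in range(n):
        for j in range(i + 1, n):
            if event_times[i] == event_times[j]:
                continue                          # not a comparable pair
            comparable += 1
            # the earlier event should carry the higher predicted risk
            earlier, later = (i, j) if event_times[i] < event_times[j] else (j, i)
            if risk_scores[earlier] > risk_scores[later]:
                concordant += 1
            elif risk_scores[earlier] == risk_scores[later]:
                ties += 1
    return (concordant + 0.5 * ties) / comparable

# Perfectly ranked risks -> C-index of 1.0
assert concordance_index([0.9, 0.6, 0.2], [1.0, 3.0, 8.0]) == 1.0
```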
			</div><div class="et_pb_module et_pb_text et_pb_text_142  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p><strong>Table 2.</strong> <strong>AI-Based Methods Applied to Hypertension Studies</strong></p>
<figure class="wp-block-table">
<table class="has-fixed-layout">
<tbody>
<tr>
<td><strong>Study</strong></td>
<td><strong>AI/ML Technique</strong></td>
<td><strong>Population</strong></td>
<td><strong>Outcome</strong></td>
<td><strong>Performance (Compared to Traditional Methods)</strong></td>
</tr>
<tr>
<td><strong>Wu et al.</strong></td>
<td>Extreme Gradient Boosting (AI/ML)</td>
<td>Young hypertensive patients (14–39 years)</td>
<td>Predicting clinical outcomes</td>
<td>Higher concordance than Cox regression and recalibrated Framingham score</td>
</tr>
<tr>
<td><strong>A study in Japan (2020)</strong></td>
<td>Ensemble AI/ML approach</td>
<td>General population</td>
<td>Predicting new onset of hypertension</td>
<td>Outperformed regression-based classification model</td>
</tr>
</tbody>
</table>
</figure></div>
			</div><div class="et_pb_module et_pb_heading et_pb_heading_2 et_pb_bg_layout_">
				
				
				
				
				<div class="et_pb_heading_container"><h2 class="et_pb_module_heading">The need for new methods in personalized hypertension risk prediction</h2></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_143  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>The publication highlighted several weaknesses in current methods for personalized hypertension risk prediction, especially regarding the quality of data sources and the clarity of model explanations. These challenges emphasize the need for more advanced techniques capable of processing complex, unstructured data. In this context, the results show that tree-based ensemble models are the most commonly used across studies.</p>
<p><!-- divi:paragraph -->While traditional AI and machine learning algorithms remain the foundation of data-driven approaches, deep learning models with higher capacity are still rarely applied. These deep learning methods offer the ability to automatically learn data representations, which is especially valuable when working with unstructured clinical data where manual feature engineering often falls short. Using deep learning could provide deeper insights and enhance predictive accuracy in this area.</p>
<p><!-- divi:paragraph -->Deep learning’s strength lies in its ability to handle complex and varied data types, making it especially well-suited for analyzing the intricate and diverse nature of genomic data, which can drive advances in precision medicine. However, its use with genomic data is limited by the need for large, high-quality datasets that are often hard to obtain. Similarly, deep learning models can accurately identify accelerometer wear-sites and activity intensity directly from raw data, helping to overcome challenges related to where wearables are placed. Yet, these approaches depend on strong models and consistent input, and factors like inconsistent device use and participant compliance create additional hurdles. This variability makes it challenging to reliably apply these methods in physical activity studies, which in turn affects their effectiveness for predicting hypertension risk.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_144  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p><strong>Key strengths and challenges of applying deep learning to genomic data and physical activity research in hypertension prediction include:</strong></p>
<ul>
<li>Deep learning excels at processing complex and diverse data types, making it ideal for analyzing the intricate and varied nature of genomic data, which supports advances in precision medicine.</li>
<li>Its application to genomic data is limited by the requirement for large, high-quality datasets, which are often difficult to obtain.</li>
<li>Deep learning can accurately classify accelerometer wear-sites and activity intensity directly from raw acceleration data, helping to overcome challenges related to wearable placement.</li>
<li>These methods rely heavily on robust models and consistent input data for reliable performance.</li>
<li>Variability in device usage and participant adherence introduces challenges, complicating the effective use of deep learning approaches in physical activity research.</li>
<li>Such inconsistencies impact the reliability of these methods when applied to hypertension risk prediction.</li>
</ul></div>
			</div><div class="et_pb_module et_pb_heading et_pb_heading_3 et_pb_bg_layout_">
				
				
				
				
				<div class="et_pb_heading_container"><h3 class="et_pb_module_heading">Summary: Establishing Essential Quality Standards for AI in Medical Research</h3></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_145  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
<div class="et_pb_text_inner"><p>The successful integration of AI/ML into medical data analysis, particularly in hypertension management, depends on meeting five key quality criteria: reproducibility, clear intended use, rigorous validation, adequate sample size, and openness of data and software. While recent studies show progress in including larger patient samples for validation, many lack transparency, making their results difficult to reproduce or compare. To advance the field and ensure clinical relevance, researchers, authors, and reviewers must commit to these standards, promoting thorough reporting, clear model purpose, strong validation, sufficient data, and open access to code and datasets. This commitment will improve the quality and impact of AI-driven hypertension prediction and disease management.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_146  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Source: <em>Artificial intelligence and digital twins for the personalised prediction of hypertension risk</em>, <a href="https://doi.org/10.1016/j.compbiomed.2025.110718">https://doi.org/10.1016/j.compbiomed.2025.110718</a></p></div>
			</div>
			</div>
				
				
				
				
			</div>
				
				
			</div>

		</div>
	</div>
	<p>The post <a href="https://graylight-imaging.com/blog/traditional-vs-ai-models-to-predict-hypertension-risk/">Traditional vs AI models to predict hypertension risk</a> appeared first on <a href="https://graylight-imaging.com">Graylight Imaging</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Machine Learning Experiment Tracking for Medical Software Development</title>
		<link>https://graylight-imaging.com/blog/machine-learning-experiment-tracking-for-medical-software-development/</link>
		
		<dc:creator><![CDATA[Wojciech Malara]]></dc:creator>
		<pubDate>Fri, 04 Jul 2025 12:04:02 +0000</pubDate>
				<category><![CDATA[All]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Medical software development]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Machine learning in healthcare]]></category>
		<category><![CDATA[Medical AI]]></category>
		<guid isPermaLink="false">https://graylight-imaging.com/?p=283756</guid>

					<description><![CDATA[<p>This article covers the basics of experiment tracking, what to log for compliance, popular tools, and how our solution supports regulated environments like medical software.</p>
<p>The post <a href="https://graylight-imaging.com/blog/machine-learning-experiment-tracking-for-medical-software-development/">Machine Learning Experiment Tracking for Medical Software Development</a> appeared first on <a href="https://graylight-imaging.com">Graylight Imaging</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="et-l et-l--post">
<div class="et_builder_inner_content et_pb_gutters3">
		<div class="et_pb_section et_pb_section_9 et_section_regular et_section_transparent" >
				
				
				
				
				
				
				<div class="et_pb_row et_pb_row_9">
				<div class="et_pb_column et_pb_column_4_4 et_pb_column_9  et_pb_css_mix_blend_mode_passthrough et-last-child">
				
				
				
				
				<div class="et_pb_module et_pb_text et_pb_text_147  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Developing a machine learning (ML) model is rarely a one-time task that can be designed up front and then simply executed to yield the optimal solution. Too many factors affect how a model performs after training to choose the best ones a priori: the data used, the algorithm or model architecture, the hyperparameters, the training settings, and many others. In fact, finding the right combination of these factors through experimentation is an inherent part of developing a machine learning model. Managing these experiments, however, is a challenge in itself, and the process of doing so systematically is called <strong>experiment tracking</strong>.</p></div>
			</div><div class="et_pb_module et_pb_heading et_pb_heading_4 et_pb_bg_layout_">
				
				
				
				
				<div class="et_pb_heading_container"><h2 class="et_pb_module_heading">What exactly is experiment tracking?</h2></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_148  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p><!-- /divi:heading --></p>
<p><!-- divi:paragraph -->Experiment tracking is the process of systematically recording and managing all aspects of machine learning experiments, including the data used, model configurations, training parameters, evaluation metrics, and resulting models. This process enables data scientists and ML engineers to compare different runs, identify what changes lead to performance improvements or regressions, and maintain a clear lineage of model development. By capturing metadata such as code version, hyperparameters, dataset version, and outputs, experiment tracking ensures that results are reproducible and transparent.</p>
<p><!-- /divi:paragraph --></p>
<p><!-- divi:paragraph -->In collaborative or regulated environments—such as <a href="https://graylight-imaging.com/services/medical-software-development/">medical software development</a>—experiment tracking becomes essential. It supports accountability, facilitates auditing, and aligns with standards that require traceability and documentation. There are multiple tools that help automate this tracking, integrating with model training workflows to store and visualize experiments in real time. Ultimately, experiment tracking enhances productivity, model reliability, and regulatory compliance by making the ML development process more organized and measurable.</p></div>
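To make this concrete, here is a minimal sketch in plain Python of what an experiment tracking system records and enables. All names are illustrative only; a real project would use a dedicated tool such as MLflow rather than anything this simple.

```python
import hashlib
import time

class ExperimentTracker:
    """Toy illustration of experiment tracking: each run records
    parameters, metrics, and metadata so runs stay comparable
    and reproducible."""

    def __init__(self):
        self.runs = []

    def log_run(self, name, params, metrics, dataset_version, code_version):
        # Capture the metadata that makes a run reproducible:
        # hyperparameters, metrics, dataset version, code version, time.
        run = {
            "run_id": hashlib.sha1(f"{name}-{len(self.runs)}".encode()).hexdigest()[:8],
            "name": name,
            "params": params,                    # e.g. hyperparameters
            "metrics": metrics,                  # e.g. evaluation scores
            "dataset_version": dataset_version,
            "code_version": code_version,        # e.g. a Git commit hash
            "timestamp": time.time(),
        }
        self.runs.append(run)
        return run["run_id"]

    def best_run(self, metric):
        # Comparing runs by a chosen metric is a core benefit of tracking.
        return max(self.runs, key=lambda r: r["metrics"][metric])

tracker = ExperimentTracker()
tracker.log_run("baseline", {"lr": 1e-3}, {"dice": 0.81}, "ds-v1", "abc123")
tracker.log_run("tuned", {"lr": 3e-4}, {"dice": 0.86}, "ds-v1", "abc124")
print(tracker.best_run("dice")["name"])  # → "tuned"
```

Because every run carries its dataset and code versions, the answer to "which change improved the score?" can be read directly from the logged records instead of being reconstructed from memory.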
			</div><div class="et_pb_module et_pb_heading et_pb_heading_5 et_pb_bg_layout_">
				
				
				
				
				<div class="et_pb_heading_container"><h2 class="et_pb_module_heading">Medical software regulatory compliance</h2></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_149  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p><!-- /divi:heading --></p>
<p><!-- divi:paragraph -->ML experiment tracking plays a crucial role in ensuring that medical ML systems meet the stringent regulatory requirements imposed by authorities such as the European Commission (MDR, CE Marking) and the U.S. FDA (CFR 21, GMLP), and by standards such as ISO 13485, ISO 14971, and IEC 62304.</p>
<p><!-- /divi:paragraph --></p>
<p><!-- divi:paragraph --><strong>The following table contains a short summary of the benefits of using an ML experiment tracking system in the context of regulatory requirements for medical software.</strong></p>
<p><!-- /divi:paragraph --></p>
<p><!-- divi:table --></p>
<figure class="wp-block-table">
<table class="has-fixed-layout">
<tbody>
<tr>
<td><strong>Regulatory Requirement</strong></td>
<td><strong>How Experiment Tracking Helps</strong></td>
</tr>
<tr>
<td>Design History Files (CFR 21, ISO 13485, IEC 62304)</td>
<td>Logs every experiment, dataset, and model for traceability</td>
</tr>
<tr>
<td>Risk Management (ISO 14971)</td>
<td>Tracks performance variations, failure cases</td>
</tr>
<tr>
<td>Audit Trails (CFR 21, IEC 62304)</td>
<td>Maintains immutable records of model development</td>
</tr>
<tr>
<td>Software Verification (MDR, IEC 62304)</td>
<td>Logs test metrics and validation against clinical benchmarks</td>
</tr>
<tr>
<td>Reproducibility (GMLP)</td>
<td>Tracks environment, hyperparameters, and random seeds</td>
</tr>
</tbody>
</table>
</figure>
<p><!-- /divi:table --></p>
<p><!-- divi:paragraph --><strong>Using a validated ML experiment tracking system and integrating it into the Quality Management System (QMS) can streamline compliance efforts.</strong></p></div>
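One way to illustrate the "immutable audit trail" row above is a hash-chained, append-only log: each entry's hash covers the previous entry, so any later modification of the history is detectable. This is a toy sketch with illustrative names only; real QMS integrations rely on a validated tracking system, not hand-rolled code like this.

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry's hash chains to the previous
    entry, making after-the-fact tampering detectable."""

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(event, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev_hash, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        # Re-derive every hash from the chain start; any edited entry
        # (or broken link) makes verification fail.
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps(e["event"], sort_keys=True)
            if e["prev"] != prev:
                return False
            if e["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append({"run": "exp-001", "action": "trained", "metric": 0.86})
log.append({"run": "exp-001", "action": "validated", "metric": 0.84})
print(log.verify())  # True while the log is untampered
```

The same idea, at production quality, is what lets a tracking system serve as evidence during an audit: the recorded history can be shown not to have changed since it was written.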
			</div><div class="et_pb_module et_pb_heading et_pb_heading_6 et_pb_bg_layout_">
				
				
				
				
				<div class="et_pb_heading_container"><h2 class="et_pb_module_heading">What should be logged?</h2></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_150  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>The complete list of things that should be logged using the experiment tracking system depends on the requirements for the particular model (e.g. its purpose) and the company profile. However, some categories of information are worth recording in most situations:</p>
<ul>
<li>experiment ID, name, description,</li>
<li>experiment type,</li>
<li>timestamps for both the beginning and end of the experiment,</li>
<li>data used, e.g. dataset ID, ground-truth version, ID of the data preprocessing experiment,</li>
<li>model configurations, training parameters, evaluation metrics,</li>
<li>author,</li>
<li>source code version, including possible uncommitted changes,</li>
<li>environment, e.g. list of Python packages, machine name, OS, CPU, GPU,</li>
<li>outputs or artifacts created as results of the experiment (e.g. model files, preprocessed data, evaluation metric values).</li>
</ul>
<p>It is worth adding that in most experiment tracking tools some outputs are stored as files (attached or referenced), while others, like metric values, can be added directly to the run data, so that they can be conveniently compared across multiple experiments.</p></div>
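The categories above can be sketched as a single run record. This is an illustrative data structure only, with hypothetical field names, not the schema of any particular tracking tool:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class RunRecord:
    """Illustrative run record covering the categories listed above."""
    experiment_id: str
    name: str
    experiment_type: str                  # e.g. "training", "evaluation"
    author: str
    dataset_id: str
    ground_truth_version: str
    code_version: str                     # e.g. a Git commit hash
    params: dict = field(default_factory=dict)     # model config, training parameters
    metrics: dict = field(default_factory=dict)    # kept as run data, comparable across runs
    artifacts: list = field(default_factory=list)  # files, stored or referenced
    environment: dict = field(default_factory=dict)  # packages, OS, CPU, GPU, ...
    started_at: str = ""
    finished_at: str = ""

record = RunRecord(
    experiment_id="exp-042",
    name="unet-training",
    experiment_type="training",
    author="wm",
    dataset_id="ds-007",
    ground_truth_version="gt-v3",
    code_version="1f3a9c0",
    params={"lr": 1e-4, "epochs": 50},
    metrics={"val_dice": 0.87},
    artifacts=["models/unet.pt"],
    started_at=datetime.now(timezone.utc).isoformat(),
)
print(asdict(record)["metrics"])  # → {'val_dice': 0.87}
```

Keeping metrics as structured run data (rather than buried in artifact files) is what makes cross-run comparison cheap later on.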
			</div><div class="et_pb_module et_pb_heading et_pb_heading_7 et_pb_bg_layout_">
				
				
				
				
				<div class="et_pb_heading_container"><h2 class="et_pb_module_heading">Experiment tracking tools</h2></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_151  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Manual experiment tracking is possible; however, multiple tools conveniently automate this process, making it much more lightweight and transparent. Most of them share a core set of features essential for machine learning development workflows, including:</p>
<ul>
<li>parameter logging</li>
<li>metric tracking</li>
<li>artifact logging</li>
<li>experiment versioning</li>
<li>visual dashboards</li>
<li>comparison of experiments</li>
<li>tags, notes and metadata</li>
<li>integration with ML frameworks</li>
<li>team collaboration</li>
<li>model registry</li>
<li>API and CLI access</li>
</ul></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_152  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p><!-- divi:table --></p>
<p><strong>The table below describes some popular experiment tracking tools widely used in machine learning, with their distinctive features.</strong></p>
<figure class="wp-block-table">
<table class="has-fixed-layout">
<tbody>
<tr>
<td style="width: 108.583px;"><strong>Name</strong></td>
<td style="width: 423px;"><strong>Key distinctive features</strong></td>
</tr>
<tr>
<td style="width: 108.583px;"><strong>MLflow</strong></td>
<td style="width: 423px;">
<ul>
<li>Open-source (Apache 2.0 license), free to use, widely adopted in research and industry.</li>
<li>Modular design with components: Experiment Tracking, Model Packaging, Model Registry, Serving, Evaluation, Observability.</li>
<li>Supports logging artifacts (e.g., model binaries, plots) and metrics locally or remotely.</li>
</ul>
</td>
</tr>
<tr>
<td style="width: 108.583px;"><strong>Weights &amp; Biases (W&amp;B)</strong></td>
<td style="width: 423px;">
<ul>
<li>Proprietary, includes a free tier.</li>
<li>Real-time, interactive dashboards for monitoring training.</li>
<li>Sweeps for hyperparameter tuning.</li>
<li>Logs code diffs, environment, hardware usage (e.g., GPU stats).</li>
<li>Team collaboration features: comment threads, reports, project-level visibility.</li>
<li>Enterprise version offers SOC 2, HIPAA, and GDPR compliance.</li>
</ul>
</td>
</tr>
<tr>
<td style="width: 108.583px;"><strong>Comet</strong></td>
<td style="width: 423px;">
<ul>
<li>Proprietary, includes a free tier.</li>
<li>Supports tracking for notebooks, scripts, and experiments across many ML frameworks.</li>
<li>Integrated reports for sharing results with stakeholders.</li>
<li>Tracks system metrics, Git commits, data versions, and CLI usage.</li>
<li>Offers compliance features for healthcare and finance.</li>
</ul>
</td>
</tr>
<tr>
<td style="width: 108.583px;"><strong>DVC (Data Version Control)</strong></td>
<td style="width: 423px;">
<ul>
<li>Open-source (Apache 2.0 license), free to use.</li>
<li>Git-like versioning for datasets, models, and pipelines.</li>
<li>Supports experiment branching.</li>
<li>Integrates tightly with Git for full data-code traceability.</li>
</ul>
</td>
</tr>
<tr>
<td style="width: 108.583px;"><strong>Neptune.ai</strong></td>
<td style="width: 423px;">
<ul>
<li>Proprietary, includes a free tier.</li>
<li>Clean UI for metadata management at scale.</li>
<li>Enterprise-ready: supports RBAC, audit trails, and HIPAA/GDPR compliance.</li>
</ul>
</td>
</tr>
<tr>
<td style="width: 108.583px;"><strong>TensorBoard</strong></td>
<td style="width: 423px;">
<ul>
<li>Open-source (Apache 2.0 license), free to use.</li>
<li>Native integration with TensorFlow (limited with other frameworks).</li>
<li>Simple UI for visualizing scalars, histograms, distributions, and model graphs.</li>
<li>Supports embedding visualizations (e.g., t-SNE).</li>
<li>Lightweight, great for local debugging.</li>
</ul>
</td>
</tr>
</tbody>
</table>
</figure>
<p><!-- /divi:table --></p></div>
			</div><div class="et_pb_module et_pb_heading et_pb_heading_8 et_pb_bg_layout_">
				
				
				
				
				<div class="et_pb_heading_container"><h2 class="et_pb_module_heading">Our experiment tracking solution</h2></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_153  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>At Graylight Imaging, we developed our experiment tracking system many years ago, building on our previous experiences, and as we use it across multiple projects we keep introducing changes that improve its usability. Our solution relies on the MLflow platform at its core, but our custom ML Framework (naming things is hard, as we know) determines exactly what we log and how we log it. What is specific to our solution is that we treat individual stages of processing (data preprocessing, training, inference, evaluation, etc.) as separate but linked experiments. Moreover, we group experiments primarily by dataset version (our dataset versioning system is a whole different topic) and by experiment type (processing, training, inference, evaluation, debug). This improves navigation among many experiments and also improves readability, in particular when comparing multiple runs.</p>
<p>Another key feature of our solution is that runs can use other runs as their inputs. Thanks to this, it is possible to re-create the full tree of dependencies leading to a given result. The picture below shows a simplified example of how evaluation metric values can be traced back to each step that contributed to them. Note that the direction of the arrows represents dependency, not order.</p>
<p><!-- divi:image {"id":283757,"sizeSlug":"full","linkDestination":"none"} --></p>
<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="486" height="814" src="https://www.graylight-imaging.com/wp-content/uploads/2025/07/image.png" alt="" class="wp-image-283757" srcset="https://www.graylight-imaging.com/wp-content/uploads/2025/07/image.png 486w, https://www.graylight-imaging.com/wp-content/uploads/2025/07/image-179x300.png 179w, https://graylight-imaging.com/wp-content/uploads/2025/07/image-72x120.png 72w" sizes="(max-width: 486px) 100vw, 486px" /></figure>
<p><strong>This solution with fine-grained experiments works well for us, as it helps in keeping track of each stage of the ML model training pipeline and makes the results reusable.</strong></p></div>
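The lineage idea above can be sketched in a few lines of plain Python, assuming each run stores the IDs of the runs it used as inputs (the run names and structure here are made up for illustration). Walking these links backwards re-creates the full dependency tree behind a result:

```python
# Illustrative run store: each run lists the runs it consumed as inputs.
runs = {
    "prep-1":  {"type": "processing", "inputs": []},
    "train-1": {"type": "training",   "inputs": ["prep-1"]},
    "infer-1": {"type": "inference",  "inputs": ["train-1", "prep-1"]},
    "eval-1":  {"type": "evaluation", "inputs": ["infer-1"]},
}

def lineage(run_id, runs):
    """Return every run a given result depends on, directly or transitively."""
    seen = set()
    stack = [run_id]
    while stack:
        current = stack.pop()
        for parent in runs[current]["inputs"]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# Trace an evaluation result back through inference, training, and preprocessing.
print(sorted(lineage("eval-1", runs)))  # → ['infer-1', 'prep-1', 'train-1']
```

Because the links point from a run to its inputs, the traversal follows dependencies rather than chronological order, matching the arrow direction in the picture above.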
			</div><div class="et_pb_module et_pb_heading et_pb_heading_9 et_pb_bg_layout_">
				
				
				
				
				<div class="et_pb_heading_container"><h2 class="et_pb_module_heading">Summary</h2></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_154  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>Experiment tracking is a crucial practice in machine learning that involves recording parameters, metrics, code versions, and outputs to ensure reproducibility, transparency, and efficiency. In medical software development, it supports regulatory compliance by providing traceable model development and audit-ready documentation. There are multiple tools for experiment tracking that offer useful features such as parameter logging, artifact tracking, and experiment comparison to streamline collaborative and compliant ML workflows.</p></div>
			</div><div class="et_pb_module et_pb_text et_pb_text_155  et_pb_text_align_left et_pb_bg_layout_light">
				
				
				
				
				<div class="et_pb_text_inner"><p>References</p>
<p>1. Food and Drug Administration, Health Canada, United Kingdom’s Medicines and Healthcare products Regulatory Agency. (2021). Good machine learning practice for medical device development: Guiding principles. https://www.fda.gov/medical-devices/software-medical-device-samd/good-machine-learning-practice-medical-device-development-guiding-principles</p>
<p>2. Food and Drug Administration (FDA). (2023). Title 21 Code of Federal Regulations (21 CFR), Parts 820, 11, and others. U.S. Government Publishing Office. https://www.ecfr.gov/current/title-21</p>
<p>3. European Parliament and Council. (2017). Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices (EU MDR). Official Journal of the European Union, L 117, 1–175. https://eur-lex.europa.eu/eli/reg/2017/745/oj</p>
<p>4. International Organization for Standardization (ISO). (2016). ISO 13485:2016 – Medical devices — Quality management systems — Requirements for regulatory purposes. https://www.iso.org/standard/59752.html</p>
<p>5. International Organization for Standardization (ISO). (2019). ISO 14971:2019 – Medical devices — Application of risk management to medical devices. https://www.iso.org/standard/72704.html</p>
<p>6. International Electrotechnical Commission (IEC). (2006). IEC 62304:2006 – Medical device software — Software life cycle processes. https://www.iso.org/standard/38421.html</p>
<p>7. MLflow documentation. https://mlflow.org</p>
<p>8. Weights &amp; Biases documentation. https://docs.wandb.ai</p>
<p>9. Comet documentation. https://www.comet.com</p>
<p>10. DVC documentation. https://dvc.org</p>
<p>11. Neptune.ai documentation. https://neptune.ai</p>
<p>12. TensorBoard: Visualizing learning. https://www.tensorflow.org/tensorboard</p></div>
			</div>
			</div>
				
				
				
				
			</div>
				
				
			</div>
		</p></div>
</p></div>
<p>The post <a href="https://graylight-imaging.com/blog/machine-learning-experiment-tracking-for-medical-software-development/">Machine Learning Experiment Tracking for Medical Software Development</a> appeared first on <a href="https://graylight-imaging.com">Graylight Imaging</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
