Artificial Radiologist – the difference between machine learning and artificial intelligence in healthcare
By: Krzysztof Kotowski
The difference between machine learning (ML) and artificial intelligence (AI) in healthcare is frequently misunderstood. The vision of AI promoted in science fiction and marketing slogans inflates expectations about current ML algorithms. We hear about AI beating humans in complicated games like Go [1] or StarCraft [2]; about AI driving autonomous vehicles; about artistic AI composing music [3] and poems [4]; or, recently, about humanoid AI-driven Tesla Bots [5]. All these applications are great ML achievements, but they are nowhere near being “intelligent”, and ML does not yet deserve to be called AI. I would like to discuss this issue by presenting a hypothetical story of the “artificial radiologist” DrEddy and the human radiologist Adam, working in the same hospital in 2022. The story is based on several years of my cooperation with machine learning and radiology experts in Graylight Imaging (formerly Future Processing Healthcare) projects like Sens.AI (sensai.eu) and Cardio4D (cardio4d.pl).
Beyond any doubt, we are already technically capable of creating DrEddy using state-of-the-art ML algorithms. ML is a technique in which a computer is provided with a massive amount of data (e.g., thousands of medical imaging reports) and finds patterns connected with specific tasks (e.g., detection of different diseases) during a process called training. At this point, DrEddy resembles a young resident radiologist who gains experience by watching scans and being guided by a senior radiologist. However, it takes DrEddy days instead of years to master virtually any disease of any organ using any imaging modality (MRI, CT, USG, X-ray, etc.). Additionally, we have evidence that DrEddy provides similar accuracy in diagnosis [6] or even outperforms Adam in tasks like brain tumor segmentation [7], [8], lung cancer screening [9], or predicting patient outcomes [10]. DrEddy is CE and FDA certified, works 24/7 without complaints, and gives faster and more repeatable diagnoses than Adam. DrEddy has a powerful language module based on GPT-3 [11], so it is able to analyze patient documentation, write amazingly precise screening reports, and establish simple communication with patients and other physicians. Some people in the hospital management start to question the need to employ Adam, and they ask him for any reason to keep him on board. Fortunately, Adam knows the difference between AI and ML and gives them a convincing list of four reasons.
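The training process described above can be sketched with a toy example. The code below is an illustrative stand-in, not DrEddy's actual pipeline: it fits a simple logistic-regression "pattern finder" by gradient descent on two synthetic groups of feature vectors (imagine a couple of numeric features extracted from scans of healthy and diseased patients).

```python
import numpy as np

# Synthetic stand-ins for image-derived features of two patient groups.
rng = np.random.default_rng(0)
healthy = rng.normal(loc=-1.0, scale=1.0, size=(200, 2))
disease = rng.normal(loc=+1.0, scale=1.0, size=(200, 2))
X = np.vstack([healthy, disease])
y = np.array([0] * 200 + [1] * 200)

# "Training": gradient descent on the log-loss of a logistic regression,
# which gradually finds the pattern separating the two groups.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of disease
    grad_w = X.T @ (p - y) / len(y)          # gradient of the loss w.r.t. weights
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

accuracy = np.mean(((X @ w + b) > 0) == y)
print(f"training accuracy: {accuracy:.2f}")
```

A real system replaces the two-number features with full 3D scans and the logistic regression with a deep network, but the loop — predict, measure the error, nudge the parameters — is the same.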
The true understanding of artificial intelligence in healthcare
The first, and perhaps the most crucial, limitation of DrEddy is that it does not understand the essence of the disease it is diagnosing. It is trained on a big but limited number of imaging samples from a specific group of patients. This makes DrEddy helpless when diagnosing rare or atypical cases. The complexities of human anatomy are infinite, and there will always be cases that do not fit DrEddy's patterns. The human radiologist has a true understanding of the living organism behind the images, while machine learning (today's "artificial intelligence in healthcare") is a mindless pattern comparator.
Second, there are cases in radiology that are not well-defined. Sometimes it is necessary to perform several different imaging procedures to find the source of the problem, and there is no algorithm to follow. It is neither feasible nor safe for the patient to perform all possible scans, so the radiologist has to follow intuition, a concept unreachable for DrEddy.
Third, different patients have different levels of emotional resilience and different mental preparation for medical treatment. A good human radiologist can assess this during the interview and adjust both the diagnostic procedures and the way they inform the patient about the results of the examination, improving the patient's psychological comfort. Without understanding the patient's emotions, DrEddy cannot tell whether someone has claustrophobia, autism, or depression, or is simply scared of the procedure.
Artificial intelligence in healthcare requires responsibility and self-awareness
Finally, the legal and ethical aspects of DrEddy's work, and of artificial intelligence in healthcare in general, need to be considered. At Future Processing Healthcare, we verify the results of our models with a group of experienced radiologists in the Mean Opinion Score procedure (as described here). There are always some cases where the experts do not agree about the diagnosis, so there is a high chance that DrEddy also won't give a perfect diagnosis every time. Who would be responsible for a medical error caused by DrEddy? The programmers? It may be very hard to resolve. What if it turns out that the error originated in the data prepared by DrEddy's "teacher"? No matter the answer, DrEddy itself cannot be penalized in any way besides turning it off. It is not aware of the consequences of its actions, so should we even allow DrEddy to decide about a patient's health and life?
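The Mean Opinion Score mentioned above is simply the average of the experts' ratings of a result. A minimal sketch, with invented ratings, of how expert disagreement shows up not in the mean but in the spread around it:

```python
from statistics import mean, stdev

# Hypothetical expert ratings (1-5 scale) of a model's output on three cases;
# the numbers are invented for illustration.
ratings = {
    "case_A": [5, 5, 4],   # experts largely agree
    "case_B": [2, 5, 3],   # experts disagree: the MOS alone hides this
    "case_C": [4, 4, 4],   # perfect agreement
}

# MOS is the mean rating; the standard deviation exposes disagreement.
results = {case: (mean(s), stdev(s)) for case, s in ratings.items()}

for case, (mos, spread) in results.items():
    print(f"{case}: MOS={mos:.2f}, spread={spread:.2f}")
```

Cases like "case_B", where individual experts give the same result very different scores, are exactly the ones where an automated diagnosis cannot be trusted blindly.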
This list was more than enough for the hospital management to keep Adam. The management and Adam agreed that the best solution is cooperation between Adam and DrEddy. This symbiosis combines the advantages of both human and ML, …at least until Artificial General Intelligence (AGI) is developed. AGI is a term for "strong" AI (with ML counted as "weak" AI) and describes the ability of a machine to simulate consciousness, feelings, and self-awareness. An AGI module would allow DrEddy to think, feel, and create its own list of reasons for firing Adam. Over 40 institutions are researching AGI [12], but it will remain a science-fiction domain for at least several more years. Fortunately, we should say, because scientists such as Stephen Hawking and Stuart Russell have warned that AGI is probably the greatest danger to humanity. Thus, organizations like OpenAI were founded to promote responsible AI development.
[1] D. Silver et al., “Mastering the game of Go with deep neural networks and tree search,” Nature, vol. 529, no. 7587, pp. 484–489, Jan. 2016, doi: 10.1038/nature16961.
[2] O. Vinyals et al., “Grandmaster level in StarCraft II using multi-agent reinforcement learning,” Nature, vol. 575, no. 7782, pp. 350–354, Nov. 2019, doi: 10.1038/s41586-019-1724-z.
[3] “Musicians Are Using AI to Create Otherwise Impossible New Songs,” Time. https://time.com/5774723/ai-music/ (accessed Sep. 26, 2021).
[4] “This AI Poet Mastered Rhythm, Rhyme, and Natural Language to Write Like Shakespeare,” IEEE Spectrum, Apr. 30, 2020. https://spectrum.ieee.org/this-ai-poet-mastered-rhythm-rhyme-and-natural-language-to-write-like-shakespeare (accessed Sep. 26, 2021).
[5] B. Gomez, “Elon Musk warned of a ’Terminator’-like AI apocalypse — now he’s building a Tesla robot,” CNBC, Aug. 24, 2021. https://www.cnbc.com/2021/08/24/elon-musk-warned-of-ai-apocalypsenow-hes-building-a-tesla-robot.html (accessed Sep. 26, 2021).
[6] T. S. Cook, “Human versus machine in medicine: can scientific literature answer the question?,” Lancet Digit. Health, vol. 1, no. 6, pp. e246–e247, Oct. 2019, doi: 10.1016/S2589-7500(19)30124-4.
[7] J. Nalepa et al., “Fully-automated deep learning-powered system for DCE-MRI analysis of brain tumors,” Artif. Intell. Med., vol. 102, p. 101769, Jan. 2020, doi: 10.1016/j.artmed.2019.101769.
[8] J. R. Mitchell et al., “Deep neural network to locate and segment brain tumors outperformed the expert technicians who created the training data,” J. Med. Imaging, vol. 7, no. 5, p. 055501, Oct. 2020, doi: 10.1117/1.JMI.7.5.055501.
[9] D. Ardila et al., “End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography,” Nat. Med., vol. 25, no. 6, pp. 954–961, Jun. 2019, doi: 10.1038/s41591-019-0447-x.
[10] J. Lee, “Is Artificial Intelligence Better Than Human Clinicians in Predicting Patient Outcomes?,” J. Med. Internet Res., vol. 22, no. 8, p. e19918, Aug. 2020, doi: 10.2196/19918.
[11] T. B. Brown et al., “Language Models are Few-Shot Learners,” ArXiv200514165 Cs, Jul. 2020, Accessed: Sep. 26, 2021. [Online]. Available: http://arxiv.org/abs/2005.14165.
[12] S. Baum, “A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy,” Social Science Research Network, Rochester, NY, SSRN Scholarly Paper ID 3070741, Nov. 2017. doi: 10.2139/ssrn.3070741.
Contact us if you have any questions!
Read the previous post: Automated Medical Image Analysis using AI: The Why, The How, and The What