In today's rapidly evolving technological landscape, innovation is outpacing human adaptability to the point that we can barely predict which technologies will exist six months from now. One of the most striking examples of this acceleration is the rise of Large Language Models (LLMs). In November 2022, OpenAI launched ChatGPT, built on GPT-3.5. Over the following 18 months, not only did ChatGPT receive two major upgrades, but users also gained the ability to create their own GPTs through more specific training (fine-tuning).
Meanwhile, other companies have released their own LLMs: Google's Bard, since upgraded to Gemini and Gemini Advanced; Anthropic's Claude; and the open-source models released by Mistral AI. What has dramatically changed the landscape, however, is the conversational interface that ChatGPT popularized. Such an interface shortens the time required for human-computer interaction by transforming a natural-language input into almost any form of output, from text to images to computer code.
This capability allows users to engage with computers more intuitively and efficiently, leveraging AI to perform a wide range of tasks through simple conversational prompts. The pace of adoption illustrates how significant the innovation is: ChatGPT reached 1 million users in just five days and 100 million users in two months, whereas Twitter needed roughly two years and more than five years, respectively, to hit the same milestones.
AI's Impact on Healthcare
Such innovations are already making significant inroads into the healthcare sector, with major companies investing heavily in this field. Google has recently announced Med-PaLM M, an advanced model specifically designed for medical applications. On May 1, 2024, Google published a comprehensive 68-page preprint detailing some of the model's capabilities in healthcare scenarios. This represents a significant step towards integrating AI more deeply into medical practice, aiming to enhance diagnostic accuracy, treatment personalization, and overall patient care.
Other companies are also racing to provide AI-based technology in healthcare. Microsoft has integrated OpenAI's GPT-4 into its Azure cloud services to support healthcare applications, aiming to streamline clinical workflows and improve patient outcomes through advanced AI capabilities.
However, this rapid development and deployment of AI in healthcare comes with real risk. One critical concern is the potential exclusion of key stakeholders — patients and healthcare workers — from the innovation process. As companies focus on capturing market share and accelerating technological advancement, there is a danger that the needs and perspectives of those directly affected by these technologies may be overlooked.
The Overlooked Stakeholders: Patients and Healthcare Workers
Neglecting these stakeholders could lead to technologies oriented towards cost savings or productivity gains rather than fostering a holistic sense of well-being for both patients and healthcare workers. For healthcare workers in particular, innovation could paradoxically detract from their roles or hamper their work in the interest of the patient by prioritizing resource allocation over personalized care. Additionally, AI-based models may entrench inequalities if they fail to adequately account for the needs of minorities.
Moreover, when it comes to human-AI interaction, the adage that “two heads are better than one” does not always hold. AI-based models are trained on extensive datasets and perform complex data processing in ways that are not easily interpretable. This creates a potential asymmetry between AI and human expertise, with the risk of reducing healthcare workers to passive executors of AI recommendations. The broader implications of such an unbalanced dynamic must be examined with caution.
Over time, this imbalance could lead to attenuated clinical reasoning, diminished creative input in the diagnostic-therapeutic journey, demotivation, and an increased reliance on technology. True fulfillment in medical practice stems from engagement with our tasks and surroundings, and this engagement is at risk. By overshadowing the human element in healthcare, AI could inadvertently undermine the essence of medical practice that fosters growth, satisfaction, and effective patient care.
Call to Action: Leading the Charge in Human-Centered AI Innovation
To address these challenges, healthcare workers must take an active role in the AI innovation process. They should not only be participants but leaders in defining priorities and overseeing the development and deployment of AI technologies. This responsibility extends to patients as well, ensuring that their voices and needs are central to the innovation process.
Healthcare workers and patients must champion a human-centered approach to AI innovation. This approach emphasizes the importance of developing AI tools that are designed with the well-being of all stakeholders in mind. Key pillars of digital transformation, such as robust technological infrastructure, high-quality data, and a strong digital culture, are essential for creating AI algorithms that are not only usable but also genuinely useful for healthcare workers.
The establishment of Clinical AI Departments is crucial in this context. These departments should develop and monitor AI-based technologies within a multidisciplinary environment, aiming to improve clinical outcomes for patients while also engaging healthcare workers in meaningful ways and optimizing resource allocation without compromising the well-being of stakeholders. By embracing this proactive and inclusive approach, we can ensure that AI innovations in healthcare enhance, rather than hinder, the roles of healthcare professionals and the care they provide.
Jonathan Montomoli, MD, PhD is an Italian anesthesiologist and intensivist with a PhD in Clinical Epidemiology. He is also a co-founder of a university spin-off in the field of telemonitoring. Jonathan's main research interests include microcirculation monitoring and the applications of artificial intelligence in healthcare. Jonathan is part of the AGATA Team, a section of the Italian Society of Anesthesia, Intensive Care, and Pain Medicine (SIAARTI). The AGATA team promotes the ethical adoption of AI in the healthcare sector and advocates for the role of healthcare workers in guiding innovation. https://twitter.com/agata_team
I'll go out on a limb and predict that AI will accelerate the deterioration in the quality of medical care.
You mention fostering a holistic sense of well-being, hampering healthcare workers’ roles or work in the interest of the patient by not prioritising personalized care, reducing healthcare workers to passive executors of recommendations. That already happened during covid in the US and UK via the employment of covid protocols. AI wasn’t even in the picture.