Will AI replace doctors’ 'gut instincts'?

360info
January 6, 2024 06:30 MYT
Despite rapid advances in the field, AI won't supplant human healthcare workers in 2024. - FREEPIK
DOCTORS’ intuition plays a key role in healthcare, even when computers suggest another treatment approach. But with AI advancing, is that all about to change?
The value of healthcare workers' intuition in effective clinical care has been documented in reports around the world, again and again.
From doctors' ability to spot sepsis in critically ill children, to 'nurse worry' as a 'vital sign' predictive of patient deterioration, to helping GPs navigate complex patient care, intuition appears to play a large role in supporting high-risk patients, even when data or computer outputs suggest another treatment approach.
Artificial Intelligence (AI) has already begun to transform healthcare, and the health sector will only continue to consider AI innovations in 2024 and beyond.
In this increasingly technological world, questions swirl about the role of these human hunches in healthcare practice, and whether AI is about to overtake doctors' 'gut feelings' entirely.
What is AI in healthcare and when is it used?
As Thomas Davenport from Babson College and Deloitte consultant Ravi Kalakota explain elsewhere, healthcare AI includes 'rule-based expert systems', which use prescribed knowledge-based rules to solve a problem, and 'robotic process automation', which uses automation technologies to mimic some tasks of human workers.
Such technology can help with automated patient monitoring, where an alert is signalled once a rule criterion is met, as well as with patient scheduling reminders and medicine management.
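To make the 'rule criterion' idea concrete, here is a minimal sketch of a rule-based monitoring alert in Python. The thresholds, field names and rules are illustrative assumptions, not clinical guidance.

```python
# Minimal sketch of a rule-based patient-monitoring alert.
# All thresholds and field names are illustrative, not clinical guidance.

def check_vitals(vitals: dict) -> list:
    """Return an alert for every vital sign that breaches a prescribed rule."""
    rules = {
        "heart_rate": lambda v: v > 120 or v < 40,   # beats per minute
        "temperature": lambda v: v >= 38.5,          # degrees Celsius
        "oxygen_saturation": lambda v: v < 92,       # percent SpO2
    }
    alerts = []
    for sign, is_breached in rules.items():
        value = vitals.get(sign)
        if value is not None and is_breached(value):
            alerts.append(f"ALERT: {sign} = {value} meets the rule criterion")
    return alerts

print(check_vitals({"heart_rate": 130, "temperature": 37.0, "oxygen_saturation": 95}))
# ['ALERT: heart_rate = 130 meets the rule criterion']
```

The system does only what its prescribed rules say: it cannot flag anything a rule-writer did not anticipate.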
Other forms of AI used in healthcare include robots, natural language processing and machine learning.
Robots can help move and stock medical supplies, lift and reposition patients and assist surgeons. One Finnish hospital has launched a €7 billion project, set to be completed in 2028, which will engage robots to collect patient data typically reliant on human physical touch: from measuring pulse and taking temperature to calculating oxygen saturation.
The release of ChatGPT in late 2022 marked a leap forward for AI in popular consciousness. This type of AI, which requires training on large data sets (supported by human feedback), focuses on giving computers the ability to read, understand and manipulate human language. Such natural language processing has changed the communication landscape with its language mimicry.
While some note that the hype hasn't quite been reflected in reality, professionals in a range of sectors, including healthcare, now use ChatGPT for tasks such as drafting "sick notes", supporting medication management and managing healthcare information.
There are predictions that healthcare Natural Language Processing will be a US$7.2 billion business by 2028, with this type of AI deployed to translate complex published papers for public consumption, to analyse electronic health records to help identify at-risk patients, and to interact with patients for triage or to answer healthcare questions.
In 2024, some say development of this type of AI is likely to focus on more sophisticated language models that power chatbots and virtual assistants, and that it will be built into word processing programs.
Machine Learning gives computers the ability to learn without being explicitly programmed for a given task. The algorithms driving these types of AI are based on statistical and predictive models. Like Natural Language Processing, Machine Learning often relies on 'training' from existing data sets, which have been human-reviewed and annotated.
Essentially, Machine Learning doesn't automatically know what to look for, and without human-informed training this type of AI tends to produce lots of noise and useless predictions. Once trained, Machine Learning can take previously unseen patient information and apply its prior 'training' to analyse the data and predict outcomes, or make recommendations.
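As a rough illustration of that train-then-predict pattern (not any specific clinical system), here is a toy Python sketch using scikit-learn; the features, labels and labelling rule are all invented for the example.

```python
# Toy sketch of Machine Learning's train-then-predict pattern.
# Features, labels and the annotation rule are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# 'Training' data: each row is [age, heart_rate]; the label marks whether
# the patient deteriorated. In practice, labels come from human review.
X_train = rng.normal(loc=[60.0, 90.0], scale=[15.0, 20.0], size=(200, 2))
y_train = (X_train[:, 1] > 100).astype(int)  # toy annotation rule

model = LogisticRegression().fit(X_train, y_train)

# A previously unseen patient: the trained model predicts an outcome.
new_patient = np.array([[72.0, 115.0]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Predicted deterioration risk: {risk:.2f}")
```

Everything the model 'knows' comes from its training set: garbage or unlabelled data in, noise out.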
In healthcare, Machine Learning can recognise patterns that humans may miss, as has been described for AI's role in predicting patient survival in gastric cancer, identifying the primary sites of cancers, and reducing false positives in breast cancer screening.
In 2024, Machine Learning algorithms are likely to continue to be used for healthcare data analytics, drawing on the vast medical data generated by wearables, medical devices and electronic health records.
With all forms of healthcare AI, it is clear that humans are still needed to train the AI, evaluate its outputs and consider the impact of its recommendations.
Artificial versus human intelligence: Who has the edge?
Given global healthcare workforce shortages, the proverb may prove true: necessity is the mother of invention.
It may not be long until healthcare includes integrated forms of AI: a robot greets you in your native language for your annual check-up using Natural Language Processing, takes your vital signs, and then, using Machine Learning algorithms that analyse those vital signs, sends the doctor a recommendation on which patient to prioritise and what investigations to order.
What AI can't do is replace the natural 'gut feel' of a healthcare professional. And this won't change in 2024.
The clinical reasoning process that healthcare providers engage in is complex, and the sources of information the human brain weighs in patient care are too numerous to capture with current algorithms. The implicit knowledge an expert relies on for effective clinical care is so deeply embedded in human automaticity that methods to extract these data points often fail.
On top of this, AI-accessible data and the AI algorithms themselves can have flaws.
Machine Learning can be overly sensitive, leading to over-diagnosis in some patients. Natural Language Processing AIs can act as healthcare Trojan horses: the technology is so convincing in its communication that it tricks the user into thinking it is knowledgeable in the same way a human is.
There are also privacy concerns with such AI applications.
AI typically relies on data input to continue learning, and what happens to confidential patient information once it is entered into AI remains an open question for many platforms.
There are also challenges the healthcare AI field is tackling around bias and responsibility, and questions about which tasks we consider 'mundane' and 'repetitive' enough for AI to truly take off humans' hands.
In reality (ironically), all existing AI is devoid of the context within which healthcare occurs. It misses the complexity, the empathy and the important data points that human intelligence has access to, and can therefore only replicate specific human tasks.
Our healthcare future: Humans leading, AI supporting
A better description of the future role of AI in healthcare might be: "AI won't replace the doctors, but those doctors will be replaced who don't use Artificial Intelligence," as Dr Sangeeta Reddy, a director at India's Apollo Hospitals, has put it.
Healthcare AI is increasingly taking on roles of "clinical decision-making support", meaning the healthcare provider is in charge and human intelligence is prioritised, while AI augments this.
Under this model, AI could be programmed to alert the healthcare provider to all the variables not considered by an algorithm, helping the provider explore to what extent the AI's recommendations are valuable in a specific context or for a specific patient.
For instance, if an AI recommends antidepressants for a patient who is also pregnant, it would alert the doctor that such medications haven't yet been tested in this population, allowing the doctor to consider other data points in deciding the next best step.
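A hypothetical sketch of what such a safeguard might look like in code; the field names, the algorithm's input list and the pregnancy rule are all assumptions made for illustration.

```python
# Hypothetical 'humans lead, AI supports' safeguard. The algorithm's known
# inputs, the field names and the pregnancy rule are illustrative assumptions.

ALGORITHM_INPUTS = {"age", "symptom_score", "prior_medications"}

def review_recommendation(patient: dict, recommendation: str) -> dict:
    """Surface what the algorithm did NOT consider, alongside its output."""
    unconsidered = sorted(set(patient) - ALGORITHM_INPUTS)
    warnings = []
    if patient.get("pregnant") and "antidepressant" in recommendation:
        warnings.append(
            "Recommended medication not yet tested in pregnancy; "
            "clinician to weigh other data points."
        )
    return {
        "recommendation": recommendation,
        "variables_not_considered": unconsidered,
        "warnings": warnings,
    }

result = review_recommendation(
    {"age": 31, "symptom_score": 18, "prior_medications": [], "pregnant": True},
    "start antidepressant",
)
print(result["variables_not_considered"])  # ['pregnant']
print(result["warnings"])
```

The point of the design is that the system hands the clinician its blind spots along with its advice, rather than a bare recommendation.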
To support this model, in which humans lead and AI supports, the AI-integrated future shouldn't just focus on improving AI. An AI-augmented healthcare system must include healthcare worker training and support in both AI literacy and human intelligence literacy.
Education can also focus on helping healthcare providers recognise when to question patterns, how to challenge predictions, and the value of trusting intuition: essentially, building the capacity to tolerate uncertainty.
This future-proofing healthcare education would help reduce the risk of "automation bias", whereby humans let the AI work autonomously and trust the algorithm even in the face of clear evidence that it's wrong.
AI has given us a valuable gift: the opportunity to explore what aspects of human intelligence are critical in healthcare, and which tasks can be enhanced with technology.
After all, both human brains and AI are wired for prediction, but humans have the power to interrogate, question and evaluate these predictions while AI has computing power beyond a human brain.
Both are needed to manage the complexities of patient care.

Michelle D. Lazarus, SFHEA, PhD, is the Director of the Centre for Human Anatomy Education and Deputy Director of the Centre for Scholarship in Health Education at Monash University in Australia. She is an award-winning educator, having received the Australian Universities 'Teaching Excellence' award amongst others, and is the author of "The Uncertainty Effect: How to Survive and Thrive Through the Unexpected".