Noel J. Guillama-Alvarez
How Will AI Work in a High-Intensity Environment?
Artificial Intelligence (AI) continues to make significant strides in healthcare, with a growing number of providers adopting AI tools in various capacities. AI shows great promise in high-risk environments like intensive care units (ICUs), where rapid, informed clinical decision support can be crucial to saving lives. While AI holds immense potential to assist clinicians, its effectiveness and safety depend largely on human-AI interactions. As noted in previous blogs, AI requires considerable time, training, and data before it can be truly effective. In many cases, it is not yet capable of making the swift decisions that doctors are trained to make.
A recent study published in PLOS Digital Health, titled “Safety of Human-AI Cooperative Decision-Making Within Intensive Care: A Physical Simulation Study,” examined this dynamic. The study assessed how physicians respond to both safe and unsafe AI treatment recommendations and explored whether AI can be integrated safely into ICU decision-making.
The primary goal of the study was to evaluate whether physicians could correctly identify and reject unsafe AI recommendations while leveraging safe ones. Researchers also explored how human intervention, specifically persuasion attempts from a bedside nurse, could influence physicians’ adherence to AI guidance. Additionally, they examined whether eye-tracking data could reveal how physicians allocate attention during interactions with AI (see the heat-map graphic below).
Using a high-fidelity simulation suite designed to mimic a real ICU, the researchers engaged intensive care physicians in a series of patient management tasks. Physicians were asked to prescribe vasopressor and fluid doses for simulated sepsis patients both before and after being exposed to AI-generated treatment suggestions. Some AI recommendations aligned with best practices, while others were intentionally unsafe, proposing extreme under- or over-dosing. A subset of scenarios included a bedside nurse, played by a researcher, who attempted to persuade physicians to accept unsafe AI recommendations.
Key Findings from the Study
The study revealed both the promise and limitations of AI in healthcare. Encouragingly, physicians demonstrated strong critical thinking, rejecting unsafe AI recommendations 92% of the time while rejecting safe recommendations in only 29% of cases. Physicians also exhibited heightened attention to unsafe AI suggestions, with a 37% increase in gaze fixations compared to safe recommendations, which suggests that clinicians instinctively scrutinize potentially dangerous AI outputs. This is a testament to the quality of modern medical training.
Is AI Reliable in Direct Patient Care?
However, the study also highlighted key challenges that must be addressed before AI can be considered a reliable tool in intensive care. First, AI-generated explanations did not significantly influence decision-making. Physicians paid no more attention to AI justifications in unsafe scenarios than in safe ones, suggesting that current explainability methods may not be enough to enhance AI transparency or trust. Second, human persuasion had a small but measurable impact—5% of physicians were convinced by the bedside nurse to accept an unsafe AI recommendation. This illustrates the potential risks of cognitive biases and external pressure in AI-assisted decision-making.
The study also indicated that AI recommendations might be more influential among junior physicians, who were more likely to seek second opinions and were slightly more inclined to follow unsafe AI guidance. While senior physicians exhibited more independent decision-making, they were not entirely immune to AI influence. These findings reinforce the importance of developing AI systems that support, rather than replace, clinical judgment.
Moving Forward: The Road to Reliable AI in Healthcare
The study emphasizes that AI, in its current form, still requires significant refinement before it can be fully trusted in critical care settings. The challenge lies not only in improving AI accuracy but also in ensuring that AI recommendations are interpretable, reliable, and resistant to human biases. AI systems must be developed with robust safety mechanisms, such as uncertainty-aware models, enhanced transparency, and real-time human oversight. Moreover, training programs should be implemented to help clinicians critically assess AI recommendations and recognize potential errors.
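To make one of these safeguards concrete, here is a minimal sketch, in Python, of what an uncertainty-aware gate might look like: the system withholds its dosing recommendation and defers to the clinician whenever the model’s self-reported confidence is low or the proposed dose falls outside preset guardrails. This is not drawn from the study itself; the class, thresholds, and dose range are all illustrative assumptions.

```python
from dataclasses import dataclass

# Illustrative sketch of an "uncertainty-aware" safety gate. The names,
# thresholds, and dose bounds below are assumptions for demonstration,
# not clinical guidance and not taken from the study.

@dataclass
class Recommendation:
    vasopressor_dose: float   # proposed dose, e.g. mcg/kg/min
    confidence: float         # model's self-reported confidence, 0.0 to 1.0

SAFE_DOSE_RANGE = (0.01, 1.0)  # hypothetical guardrails
MIN_CONFIDENCE = 0.85          # below this, always hand off to a human

def gate(rec: Recommendation) -> str:
    """Return 'present' to show the AI suggestion to the physician,
    or 'defer' to route the case to the clinician with no suggestion."""
    low, high = SAFE_DOSE_RANGE
    if rec.confidence < MIN_CONFIDENCE:
        return "defer"  # the model is unsure of its own output
    if not low <= rec.vasopressor_dose <= high:
        return "defer"  # the dose itself is outside the guardrails
    return "present"

print(gate(Recommendation(vasopressor_dose=0.4, confidence=0.93)))  # present
print(gate(Recommendation(vasopressor_dose=5.0, confidence=0.99)))  # defer
```

The point of such a gate is that the default action on any doubt is to hand the decision back to the physician, which keeps real-time human oversight in the loop rather than bolting it on afterward.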
AI’s Role in Empowering Consumers
Despite these challenges, we are confident that AI will revolutionize healthcare, particularly when it comes to consumer empowerment. The study demonstrates that while AI is not yet a perfect solution for direct patient care or even for influencing providers significantly, we are heading in the right direction. Physicians must remain the ultimate decision-makers, and AI should be designed to augment, rather than undermine, clinical expertise. Ongoing research should focus on refining AI’s clinical decision support capabilities, improving human-AI communication, and ensuring that AI systems prioritize patient safety above all.
As we have discussed in previous blogs and in the development of our consumer-driven health data company, AI is already a valuable assistant in helping consumers analyze their disparate health records. AI can assist consumers in better communicating with their healthcare providers, but it should never make the final decision regarding their health. Based on my own research, I would not rely on AI (except in areas like radiology, pathology, and genetics) for diagnosing myself or my family. However, I would value AI’s ability to analyze siloed medical records, uncovering insights that individual providers might miss—particularly in understanding drug interactions and how medications affect my body.
About HealthScoreAI™
Healthcare is at a tipping point, and HealthScoreAI is positioning itself to revolutionize the industry by giving consumers control over their health data and unlocking its immense value. Annual U.S. healthcare spending has exceeded $5 trillion with little improvement in outcomes. Despite advances, technology has failed to reduce costs or improve care. Meanwhile, 3,000 exabytes of consumer health data remain trapped in a fragmented U.S. landscape of some 500 EHR systems, leaving consumers and doctors without a complete picture of care.
HealthScoreAI seeks to provide a unique solution, acting as a data surrogate for consumers and offering an unbiased, holistic view of their health. By giving consumers tools to respond to denials of care by insurers, we aim to bridge gaps in healthcare access and outcomes. By monetizing de-identified data, HealthScoreAI seeks to share revenue with consumers, potentially creating a new $100 billion market value opportunity. With near-universal EHR adoption in the U.S. and continued advances in technology, now is the perfect time to capitalize on available data, the practical use of AI, and the empowerment of consumers, in particular the 13,000 tech-savvy baby boomers turning 65 every single day and entering the Medicare system for the first time. Our team, with deep healthcare and technology expertise, holds U.S. patents and has a proven track record of scaling companies and leading them to IPO.
Noel J. Guillama-Alvarez
https://www.linkedin.com/in/nguillama/
+1-561-904-9477, Ext 355
https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000726