The Human-AI Clinic: Why We Need to Study the Relationship, Not Just the Technology
Nearly 60% of US physicians worry about over-reliance on artificial intelligence for diagnosis. That’s not a fear of robots replacing doctors, but a deeper concern: the erosion of the patient-doctor relationship. As AI rapidly integrates into healthcare – from radiology and pathology to administrative tasks and even direct patient interaction – we’re facing a critical juncture. The question is no longer whether AI will be used, but how, and crucially, how it affects the uniquely human elements of medicine.
The Rise of the Unregulated AI Assistant
The initial wave of medical AI focused on “foundation models” – powerful tools trained on massive datasets that excel at image-analysis tasks such as reading retinal scans. These offered clear benefits, bringing specialist-level expertise to underserved communities. But the landscape has shifted dramatically with the advent of generative AI, such as the large language models (LLMs) that power chatbots. Unlike their predecessors, these tools are largely unregulated, leading to rapid, often unchecked adoption. In China, the open-source LLM DeepSeek was rolled out across 750 hospitals in a matter of months, operating in a “regulatory grey area.” A similar trend is unfolding in the US, where AI scribes and platforms like OpenEvidence – used by 40% of US physicians – are becoming commonplace.
The Trust Deficit: Transparency, Accountability, and Declining Disclaimers
The core challenge isn’t technological; it’s relational. Trust in medicine is built on transparency, accountability, and reproducibility – qualities often lacking in AI systems. A disturbing trend highlights this: a recent study found that the share of LLM outputs offering medical advice that included a disclaimer plummeted from 26% in 2022 to just 1% in 2025. This erosion of transparency is particularly alarming as patients increasingly turn to chatbots for medical guidance, including for sensitive issues such as mental health, with some cases leading to AI-mediated delusions and even harm. The potential for these tools to function as unlicensed therapy chatbots raises serious ethical and safety concerns.
The Paradox of Deskilling: Why Constant AI Assistance Can Hinder Expertise
Simply adding AI to the clinical workflow isn’t a guaranteed improvement. In fact, it can be detrimental. Recent data reveals a surprising phenomenon: endoscopists who used AI assistance for polyp detection saw their skills decline once the tool was removed. This echoes findings in other high-stakes fields like aviation, where pilots require regular manual control training to avoid over-reliance on autopilot. This “deskilling” effect underscores the need for careful implementation and ongoing evaluation of AI’s impact on clinician performance. The science of **AI implementation** – understanding when and how to use AI effectively – is still in its infancy.
Mitigating Deskilling: Lessons from Aviation
The aviation industry offers a valuable blueprint. Mandatory retraining on fundamental skills helps pilots maintain proficiency even with advanced automation. Similarly, in medicine, continuous professional development incorporating scenarios with and without AI assistance could help clinicians maintain their diagnostic and clinical reasoning abilities. This proactive approach is crucial to prevent over-dependence and ensure that AI serves as a support tool, not a replacement for human expertise.
Accountability in the Age of AI: Who is Responsible?
As AI takes on more responsibility in healthcare, the question of accountability becomes paramount. Healthcare professionals are demanding clear guidance on who is liable when AI makes a mistake. Existing regulatory frameworks, which treat AI as “software as a medical device,” are proving inadequate for the rapidly evolving landscape of LLMs. A robust framework is needed, one that addresses not only the technical aspects of AI but also the ethical and legal implications of its use.
The Relational Future of Medical AI
The future of medical AI isn’t solely a technical challenge; it’s fundamentally relational. We need rigorous trials, real-world testing, and a collaborative approach that treats clinicians as active partners in developing and implementing these technologies. Building truly effective human-AI systems requires understanding the biases and frailties of both the clinician and the machine, and designing systems that leverage the strengths of each. This demands a shift in focus: from asking what AI can do to asking what it should do, and how it will shape the very nature of the patient-doctor relationship.
What are your predictions for the future of human-AI collaboration in healthcare? Share your thoughts in the comments below!