Artificial intelligence is no longer a distant promise in medicine — it is already inside the clinic, inside the wearable on a patient's wrist, and inside the algorithm that reads a pathology slide before the pathologist does. A landmark perspective paper published in Frontiers in Medicine by Giovanni Briganti and Olivier Le Moine of the Université Libre de Bruxelles maps the current state of AI-powered medical technology across cardiology, neurology, gastroenterology, oncology, and beyond — and confronts, with unusual candour, the legal, ethical, and educational crises that the technology is quietly creating inside healthcare institutions. The paper does not argue that AI will replace physicians. It argues something more nuanced and more urgent: that medicine is being transformed whether physicians are ready or not, and that the profession's readiness is, at present, dangerously inadequate.
The conceptual framework Briganti and Le Moine deploy is augmented medicine — defined as the use of intelligent digital technologies to enhance, rather than substitute, clinical decision-making. That framing is deliberate. The fear of physician replacement by AI is well-documented in the literature and frequently cited as a primary source of resistance among healthcare professionals. The authors acknowledge that fear directly, but cite a growing body of evidence — including foundational work by Dr. Eric Topol of the Scripps Research Translational Institute, published in Nature Medicine in 2019 — that the combination of human and artificial intelligence consistently outperforms either operating alone. The question, therefore, is not whether AI belongs in medicine. It is how medicine reorganises itself around a technology that has already arrived.
The scope of current AI deployment in clinical settings is wider than most non-specialist observers appreciate. AliveCor received FDA approval as early as 2014 for its Kardia mobile application, which enables smartphone-based ECG acquisition and automated detection of atrial fibrillation. The REHEARSE-AF study, published in Circulation in 2017, confirmed that remote ECG monitoring via Kardia identifies atrial fibrillation more reliably than routine clinical care. Apple followed with FDA clearance for atrial fibrillation detection on the Apple Watch Series 4, enabling continuous cardiac rhythm monitoring by a consumer device worn by tens of millions of people. In oncology, Paige.ai achieved FDA breakthrough designation for an algorithm capable of diagnosing cancer in computational histopathology with accuracy sufficient to allow pathologists to redirect their attention toward complex cases. These are not experimental prototypes. They are approved, deployed technologies operating in clinical environments today.
AI Across Medical Specialties
In cardiology, AI applications extend well beyond arrhythmia detection. Machine learning models applied to electronic patient records have demonstrated superior predictive accuracy for acute coronary syndrome and heart failure readmission compared to traditional clinical risk scales — findings reported in JAMA Internal Medicine and Circulation: Cardiovascular Quality and Outcomes respectively. Comprehensive reviews of AI in cardiology published in the Revista Española de Cardiología in 2019 note, however, that performance varies substantially with sample size, and that the generalisability of models trained on single-institution datasets remains an unresolved problem across the field.
In gastroenterology, convolutional neural networks have been applied to endoscopic and ultrasound imaging to detect colonic polyps, diagnose gastroesophageal reflux disease and atrophic gastritis, predict outcomes in gastrointestinal bleeding, and estimate survival probability in oesophageal cancer and colorectal metastasis. The breadth of these applications reflects gastroenterology's particular suitability for image-based AI: the specialty generates enormous volumes of standardised visual data across millions of procedures annually, providing the training datasets that deep learning algorithms require to achieve reliable performance. In neurology, the Empatica Embrace wearable — which received FDA clearance in 2018 — combines electrodermal sensors with AI-driven pattern recognition to detect generalised tonic-clonic seizures in real time and automatically alert caregivers and treating physicians via a paired mobile application. Patient adoption studies have found unusually high acceptance rates for seizure detection devices, contrasting with the more ambivalent reception of cardiac monitoring wearables among elderly populations.
In endocrinology, Medtronic's Guardian continuous glucose monitoring system, paired with IBM's Watson AI platform through the Sugar.IQ application, provides patients with diabetes personalised predictions of hypoglycaemic episodes based on longitudinal glucose pattern analysis. The system represents the 4P model of medicine — Predictive, Preventive, Personalised, and Participatory — that Briganti and Le Moine identify as the primary driver of patient enthusiasm for AI-powered healthcare. Qualitative research on patient experience with continuous glucose monitoring published in BMC Endocrine Disorders in 2018 found that while patients expressed confidence in the system's notifications, many simultaneously reported feelings of personal failure when glucose levels deviated from targets — a reminder that clinical technology interacts with psychological experience in ways that pure performance metrics do not capture.
The combination of human and artificial intelligence outperforms either alone — and medicine has not yet built the educational infrastructure to make that combination work.
— Briganti G and Le Moine O, Frontiers in Medicine, February 2020

The Validation Crisis Hiding in Plain Sight
The most technically serious challenge Briganti and Le Moine identify is not resistance from physicians or ethical complexity — it is a quiet replication crisis embedded in the AI medical literature itself. A landmark systematic review and meta-analysis published in The Lancet Digital Health in 2019, led by Xiaoxuan Liu and colleagues, compared the diagnostic performance of deep learning algorithms against healthcare professionals across imaging-based disease detection. The headline finding — that deep learning performed comparably to clinicians — attracted widespread attention. The methodological finding received far less: 99% of studies included in the review were judged to have unreliable design, and only one in a thousand validated their algorithms against imaging data sourced from a population other than the one used for training.
That second statistic is the more consequential one. An algorithm that achieves 95% accuracy on the dataset it was trained on but performs at 70% accuracy on a different hospital's patient population is not a clinical tool — it is an overfitted model. The phenomenon, known as spectrum bias, arises when training data does not adequately represent the diversity of patients an algorithm will encounter in real-world deployment. Briganti and Le Moine argue that continuous post-deployment recalibration of algorithms — adjusting their parameters as patient demographics shift over time — should be a regulatory requirement rather than a best-practice recommendation. They further advocate for open science frameworks in medical AI research, with shared datasets and published methods enabling independent replication, while acknowledging that commercial AI developers have strong financial incentives to resist precisely that transparency.
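The failure mode described above — internal accuracy that collapses under a population shift, recoverable by recalibration on local data — can be illustrated with a toy simulation. This is a sketch of the general concept, not anything from the paper: it assumes a single simulated biomarker that is higher on average in diseased patients, a simple threshold classifier tuned on one population, and an external population with a sicker baseline and lower disease prevalence.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, mean_healthy, mean_diseased, prevalence):
    """Toy biomarker: diseased patients score higher on average."""
    y = rng.random(n) < prevalence
    x = np.where(y,
                 rng.normal(mean_diseased, 1.0, n),
                 rng.normal(mean_healthy, 1.0, n))
    return x, y.astype(int)

def best_threshold(x, y, thresholds):
    """Pick the cut-off that maximises accuracy on (x, y)."""
    accs = [np.mean((x > t) == y) for t in thresholds]
    return thresholds[int(np.argmax(accs))]

# "Training" population: well-separated classes, 50% prevalence.
x_train, y_train = simulate(5000, mean_healthy=0.0, mean_diseased=3.0, prevalence=0.5)
thresholds = np.linspace(x_train.min(), x_train.max(), 200)
t_train = best_threshold(x_train, y_train, thresholds)

# External population: sicker baseline, 10% prevalence (spectrum shift).
x_ext, y_ext = simulate(5000, mean_healthy=1.5, mean_diseased=3.0, prevalence=0.1)

acc_train = np.mean((x_train > t_train) == y_train)   # internal validation
acc_ext = np.mean((x_ext > t_train) == y_ext)         # external validation

# Recalibrating on a modest local sample largely restores performance.
x_local, y_local = simulate(500, mean_healthy=1.5, mean_diseased=3.0, prevalence=0.1)
t_local = best_threshold(x_local, y_local, thresholds)
acc_recal = np.mean((x_ext > t_local) == y_ext)

print(f"internal: {acc_train:.2f}  external: {acc_ext:.2f}  recalibrated: {acc_recal:.2f}")
```

Under these assumptions the externally validated accuracy falls well below the internal figure, and simply re-tuning the decision threshold on a few hundred local cases recovers most of the loss — a stylised version of the post-deployment recalibration the authors argue should be a regulatory requirement.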
Ethics, Surveillance, and the Ownership of Patient Data
The ethical terrain mapped by Briganti and Le Moine is less about science fiction scenarios of autonomous AI diagnosis and more about the mundane but consequential mechanics of ongoing digital monitoring. Wearable companies have concluded agreements with insurance providers and governments to distribute health monitoring devices at population scale, with stated goals of inducing lifestyle change and reducing downstream healthcare costs. The authors identify a specific and underexamined risk in this arrangement: continuous biometric monitoring has the structural potential to increase stigma around chronically ill patients and to penalise individuals who cannot or do not conform to algorithmically defined standards of healthy behaviour — including through reduced access to health insurance. This debate, the paper notes, has received almost no serious attention in health policy literature despite its direct implications for healthcare equity.
The question of data ownership sits at the intersection of these ethical concerns and existing legal frameworks that have not kept pace with technological development. The authors survey three dominant positions in the literature: common ownership of patient data to enable personalised medicine at population scale; institutional ownership by healthcare providers; and patient ownership, toward which consensus in the academic literature is now shifting. Research published in JAMA in 2017 by Mikk, Sleeper, and Topol argues that patient ownership of health data produces positive effects on engagement and information sharing, provided that formal data use agreements between patients and clinicians are in place. The legal framework governing liability when a physician follows — or rejects — an AI algorithm's recommendation remains undefined in most jurisdictions, exposing clinicians to potential legal consequences that no existing professional indemnity structure was designed to cover.
Building the Augmented Doctor
The paper's most forward-looking section concerns medical education. Briganti and Le Moine argue that the current medical curriculum — built around anatomy, physiology, pharmacology, and clinical reasoning — does not equip graduates with the competencies required to evaluate, deploy, or critically assess AI-powered clinical tools. Several institutions have responded by creating hybrid medical-engineering curricula that incorporate computational science, algorithmics, coding, and mechatronic engineering alongside traditional clinical training. The authors describe the graduate of such programmes as an augmented doctor — a clinician capable of both delivering patient care and participating meaningfully in the digital transformation of healthcare institutions. In leading hospitals globally, this role is increasingly formalised under the title of Chief Medical Information Officer.
Parallel to curriculum reform, the paper calls for mandatory ongoing digital medicine education for practising physicians — retraining programmes that would allow established clinicians to develop the literacy required to engage critically with AI tools entering their specialties. The need is urgent: AI-powered diagnostic applications are receiving regulatory approval on a timeline that significantly outpaces the educational infrastructure intended to support their clinical adoption. A physician who cannot interrogate the assumptions, training data, or failure modes of an algorithm they are expected to rely upon is not a safeguard against that algorithm's errors — they are a conduit for them.
"Healthcare professionals stand in a privileged position to welcome the digital evolution and be the main drivers of change — but a major revision of medical education is needed to provide future leaders with the competences to do so."

— Briganti G and Le Moine O, Frontiers in Medicine, February 2020
What This Means for the Future of Clinical Practice
The trajectory Briganti and Le Moine describe is not one of dramatic disruption but of accelerating integration — AI capabilities expanding steadily into clinical workflows that were designed without them, creating friction wherever the technology's assumptions diverge from clinical reality. Ambient clinical intelligence — systems capable of passively monitoring a clinical encounter, extracting structured data, and automatically completing electronic health records — represents perhaps the most practically transformative near-term application, precisely because it targets the administrative burden that research consistently identifies as the primary driver of physician burnout. Natural language processing tools capable of generating clinical documentation in real time are already in commercial deployment; their widespread adoption would redirect physician cognitive resources from documentation back to the patient interaction that documentation is supposed to capture.
The paper closes with a call that has only grown more urgent in the years since its publication: for health policy to engage seriously with the ethical and financial dimensions of AI in medicine before regulatory frameworks become permanently reactive to technologies they were never designed to govern. The clinical benefits of artificial intelligence in medicine are real, documented, and expanding. So are the risks — to equity, to privacy, to professional liability, and to the integrity of the scientific evidence base on which clinical decision-making depends. Managing that duality requires not just better algorithms but better institutions, better education, and a medical profession that understands, at a technical level, the tools it is increasingly being asked to trust.
Source & Citation
Primary Source: Briganti G and Le Moine O (2020). Artificial Intelligence in Medicine: Today and Tomorrow. Frontiers in Medicine, 7:27. doi: 10.3389/fmed.2020.00027. Published February 5, 2020. Open-access under Creative Commons Attribution License (CC BY).
Authors: Giovanni Briganti (Medical Informatics & Epidemiology, School of Medicine and School of Public Health, Université Libre de Bruxelles, Brussels, Belgium) and Olivier Le Moine (Medical Informatics, Hôpital Erasme, Université Libre de Bruxelles, Brussels, Belgium). Correspondence: giovanni.briganti@hotmail.com.
Key referenced studies cited in this article: Topol EJ, Nature Medicine (2019) — high-performance medicine and human-AI convergence; Liu X et al., The Lancet Digital Health (2019) — systematic review of deep learning vs. clinicians in medical imaging; Halcox JPJ et al., Circulation (2017) — REHEARSE-AF study on remote ECG monitoring; Lawton J et al., BMC Endocrine Disorders (2018) — patient experience with continuous glucose monitoring; Mikk KA, Sleeper HA, Topol EJ, JAMA (2017) — patient data ownership and health outcomes; Campanella G et al., Nature Medicine (2019) — computational pathology using deep learning on whole slide images.