A successful academic meeting connects you with people who are thinking about the same things you are, presents you with new questions, and sends you home with ideas to muddle through. The Preventing Overdiagnosis 2025 International Conference was one of those meetings. Karsten Juhl Jørgensen invited me to give a talk, and I got far more out of the conference than I gave. Sensible Medicine is also benefiting, with a few articles from people who spoke at the conference. This is the first one.
Dr. Alex Burns is a GP and educator who is finishing a PhD at the University of Exeter. I was fortunate to catch a talk that Dr. Burns gave on decontextualized risk information (DRI). Usually, when a doctor performs a diagnostic test, she knows the pre-test probability of the disease in question and the characteristics of the test, which were defined in people in whom the test was indicated. Decontextualized risk information comes from the results of a test that nobody asked for. In this excellent essay, Dr. Burns shows us why all doctors should be aware of DRI.
Adam Cifu
I’m in my primary care clinic. My next patient is Mrs B, a 40-year-old woman who’d been triaged earlier by a colleague. The note is brief: “new SOB. For ECG, bloods, and review.”
As soon as she walks in, the reason for her breathlessness is clear. She’s been having very heavy periods and thinks she’s anaemic again. “It happened just like this a few years ago,” she tells me, remembering when she had large fibroids ablated. It’s the same story this time — months of heavy bleeding, slowly worsening shortness of breath, and finally a walk with a concerned friend who pushed her to book the appointment.
She looks pale, and the blood tests helpfully arranged by my colleague confirm significant iron-deficiency anaemia. Less helpful is the D-dimer test he also added. Clinically, this doesn’t look anything like a pulmonary embolism, and I wouldn’t have ordered that test myself. But there it is — a positive result, attached to a serious, potentially life-threatening diagnosis. Instead of a clear plan, I now have an unexpected tangle of complexity and uncertainty to work through with her.
Most clinicians have experienced this: being forced to deal with tests they never wanted. My first experience came early in my career, in the Emergency Department. Back in the early 2000s, the UK government’s NHS Plan incentivised seeing and treating all patients within four hours. This time pressure inevitably shifted attention to moving patients through diagnosis and treatment at speed, often via protocol-driven pathways in which tests were done before a clinician had assessed the patient, or after a very brief senior triage.
It wasn’t unusual to find myself assessing low-risk chest pain patients with reassuring histories, only to discover that Pandora’s double box set — the D-dimer and troponin — had already been requested. If positive, these tests created a web of medical and medico-legal uncertainty to untangle before I could move on. Diagnostic medicine is, at heart, an application of the experimental method: hypothesis generation followed by empirical testing. This method, which dates back to early Islamic science and sustained the scientific progress of the Enlightenment, is now inverted: the test is performed before the hypothesis is formed.
The way I was taught clinical reasoning — and the way I now teach it — starts with the presenting complaint. From there, the clinician explores the patient’s history, develops a list of differential diagnoses, refines it through examination, and then, if needed, chooses diagnostic tests. The decision to test is crucial: sometimes it is more predictive than the result itself, highlighting the power of clinical gestalt. It’s the moment where the clinician must distil uncertain, subjective information into a binary choice — to test or not to test. I might even suggest that this moment defines the diagnostic clinician. Asking for a test is not merely informational; it is an exercise of epistemic authority — a clinician asserting the right to decide what information matters in that diagnostic moment.
Mrs O was 80. I knew her well: an anxious, frequent attender, but otherwise well and independent — until she was clipped by a car and fractured her hip. Her hospital admission and dynamic hip screw had left her anaemic, and the discharge letter asked me to check that her haemoglobin was improving.
She wanted to catch up with me anyway, so she came in after her blood tests. As she hobbled into the consulting room, I opened her record. Before I could greet her properly, an automated warning flashed up: “7% risk of bowel cancer.” Mrs O looked at me in despair.
Here, the diagnostic information didn’t come from another clinician or an unwanted test, but from an electronic risk assessment tool (eRAT) built into the clinical system. It automatically gathers symptom codes, blood results, and demographic details from the record, then calculates a population-level cancer risk; in this case, bowel cancer, triggered by the combination of anaemia and age over 60. A moment ago, this information didn’t exist in the consultation; now it’s flashed up on my screen, uninvited but impossible to ignore.
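For readers curious how such a tool behaves in software, here is a minimal sketch. The trigger rule, the field names, and the 7% figure simply mirror the anecdote above and are hypothetical; real eRATs use validated multivariable risk models, not a toy rule like this.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientRecord:
    age: int
    anaemia: bool  # coded from recent blood results

def bowel_cancer_alert(record: PatientRecord) -> Optional[str]:
    """Toy eRAT-style rule: fire an alert when coded findings in the
    record cross a trigger threshold, before any consultation begins."""
    if record.anaemia and record.age > 60:
        return "7% risk of bowel cancer"
    return None

# Mrs O: 80 years old, anaemic after a hip fracture and surgery.
alert = bowel_cancer_alert(PatientRecord(age=80, anaemia=True))
if alert:
    print(alert)  # flashes up before the clinician has formed any hypothesis
```

The point of the sketch is the timing: the alert is computed from the record alone, so it arrives in the consultation before any clinical hypothesis exists to contextualise it.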
This challenge to my epistemic authority is, in a sense, self-inflicted. I’ve signed my practice up for a clinical trial testing whether these tools improve cancer diagnosis. Both the eRAT and my D-dimer-trigger-happy colleague share the same laudable intent: to reduce the chance of missing a serious diagnosis. And like them, I don’t want to miss serious disease. But in primary care’s low-prevalence context, such information has a cost: there will always be many more false positives than lives saved. And while humility and curiosity are essential to diagnostic reasoning, so too is confidence in one’s clinical judgement in the face of uncertainty.
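The false-positive arithmetic is worth making concrete. Here is a minimal sketch using Bayes’ theorem, with deliberately round, hypothetical numbers; these are not figures from any study or from the trial mentioned above.

```python
def positive_predictive_value(prevalence: float, sensitivity: float,
                              specificity: float) -> float:
    """P(disease | positive test), via Bayes' theorem."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# A respectable test (95% sensitive, 90% specific) applied where the
# target disease affects 1 in 200 unselected primary care patients:
ppv = positive_predictive_value(prevalence=0.005,
                                sensitivity=0.95,
                                specificity=0.90)
print(f"PPV = {ppv:.1%}")  # ~4.6%: roughly 20 false alarms per true case found
```

The same test performs far better at the higher prevalences of a hospital clinic; it is the low-prevalence setting, not the test itself, that manufactures the flood of false positives.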
Together with colleagues, I’ve coined a term for this unsolicited diagnostic information: Decontextualised Risk Information (DRI). DRI is diagnostic information introduced into a clinical consultation, or into a diagnostic thought process, without being requested by the clinician. And it’s on the rise. I’m increasingly asked to interpret private test results or data from wearable devices. My IT system interrupts consultations with cancer risk scores or sepsis alerts, triggered at very low levels of risk, before I’ve even spoken to the patient. The patient-centred consultation is being eroded by single-diagnosis agendas. Unidimensional risk information is thrust unbidden into the complex, non-linear world of primary care, where multiple potential diagnoses compete. Added to this, rising demand and fewer GPs mean reduced continuity of care: clinicians are increasingly asked to interpret tests they didn’t order.
Good clinical care means using research and data about large groups of people, while also paying close attention to each patient’s unique story, symptoms, and situation. In the traditional model, a clinician’s agency in choosing diagnostic information refines their thinking. DRI, by contrast, sets the stage for conflict rather than synergy.
Our research shows that DRI can dominate consultations and drive defensive practice, triggering cascades of unnecessary tests. Yet there is also hope: clinicians aspire to rise above these defensive instincts and trust in their clinical reasoning, an essential part of their medical identity. As one study participant put it, a good clinician aims “to steer through the nuance ... in the big bubble of grey that is more art than science.”
Dr Alex Burns is a GP in Cornwall, UK. He spends his time seeing patients and supervising doctors in training and medical students. He is a PhD student at the University of Exeter. He is interested in how clinicians make diagnostic decisions under conditions of uncertainty.
Photo Credit: Brett Jordan
I am not a healthcare professional. This article reminded me of an experience 30 years ago that changed how I participate in my own healthcare.
Twenty weeks pregnant and with a healthy toddler at home, I learned that, unbeknownst to me, my doctor had ordered a “triple screen” blood test. I received an urgent phone call telling me to come in to the office to learn the results, which were communicated like this: an increased risk of Down’s and spina bifida. That’s it. When I pressed for more information I was told that the risk was that of a 35-year-old woman, though I was 32. How on earth is this helpful information? I was also told that, at this point, the only way to know for sure was to have an amniocentesis, which of course would put the baby’s life at risk for 24-48 hours. But now, with the concern placed in my head, it couldn’t be removed. It seemed necessary to know one way or the other, considering the impact on our toddler but also how we would better prepare our lives. And I didn’t want the remainder of the pregnancy to be anxiety-ridden for the baby: better to know than wonder.
This, after an uneventful first pregnancy and birth—not even an ultrasound.
I had the amnio.
For 30 years, every time I drive over the bridge I was crossing when the dash clock showed the amnio risk period had passed and I felt the baby kicking, I remember. I remember the unnecessary angst that unnecessary test caused.
I no longer consent to medical tests, for myself or those in my care, unless and until I understand their purpose and possible outcomes.
Brilliant!
When you consider the way most social media works -- pulling bits of information from richer contexts, stripping them of that context, and throwing them all out there for others to mix and match with other "decontextualized" bits of information -- I'd say the same phenomenon is happening in all arenas of society, not just healthcare. And just as you say, it's leading to conflict, rather than synergy. In fact, the purpose of so much of it seems to be to sow discord, not empathy or cooperation.