It goes without saying that we at Sensible Medicine pray at the church of Evidence-Based Medicine. Fields of medicine vary in how strong the “best available evidence from systematic research” leg of the three-legged stool is. If you are a cardiologist, you might be able to rely heavily on RCTs. Other fields, with a weaker evidence base, might have to lean more heavily on basic science or on clinical experience and “N of 1” trials. In this piece, Drew Himes makes the sensible and sacrilegious point that maybe it is inappropriate to apply the tenets of EBM to psychotherapy.
Adam Cifu
I was recently having breakfast with a psychotherapist colleague who was lamenting a trend she had noticed on a national professional networking site for psychotherapists.
She said, “I couldn’t believe all the people asking questions like, ‘How do you build rapport with your patients?’ or ‘Why do all my patients keep quitting therapy so quickly? Please help!’”
Today the common trope heard in the psychotherapy space is, “Be sure you’re using evidence-based practice!” I can understand the rationale behind that: we want to ensure that we’re providing the best care possible and that what we’re doing is validated by science.
But as I like to say, “Evidence-based psychotherapy is for 80% of the people 80% of the time.”
The problem with applying evidence to psychotherapy is that manuals cannot appreciate the individuality of each patient. In the medical world, we can be sure that statin drugs will work just about the same in all patient populations. The same cannot be said of psychotherapy. There has been a loss of sensibility in psychotherapy, manifested by the drive to follow manuals, protocols, and rigid treatments while forfeiting the art of sitting, listening, engaging, reflecting, and supporting each individual patient as they present for therapy.
The larger problem here is that assessing the evidence of effectiveness in psychotherapy relies on self-report of symptoms, difficulties, etc. We can’t use blood tests and x-rays to monitor someone’s depression. And as we all learned in Research 101, self-report research is problematic: social desirability bias, recall bias, exaggeration or underreporting biases, conjecture bias, etc.
By its very nature, psychotherapy, unlike most other forms of medical practice, does not have ways to measure effectiveness in a population. Even in well-done randomized controlled studies testing the effectiveness of Cognitive Behavioral Therapy versus no treatment, the problem remains: the results are gathered through self-report questionnaires and scales like the PHQ-9 or GAD-7 (also self-report). These scales are flawed. In addition, rarely do we register that people in the control arm of these studies are also getting better and reporting lower depression/anxiety/distress scores. This becomes especially true when putting a therapeutic intervention (like CBT) up against a support group (which is not treatment-based but instead an open forum for discussing the experience of mental illness). In these studies, patients who did not receive therapy still report getting better.
As a psychotherapist I see evidence-based treatments fail time after time. The reason, I am proposing, is that we’re training our therapists not to see people but instead to see diagnoses and DSM criteria. We’re not appreciating the nuance of people simply being people, with all of their diversities, differences, and individual attributes. And far too often we’re trying to get them to fit into pre-determined boxes. While I am a proponent of evidence-based medicine, we have to see that this approach does not necessarily fit within the context of psychotherapy. EBM is a process of incorporating clinical experience, scientific knowledge, and the best available evidence gained through high-quality research. The issue in psychotherapy is that what we understand about the brain science behind mental illness is limited, and we have a huge body of research with little external validity. Thus, we have to accept that the best psychotherapy is done on foundations of clinical experience, acumen, and the therapist’s compassionate, welcoming perspective.
When you hear a patient’s distress, difficulties, and symptoms, you cannot look on your bookshelf and see what manual most fits their clinical presentation. You have to listen to the source of that person’s frustrations and pain. We have to confront an individual’s issues in an individual way. We can’t just think that an approach that works in a laboratory setting will work the same in a therapy office.
Pushing people into treatments we clinicians think will work for them removes their autonomy and individuality and doesn’t always match their needs. And it basically tells the patient we’re not listening, caring, or embracing who they are but instead are substituting our agenda or ego for their needs.
I have found in my years of practice that being a “chameleon” is the best approach. Being a “chameleon” means that I bring an unbiased, non-judgmental, and open approach to all patients and then dovetail my analysis, intervention, and engagement with each patient based on their needs, wants, and abilities. I may be more talkative with one patient while more silent and reserved with another, for example. This skill takes years of training and supervision to develop and, again, is based more in the art of listening than in the science of research-based protocols.
I usually see a patient for 60 minutes once a week (or sometimes less often). Of course, each patient’s personality, view on life, personal belief system, interests, skills, etc. are radically different. I have to be able to change my approach and way of speaking and acting for each person to make the therapy work for that one individual I’m sitting with, because during those 60 minutes, that one person is all that matters. I shouldn’t ask my patients to conform to me; I should conform to them.
In my training I was told that psychotherapy is appropriate for those “who are sick enough to need it, but well enough to tolerate it.” Therapy requires that the patient have some insight, the willingness to change and grow, and the ability to experience some pain or difficult feedback in the therapeutic relationship. Thus, therapeutic work is inherently difficult.
But what I find makes it even harder is when you enter the therapy space and can tell that your therapist isn’t listening, isn’t considering what makes you the person you are, or just has an agenda.
Therapists need to take more time listening and learning the art of what to say, when to say it, how to say it, and when to shut up and just get out of the way and let the patient roll on.
So, ultimately, what makes psychotherapy good and effective? It is when the clinician listens, encourages, and supports each individual patient with their individual needs, flavors, and varieties and allows the manuals and the evidence to inform but not drive their clinical practice.
Drew Himes, LCSW, CAADC is a clinical social worker and substance addiction professional in private practice in northwestern Pennsylvania. He specializes in sexual and religious trauma, sex and sexuality therapy, and a broad range of other areas including anxiety and depression.
Photo by Nik Shuliahin
Drew makes a "sensible and sacrilegious point" that good, effective psychotherapy defies the rigid structures of evidence-based medicine (EBM) in favor of focusing on patient-centered flexibility. As other commenters (Susan R Johnson and Sobshrink) have noted, though, the critique of using EBM tools to evaluate the care of the soul is not new.
As Irvin D. Yalom, a prominent existential therapist and emeritus professor of psychiatry at Stanford, has written, empirically validated therapies (EVTs) like brief cognitive-behavioral therapy (CBT) have significant drawbacks when applied in psychotherapy. Yalom laments that the focus on EVT “has had enormous recent impact—so far, all negative—on the field of psychotherapy.” He points out that CBT’s narrow evidence base does not always translate to real-world effectiveness, particularly in treating complex, chronic psychological issues. According to Yalom, patients often require nuanced, individualized support that these protocols simply can’t provide. Yalom even goes further to note that EVT’s design means that clinicians follow manuals and protocols, often to the detriment of long-term, genuine therapeutic growth (Yalom, 2002, The Gift of Therapy, p. 227).
This approach is part of a broader pattern across the medical and mental health fields, where attempts to systematize care with standardized tools often miss crucial subgroup-specific and other context-specific elements. Sander Greenland, a leading epidemiologist and statistics reformer, takes this notion even further, explaining that methodologies, including randomized trials and statistical models, are only tools, and every tool has its limitations. Greenland, relaying philosopher of science Paul Feyerabend's message, argues that “every methodology has its limits” and that we should avoid “believing that a given methodology will be necessary or appropriate for every application” (European Journal of Epidemiology, 2017). When it comes to care work, whether mental or physical, that means that being overly reliant on formal methods risks ignoring the complex, human elements of each case.
In this sense, Drew’s approach advocates a form of “constrained anarchism,” as Feyerabend would put it, in which therapists can best serve their patients by allowing the science to inform, but not dictate, the treatment process. Anthropologist Anna Tsing’s concept of the “art of noticing” (Tsing, 2015, The Mushroom at the End of the World) also describes the value of this humanistic approach, highlighting how the open awareness of individual and cultural contexts is an “essential scientific practice” (Clancy, 2023, Period: The Real Story of Menstruation, p. 112).
What Drew is suggesting — and Yalom, Feyerabend, Greenland, Clancy, and Tsing all seem to support in their own contexts — is a holistic psychotherapy. Some comments seem to misconstrue this as necessarily an unscientific one, but nothing could be further from the truth. By maintaining openness to observing the empirical evidence before their eyes and ears, and adapting course in response, therapists can embrace scientific rigor in a deeper sense than through rote adherence to EBM/EVT methods. This is not shamanism; it is the scientific method.
References
Yalom, I. D. (2002). The Gift of Therapy: An Open Letter to a New Generation of Therapists and Their Patients. New York: HarperCollins.
Greenland, S. (2017). “For and Against Methodologies: Some Perspectives on Recent Causal and Statistical Inference Debates.” European Journal of Epidemiology, 32(1), 3–20.
Tsing, A. L. (2015). The Mushroom at the End of the World: On the Possibility of Life in Capitalist Ruins. Princeton University Press.
Clancy, K. B. H. (2023). Period: The Real Story of Menstruation. Princeton University Press.
(Drawn from https://wildetruth.substack.com/p/limited-limits.)
So much of what you have written applies to primary care, too. At the risk of being called a bad doctor, I'd say that many doctors defer too much to the evidence and not enough to the patient they are applying the evidence to. People have different abilities to tolerate:
-med A or B
-taking more than N meds
-taking meds more than once/day
-affording med A
-doing the lab monitoring required
-having their blood pressure or pulse at goal
-the burden of getting in to see me or to the lab
-the quirks of my personality
-the lack of attention to some other problem that focusing on their CHF causes
There are no trials comparing how the same set of patients does over time when I pay more attention to the person in front of me versus when I pay more attention to the evidence, in those cases where the two priorities conflict. Trials avoid many of these problems by excluding the patients who have them or by making the med free.