Are physician scientists and other part-timers good doctors?
Aaron Goodman kicks the hornet's nest of Twitter
Recently, Aaron Goodman, a hematologist at UCSD, tweeted:
It generated a backlash, particularly among doctors who run research labs. Some examples:
The dialog raises several adjacent questions:
Are physician scientists— people who mostly run labs and see patients half a day a week in clinic and 2-4 weeks a year in the hospital— good doctors?
Is the doctor who specializes in just one disease better at that disease than a doctor who sees a range of problems?
Is there a minimum amount of clinical work below which a doctor is out of practice? (the part-timer question)
Let's tackle these in reverse order.
First, the issue of part-time work. There are some data on this question. For one, there is a sizable literature on surgical outcomes— the more you do, the better you are. Next is this paper on part-time hospitalist work, showing part-timers have nearly 1% more deaths than full-timers. The size of the death signal is actually massive here.
Of course, the limitation of the paper is that patients might not be quasi-randomized: part-timers might also get the sicker patients on less desirable shifts, because they have to take what they get.
Second: is the doctor who sees one disease better than the doctor who sees more than one? Here I am not talking about cardiologists vs. internists for heart failure, but an ALS or Langerhans cell histiocytosis doctor vs. a neurologist or hematologist— i.e., a true sub-subspecialist.
There are no data on this question that I am aware of. Access to clinical trials might be an advantage for a subspecialist, but a recent— well done— study in JAMA shows that, at least in cancer medicine, access to trials does not mean you live longer, probably because most investigational agents are bullshit.
Another argument would be that an ALS or Alzheimer's or rare, unresponsive sarcoma expert has more experience with a given topic, but here I revisit my earlier point: if you have no disease-modifying therapy, what can you do that others can't?
Ironically, for an expert to have better outcomes, they must have a disease-modifying therapy AND others must not know about it. That's a rare convergence. Finally, some may argue that experts are more willing to try things that might be disease modifying. That argument misses the simple reality that most of these guesses will be wrong, because biology is hard. See also:
Now, let's tackle the most divisive implication of Aaron Goodman's statement: that physician scientists— the MD-PhDs of the world— who run a half-day clinic in breast cancer and attend 2 weeks a year on service, for instance on solid tumor consults, are no good as clinical doctors.
I asked Anupam Jena if he could run the Medicare analysis on docs with R01s vs. those without, but he felt it would be limited by thin data. He also pointed out that whatever such doctors lose in clinical skills, they might make up for in intelligence. I say that only to point out that we have no empirical data, and he sees arguments on both sides.
Anecdotally, here is what I do know. Fellows rotate with all the attendings and are happy to spill the beans over a drink. They witness first hand how people practice and their fund of knowledge. The observations they have shared with me over the last decade:
People who focus on a single cancer are often bad attendings because they have to ask other cancer doctors questions about ~90% of the service (because patients have cancers besides breast cancer), and it is basically like having 10 attendings at once— a fiasco of unaccountability. Some doctors proudly say they ask for help, but if you have to ask 10 people for advice each week on service, you are just slowing medical care and should step down. It's not fair to the staff or patients.
Below some amount of clinical work (I am not sure exactly how much, but often less than 1 month a year of service), a doctor becomes very bad: unable to manage basic internal medicine problems like blood sugar or high blood pressure in hospitalized patients. Obviously, a smarter person can compensate, but there must be some lower bound of clinical work needed to stay sharp. Volunteer doctors— e.g., those who work in pharma and do, say, 2 clinics a month (total) and no inpatient work— are often particularly rusty.
A doctor who does just one disease can become myopic, following a woman with DCIS with annual MRIs even though she has 1q amplification (4+) myeloma and is progressing after autotransplant. Focusing on one disease can lead you to miss the forest for the trees, and even miss a tree falling on your head while you examine one little yellow leaf under the microscope.
To be fair, most MD-PhDs are actually not physician scientists. Most have given up the lab and are largely clinicians, or they work in other sectors. A few do run labs and see patients, and just a few of those are good at all these things. That shouldn't be painful to consider, but rather expected. It would be rare to be great in all these domains.
Finally, I believe there is a rare type of research (not clinical trials— obviously those have clinical implications) that does inform clinical practice, and I will give one example. In cancer medicine, there are moments where doctors give adjuvant drugs without randomized data because the drugs work in the metastatic setting. I remain skeptical, and I was curious how often doing this is likely to help. How might we estimate that? Perhaps by looking at the few places where a drug was rigorously studied in both settings. So we did.
Basically, we found that two-thirds of drugs that "work" in the metastatic setting fail in the adjuvant setting. This makes me much more reluctant to push for adjuvant therapy without data; it shifts my priors. Such use is probably mostly toxicity without benefit, and in the neo-adjuvant setting it also delays surgery. There are other types of research— few and far between— that inform daily clinical care, but most physician scientists run labs that are miles away from direct clinical impact.
Conclusion:
We know very little about what predicts a good doctor, and doctors getting offended on Twitter because someone suggests that lack of practice makes you rusty is participation-trophy culture. I will never understand why people read general statements as if they were about themselves. If I read "Indian American hematologist oncologists whose undergrad major was philosophy are idiots" on Twitter, my first thought is never "I am not an idiot!" but rather, "yeah, that's probably true for all the other ones." I truly will never understand why MD-PhDs or cardiologists, as individuals, get bent out of shape over stereotypes about the group that we all know and have all said a million times.
Next, we do need better metrics of what makes a good doctor, and we need to study ways to improve performance. But that certainly doesn't mean unproven money-grabs like ABIM's MOC program, contrary to what Morie Gertz says here:
Morie has no credible evidence that ABIM testing improves patient outcomes.
I end with this final suggestion, which I think is provocative and worthy of discussion.
I have spent my clinical lifetime as a generalist. I would argue that this discussion goes even deeper, to the relative value of subspecialisation vs. generalisation, and the way they work together.
As an example, I would argue that every new admission to a general hospital should be under a generalist, with the subspecialists working as consultants until the diagnosis and management plans are settled, and then care transferred appropriately.
Or even more provocatively, every acute psychiatric admission should be under a general physician.
Thanks for an interesting, provocative post. I am a hospitalist who works seven days on and seven days off. Just based on the sheer volume of patient care provided, my work is scrutinized by multiple parties (patients, families, fellow hospitalists, specialist colleagues, the ED, administrators, etc.). If you are doing a lot of work, there is nowhere to hide. I do think those who are not that active clinically can more easily fall through the cracks in assessments of work performance, for this simple reason.