We’ve had a number of articles on Sensible Medicine about medical education. This is probably not surprising, as many of our writers and readers have had personal experience with medical education and/or work with trainees every day. One of our early series (fall 2022) featured these two pieces. Today we have a current trainee extending Dr. Prasad’s most recent critique.
Adam Cifu
Recently, Dr. Prasad criticized medical schools for testing outdated evidence, presenting unrealistic clinical scenarios, and requiring students to know useless and impractical information. These are all valid objections, but the issue with medical education is not just that the test questions are bad (even though they are) or that the content is irrelevant (which it is). A major issue is that the main skill being taught is the recognition and association of buzzwords.
As an undergraduate student, I wanted nothing more than to get accepted to medical school. I remember admiring the upperclassmen who were admitted. I knew that to get in, I needed good grades, hours of shadowing and volunteering, and a few extracurricular activities. The criteria for getting accepted to a top-tier residency or competitive specialty are equally transparent: excel in classes, publish research, crush your Step exams, and do well on rotations. While transparency is good, it leads both medical schools and students to game the system.
The students who are most successful in medical school are not the ones who have a great bedside manner, or think critically about pathophysiology, or recognize the benefits and flaws of treatment algorithms. The most successful students are those who can seamlessly recite guidelines and diagnostic criteria. By religiously hammering thousands of Anki cards, these students can select correct answers from a prompt. Consequently, they get great grades in medical school and match competitively. But is this really training students to be good doctors?
While memorization tools like Anki are great for building foundational knowledge such as anatomy or pharmacologic nomenclature, they do not teach students how to think critically. The problem is that this training produces a distinct, and impoverished, form of medical thinking. Ask a medical student to verbalize how they generate differential diagnoses and you’ll probably notice that the student is looking at the symptoms you gave them and trying to recall diseases that present similarly. Diseases (that they remember) with a lot of overlap end up toward the top of the differential, while those with less overlap wind up at the bottom. If they are really good students, they might even try to organize their thoughts by organ system or disease type.
However, generating ideas out of the ether is very difficult. Religious Anki users might have decent outcomes with this method; nevertheless, there are better ways of maximizing its yield than dedicating four years of your life to it. I can ask ChatGPT to come up with 50 differentials for a given clinical presentation, and it will do so instantly. Other tools will also outperform good students at this task. Is this really the best way to occupy our most capable young minds for four years?
Generating differentials should derive from our understanding of first principles, like pathophysiology and anatomy. We should teach students to work backwards from symptoms stepwise, by knowing the proximal pathophysiologic causes of symptoms, rather than trying to skip to the end and guess which diseases might be at play. Although this method is also imperfect (since we don’t know exactly why everything happens in the human body), it will lead to a different and more accurate differential diagnosis. For example, if a patient presents with leg swelling, we could reason that fluid is extravasating from their blood vessels, which can only happen if hydrostatic pressure is increased, oncotic pressure is decreased, or permeability is increased. Then we look at what affects these variables and continue moving backwards. This could lead us to differentials pointing toward liver issues, inflammatory diseases, and more, which many medical students might leave out of their differential had they gone with the traditional approach.
This type of deductive reasoning is a foundational process. Although most diagnostic reasoning in experts is based on pattern recognition – illness scripts and instance scripts – it is critical to master this early form of reasoning because it provides doctors with something to retreat to when faced with difficult, or unfamiliar, cases.
We compare medical students by their class ranks, or Step scores, or clinical grades. But what are we truly measuring? Does a Step 2 score in the 99th percentile indicate a good doctor? If so, we should all be jumping at the chance to have ChatGPT take care of our health. The truth is that we recognize these metrics have little to do with being a good doctor. Nevertheless, we have trapped ourselves in this system. We should be dedicating students’ time to building a thorough understanding of pathophysiology, diagnostic reasoning, and the critical analysis of evidence. Instead, we are developing skills that are only useful if a patient miraculously feeds us recognizable buzzwords.
Fixing the content of medical school examinations or improving the quality of the questions does not address the real problem with medical school and its graduates. Even if every question were relevant and based on the highest level of evidence available, medical schools would still only produce doctors who seamlessly recite guidelines and recommendations but are incapable of thinking critically. Why? Because that is really the only thing we learned to do in medical school. This creates doctors who are constantly striving to reach perfect “quality metrics” without ever questioning why patient outcomes are not improving. These doctors will maintain the status quo and settle for tiny, incremental improvements, as opposed to creating giant leaps forward in health.
Medical schools should focus on developing physicians who can reason and think creatively and independently. The strategy we currently employ throughout medical school dooms us to stagnation. Relying on word association increases cognitive bias, while regurgitating guidelines and recommendations prevents us from recognizing biases and errors in our data. It’s no coincidence that we have failed to produce significant improvements in life expectancy despite a boom in pharmacologic innovation. We need to produce doctors who can imagine new systems; doctors who are not afraid to challenge assumptions and reinvent paradigms. In short, we need to teach students how to think rather than what to think.
Agustin Marty, DO, is a PGY-1 Family Medicine resident based in Pennsylvania. He is passionate about applying game theory principles to medical decision making and effecting change in healthcare economics.
Photo Credit: Dom Fou
I agree strongly with the prior comment on critical thinking. If the individual has not had a true liberal education with philosophy, etc., it may be too late to teach critical thinking?
Respectfully
Dr Casey
Medical school is shockingly devoid of debates. That has got to be one of the best ways to teach critical thinking. Covid masking, gender medicine, cancer drugs. The field is rife with controversies worthy of debates based on studies.