The Great Debate about Lactate Screening
A topic that should interest many more people
This debate might seem a bit esoteric for Sensible Medicine but our interests are broad and our readers are a diverse bunch.
A little background. Lactic acid is produced by either anaerobic glycolysis (due to hypoperfusion, microcirculatory dysfunction, or both) or aerobic glycolysis (due to catecholamine stimulation). The presence of lactic acidosis has long been used in the diagnosis of shock states. Over the last decade, routine measurement of serum lactate, used in clinical decision-making to screen for malperfusion in the absence of hypotension or anion gap acidosis, has become a ubiquitous part of hospital medicine. However, there are two critical questions. Does the prognostic power of an elevated lactate level translate into better care and better outcomes for patients? Or does an elevated lactate level lead to interventions that do not help patients but burden care teams?
The Argument for Lactic Acid Screening
Jeffrey Wagner, MD, MCR
The use of lactic acid to screen for malperfusion is far from perfect. As my counterpart may point out, the test leads to more consults for inpatient admission, empiric treatment for conditions like sepsis, and burdens health care teams with additional workload – all events that increase costs. These factors enter into a risk-benefit calculus in what is often a high-risk setting.
Lactic acid testing expanded in 2016 with the Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3), which standardized lactic acid-based treatment strategies for sepsis. Malperfusion was defined as a mean arterial pressure of less than 65 mmHg or a serum lactic acid greater than 2 mmol/L. If either criterion was met with a presumed infectious source, the guidelines suggested that patients receive 30 ml/kg of intravenous crystalloid within the first three hours (in addition to early initiation of empiric broad-spectrum antibiotics). The Society of Critical Care Medicine has noted that these recommendations (for measurement of blood lactic acid and initiation of intravenous crystalloid) are based on low-quality evidence. While we can acknowledge the limitations of lactic acid in routine testing, as well as the limited body of evidence supporting its use, the question at hand pertains to the test's impact on outcomes.
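For concreteness, the screening arithmetic described above can be sketched in a few lines. This is illustration only, not clinical guidance; the function names are my own, and the MAP threshold is left as a parameter since definitions have varied across guideline versions.

```python
def meets_malperfusion_criteria(map_mmhg, lactate_mmol_l, map_threshold_mmhg=65.0):
    """Screen positive if MAP falls below the threshold OR serum lactate
    exceeds 2 mmol/L (assuming a presumed infectious source)."""
    return map_mmhg < map_threshold_mmhg or lactate_mmol_l > 2.0

def initial_crystalloid_ml(weight_kg):
    """Guideline-suggested 30 ml/kg of intravenous crystalloid
    within the first three hours."""
    return 30.0 * weight_kg

# A normotensive 70 kg patient with a lactate of 3.1 mmol/L screens positive
print(meets_malperfusion_criteria(map_mmhg=72, lactate_mmol_l=3.1))  # True
print(initial_crystalloid_ml(70))  # 2100.0
```

Note that the lactate criterion can trigger the full treatment pathway even when blood pressure is normal, which is exactly the scenario this debate is about.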
In clinical practice, we have limited ways to assess perfusion, the holy grail of physiologic parameters. We use blood pressure as a surrogate of perfusion, assuming that the normal range values of arterial blood pressure suggest adequate perfusion. However, the blood pressure in large arteries can differ from what is happening in tissue beds. I can recall many ICU shifts where the arterial line demonstrated a MAP greater than 65 mmHg while a patient’s hands and feet became mottled and the urine output non-existent. Another method of assessment is capillary refill time. This long-standing practice to assess peripheral perfusion is of limited utility with a wide inter-observer variability and lack of standardization. With a lack of robust clinical assessment tools to evaluate an individual patient’s systemic perfusion, lactic acid remains one objective tool in characterizing pathophysiologic processes at the cellular level. Physiologic states in which perfusion is compromised occur in some of the sickest patients clinicians encounter and lactic acidosis can be a clue to the degree of illness and a stimulus to initiate treatments in this high-risk group.
The goal of screening is to initiate treatments that change a patient's course. In an ideal scenario, the benefit of increased lactic acid testing would be improved outcomes, such as a reduction in mortality. The Centers for Disease Control and Prevention have published sepsis-related mortality data dating back to 2014, so we can see national trends in the proportion of the population dying of sepsis. Using Medicare claims data from 2014-2018, we can compare the proportion of Medicare beneficiaries diagnosed with sepsis to the proportion of the population who died of sepsis, year after year. This epidemiologic signature shows whether the current diagnostic approach has improved outcomes for patients.
This graph shows an epidemiologic signature implying that more patients are being diagnosed while mortality rates are declining. These trends coincide with the introduction of Sepsis-3 in 2016, which recommended a treatment approach based on lactic acid measurement. On a more local level, an analysis of over 16,000 patients admitted to my own hospital system from September 2017 to December 2019 (unfortunately, the data are still private and for internal use) showed higher in-hospital and 30-day mortality in patients who failed to receive a full sepsis bundle compared to those who did. This has led to a quality improvement initiative: a clinical decision support prompt within the electronic health record reminding providers to complete the full sepsis bundle in patients who screen positive for potential sepsis. I suspect other institutions are adopting similar practices to standardize sepsis care.
This does not mean all comers to the emergency room should be tested for lactic acid. No test is perfect. Like all diagnostics, forming a pre-test probability remains essential for clinicians to determine where the tests in our arsenal are best applied to diagnose and guide treatment. As to whether lactic acid has been adopted to guide beneficial treatments that improve outcomes: in this case, as in many, a picture is worth a thousand words.
Dr. Wagner received his medical degree from OHSU and is currently a chief resident at the Kaiser San Francisco Internal Medicine Residency.
The Argument against Lactic Acid Screening
Adam Cifu, MD
All the lactate levels sent in the hospital these days make me crazy. I do not think they help anyone. A screening lactate only tells us that people we know are sick are sick, falsely reassures us that sick people are not so sick, or encourages us to overtreat people who are already getting better. Back in the days of giants, when a lactate had to be run to the lab in a “green top on ice,” we seemed to do fine. While I stopped worrying and learned to love the respiratory viral panel and brain natriuretic peptide, I cannot fall for the lactate.
Let’s begin with what we know. High levels of serum lactic acid (generally greater than 2.5 or 4.0 mmol/L) are associated with a worse prognosis. These levels do more than detect people who are already recognized as ill. Elevated lactate levels identify patients with occult sepsis – those patients at high risk for death despite normal blood pressure and mentation. Not only is a single lactate measurement useful, but the trajectory of values is also meaningful. Patients who “clear” lactate more quickly during a defined period of time have better outcomes.
In addition to being useful as a prognostic test, lactic acid levels may be useful in guiding the intensity of therapy. One randomized controlled trial found that therapeutically targeting lactate clearance decreased mortality compared to usual care.
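For readers unfamiliar with the term, lactate clearance is typically expressed as the percentage fall from the initial value over a defined interval. A minimal sketch (the function name is my own; check any individual trial's protocol for the exact interval and target used):

```python
def lactate_clearance_pct(initial_mmol_l, repeat_mmol_l):
    """Percentage fall in serum lactate between two measurements.
    A negative result means lactate rose rather than cleared."""
    return (initial_mmol_l - repeat_mmol_l) / initial_mmol_l * 100.0

# A lactate falling from 4.0 to 3.2 mmol/L represents a 20% clearance
print(round(lactate_clearance_pct(4.0, 3.2), 1))  # 20.0
```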
But when it comes to screening, we do not know whether screening patients, either in the emergency room or on hospital wards, with lactate levels improves outcomes. And remember, the point of screening is not to find disease or to change how we treat people; it is to improve outcomes, ideally mortality. I am sure my sparring partner outlined the plausibility that underlies lactate screening: that detection of lactate elevation identifies an otherwise unrecognized, high-risk subset of patients in whom earlier, aggressive intervention leads to better outcomes. Although intuitive, there are reasons why this hypothesis, if actually tested in a clinical trial, might be disproven.
The best study that defined the test characteristics of a serum lactate level examined adult emergency room patients, in whom a serum lactate level was obtained, and who were admitted to the hospital with an infection-related diagnosis. The sensitivity and specificity of a lactate level > 4.0 mmol/L for predicting death was 36% and 92%, respectively. The test characteristics in patients with a less severe spectrum of illness, as would be expected in a screening population, would be worse. The lower sensitivity would mean that ill patients with low lactate might be undertreated. The lower specificity would mean that patients at little risk of poor outcome might be managed in an inappropriately aggressive fashion, an outcome that would carry its own risk of harm. The low specificity also carries with it the risk of overusing healthcare resources, a risk that compounds the financial cost of checking a serum lactate. It is possible that these increased costs are offset by subsequent savings – if septic patients “caught early” avoid ICU stays, procedures, or longer hospitalizations – but it remains unknown whether such an outcome would actually occur.
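To make the spectrum-of-illness point concrete, Bayes' rule converts the reported sensitivity (36%) and specificity (92%) into predictive values at different pretest risks of death. The pretest risks below are illustrative assumptions of mine, not figures from the study:

```python
def ppv(sensitivity, specificity, pretest_risk):
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * pretest_risk
    false_pos = (1.0 - specificity) * (1.0 - pretest_risk)
    return true_pos / (true_pos + false_pos)

# As pretest risk falls (a healthier screening population), PPV collapses
for risk in (0.20, 0.05):
    print(f"pretest risk {risk:.0%}: PPV {ppv(0.36, 0.92, risk):.0%}")
# pretest risk 20%: PPV 53%
# pretest risk 5%: PPV 19%
```

In other words, applied to a broad screening population where most patients are not at high risk of death, the majority of "positive" lactates would belong to patients who do fine anyway.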
There have been no trials directly evaluating lactate measurement as a screening test (and some time spent on ClinicalTrials.gov does not make one optimistic). Our only evidence comes from sepsis trials that use lactic acid measurements as one of many criteria for study inclusion and then show better outcomes for enrolled patients who are treated more aggressively. Evidence does suggest that aggressive management of patients with normal blood pressure and modestly elevated lactate values improves outcomes, but this was found in emergency department admissions already identified as septic on the basis of clinical criteria and then risk-stratified by lactate value; patients without lactate measurements were not evaluated. This study also relied on historical controls. In another cohort, the addition of lactate measurement to a brief clinical assessment (the qSOFA score) added no predictive value with regard to identifying patients likely to have sepsis. The argument that sepsis outcomes have improved since 2016, the year we began using lactate screening more regularly, and that this proves the benefit of screening, is as strong as the argument that the improved outcomes are related to the ongoing effects of Spotlight, Moonlight, and Arrival (my favorite movies released in 2016). It may also be that outcomes are improving only because we are diagnosing less-sick patients with a screening test.
Serum lactic acid is useful for prognostication and risk stratification among septic patients. However, we should be careful not to confuse evidence for its utility in risk stratification with evidence for its utility in clinical decision-making. Randomized trials are necessary to evaluate whether lactate screening actually improves patient outcomes or only labels patients as “sick” without providing them benefit. This test has become routine in hospital medicine without an evidentiary foundation. There is a well-documented history of medicine adopting tests and treatments without a robust evidence base, only to later abandon them when clinical trials are run. There is even a term for the phenomenon: Medical Reversal.
An acknowledgment and huge thanks to Dr. Patrick G. Lyons, who helped me craft a draft of the “anti-lactate” article some years ago.
A huge thanks to Dr. Jeffrey Wagner for taking the pro side in this debate.