The US POINTER trial was positive but mostly uninformative
Clinical trials in dementia are scarce, but they still need to be rigorous to be helpful
Adam and I are happy to publish this guest post from MD-PhD student Clayton Mansel. Clayton attended the presentation of the US POINTER trial, an RCT testing a structured lifestyle intervention program in older adults at risk for dementia. This is an instructive post because future-doctor Mansel shows us how patient recruitment can limit the strength of the inferences we can draw from a trial's results. It's an excellent critical appraisal, which we aim to promote here at Sensible Medicine. JMM
Last month, at the Alzheimer’s Association International Conference (AAIC) in Toronto, the US POINTER randomized controlled trial of a Structured vs Self-Guided Multidomain Lifestyle Intervention for Global Cognitive Function was presented.
JAMA has published the widely covered manuscript. I was lucky enough to attend the presentation in person, and, while I applaud the study team for successfully executing a complex trial like this, I ultimately left feeling disappointed.
Study design and participants
US POINTER was a multisite, single-blind RCT enrolling 2,111 participants between the ages of 60 and 79 who had a sedentary lifestyle, a poor diet, and at least two additional risk factors for dementia.
Participants were randomly assigned to either a structured or self-guided intervention, which “encouraged increased physical and cognitive activity, healthy diet, social engagement, and cardiovascular health monitoring.”
The primary outcome was global cognitive function, assessed as a composite measure of executive function, episodic memory, and processing speed over 2 years of follow-up.
First, I think this is a good question, and it's being studied in the right population.
Dementia prevention needs more RCTs and is a field riddled with low-quality observational research. A multidomain lifestyle intervention is also inexpensive; compare, for instance, the cost of the new anti-amyloid therapies for Alzheimer's disease.
The topline result? A statistically significant difference with a p value <0.05!
Unfortunately, this study fails to answer its question because of at least two major flaws: there was no true control group, and the participants were already healthy.
Flaw #1: there was no control group
Both the structured and the self-guided groups met in person for sessions focused on encouraging a healthy lifestyle. The only difference was that the intervention group met more often and was led by certified interventionists. Thus…both groups received an intervention and were encouraged to live a healthier lifestyle. None of the participants were randomized to a group that just went about living their lives as they did before.
Why did the authors make that decision?
On stage at AAIC, the authors argued it was unethical to include a true control group because participants were at risk—an argument that might hold in oncology, where withholding life-saving drugs would be harmful. But a lifestyle intervention isn’t a prescription drug.
Participants in the self-guided group had full access to publicly available health resources; they didn’t need the study team to facilitate their lifestyle change.
Calling it unethical to withhold a lifestyle intervention not only overstates the risk, but it also undermines the study’s ability to answer its core question: does a costly, structured intervention improve cognition beyond what motivated individuals can achieve on their own? Without a true control, we will never know.
It's fair to argue that a true control group might lead to dropouts from disappointed participants, and I sympathize with that concern. But the solution need not have been so heavy-handed. I can think of more creative ways to minimize attrition without compromising the study's integrity, such as offering delayed interventions or minimal-contact controls.
Flaw #2: the participants were already living a healthy lifestyle
The presenting authors appeared excited to show the high adherence in the intervention group, with >90% attendance at the in-person meetings. This is, indeed, impressive, as adherence is often a challenge in dementia prevention lifestyle trials.
The problem is that the same adherence data revealed another flaw in the study: the participants were already living a healthy lifestyle before they entered it.
Unfortunately, the authors didn't include any of this data in the paper's supplement [1], so you'll have to rely on my phone photos of the slides from the conference.
Take a look at the Fitbit “very active” minutes in the intervention group over 2 years:
Notice that even at time zero, participants were meeting the goal of 90 very active minutes per week. It’s the same for diet, shown below as the median of the MIND diet score, a measure of diet quality ranging from 0 to 14:
The participants’ diet was already quite good. This is despite the fact that the intervention group did not talk about diet until the second month of the study so as not to overwhelm the participants!
I wondered if I was interpreting the data incorrectly. However, the authors then showed the adherence to the strength training program, which revealed a steady increase until a peak at 25 weeks, similar to what one would expect if the participants were not doing any strength training prior to the study.
The authors discussed the difficulty of recruiting truly sedentary participants, which left them with a group of adults who were already practicing a healthy lifestyle.
Based on the data shown above, the authors failed to enroll a population likely to benefit from a structured intervention: these participants were already healthy enough at baseline to be doing the activities promoted in the intervention arm.
Lessons learned
Lifestyle trials need a true control group with no facilitated contact—otherwise, you can’t isolate the effect of the intervention. The cognitive benefit seen in the intervention arm was likely due to social interaction from frequent group meetings, not the structured program itself. That would explain why the effect was strongest in year one and faded as meetings tapered off.
If simply organizing social groups drives the benefit, that's far cheaper than hiring certified interventionists. But we'll never know, because the study lacked a true control group.
And even if the results were valid, they wouldn't generalize, as these participants were already healthy, unlike many older adults.
Whether lifestyle interventions should be covered by insurance is a separate question, but US POINTER doesn’t offer strong evidence for or against—despite the positive media coverage.
Footnotes:
[1]: I emailed the corresponding authors, and they said they were working on another paper that includes this data. They also said that the adherence data is available only for the structured intervention group, so we will never know whether the supposed "control" group differed.
Comments:
For a study purportedly investigating people who are sedentary and have a poor diet, the enrolled subjects did not seem very sedentary or to be eating particularly poorly, even at study outset. So the results would not seem generalizable to the study authors' intended population.
And the participants also seemed very motivated to adhere to the program, which is also a trait that may lack generalizability, especially without a true control group, as you've noted.
Wouldn't the two stated flaws most likely each cause an underestimation of the relative benefit? Even so, it would be unclear how much of the benefit comes just from socializing. Perhaps the control group should just be pure socializing.