The Solid Benefits of Positive Psychology Interventions

Bruce Daisley (September 27, 2022) posted “Why Quick-Fix Resilience Doesn’t Work” here on PsychologyToday.com. His claim: “Peer-reviewed studies show that quick resilience classes don’t work.” While writing a book on resilience, he wrote, “I was struck by how many times people told me that the resilience course they were sent to wasn’t working.” To reinforce his impression, he claims to have reviewed the relevant published data, relying heavily on Jesse Singal’s (2021) The Quick Fix.

Neither Singal nor Daisley has come close to “completely reviewing the published data” on resilience training, much of which, as they note, is based on my work. Instead, they cite only a few small, old studies whose results were ambiguous. They did not review the four recent, much larger meta-analyses of positive psychology interventions and resilience training at all. Or perhaps they did but chose not to tell readers about them.

Let’s look at the evidence:

Effectiveness of Positive Psychology Interventions (PPI)

Daisley and Singal tell their readers of nothing but a scattering of negative evidence. They fail to share the scores of controlled studies showing that PPIs work. Fortunately, there is a method, meta-analysis, for evaluating all the existing studies together.

In 2020, The Journal of Positive Psychology published the most comprehensive meta-analysis of PPIs. Carr et al. (2020) reviewed 347 studies involving more than 72,000 participants from clinical and non-clinical populations in 41 countries. They assessed the magnitude of effect of PPIs averaging 10 sessions over six weeks, offered in multiple formats and settings. At post-test, PPIs had significant “small” to “medium” effects on well-being (g = 0.39), strengths (g = 0.46), quality of life (g = 0.48), depression (g = −0.39), anxiety (g = −0.62), and stress (g = −0.58). The gains were maintained at three-month follow-up.

The lay reader may not be familiar with the terms “small,” “medium,” and “large” as descriptions of effect sizes. An effect size is the mean difference between two groups divided by the standard deviation of the combined population. Effects in therapy are usually “small” or “medium”: researchers rejoice when “medium” effects occur, and “large” effects of drugs or psychotherapy are very rare. Effect sizes in prevention, as in these studies, are typically “small” (very rarely “medium”) and more often nonexistent. Thus, a “small” effect size in the prevention of psychological problems is not pejorative; it is a good result, the best that can be expected.
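For readers who want to see the arithmetic behind these labels, a standardized mean difference can be computed in a few lines. The sketch below is purely illustrative: the well-being scores are made up, and the function computes Cohen’s d with a pooled standard deviation, a close cousin of the Hedges’ g values reported in the meta-analysis above (g applies a small-sample correction to d).

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference: the difference between the two
    group means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = statistics.fmean(group_a), statistics.fmean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical well-being scores (not from any cited study)
intervention = [14, 16, 15, 18, 17, 16]
control = [13, 14, 15, 14, 16, 13]
d = cohens_d(intervention, control)
```

By convention, d ≈ 0.2 is called “small,” 0.5 “medium,” and 0.8 “large,” which is why a prevention effect of g = 0.39 is a genuinely good result.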

In addition to this unmentioned 2020 meta-analysis, Daisley and Singal do not tell readers about three comprehensive and recent meta-analyses showing that resilience programs work:

Ma, Zhang, Huang, & Cui (2020) published a comprehensive meta-analysis in the Journal of Affective Disorders. Reviewing 38 controlled studies involving 24,135 participants, they found these programs effective. After the intervention, the mean effect size was significant, and subgroup analyses revealed significant effect sizes for programs administered to universal and targeted samples, programs with and without homework, and programs led by teachers. The mean effect size was maintained at six-month follow-up, and subgroup analyses indicated significant effect sizes for programs administered to targeted samples, programs based on the Penn Resiliency Program, programs with homework, and programs led by professionals.

Similarly, the meta-analysis by Ahlen, Lenhard, and Ghaderi (2015) in The Journal of Primary Prevention covered 30 studies that met their strict inclusion criteria: peer-reviewed, randomized or cluster-randomized trials of universal interventions for anxiety and depressive symptoms in school-aged children. There were “small” but significant effects on symptoms of anxiety and depression at immediate post-test. At follow-up, which ranged from 3 to 48 months, the effects remained significantly greater than zero for depressive symptoms but not for anxiety symptoms.

The meta-analysis by Dray, Bowman, et al. (2017) in The Journal of the American Academy of Child and Adolescent Psychiatry reviewed 49 studies. Across all trials, resilience-focused interventions were effective, compared to control groups, in reducing depressive symptoms, internalizing problems, externalizing problems, and general psychological distress. For trials in children (meta-analyses for six outcomes), interventions were effective for symptoms of anxiety and general psychological distress. For trials in adolescents (meta-analyses for five outcomes), interventions were effective for internalizing problems.

Daisley and Singal also fail to mention Seligman, Allen, Vie, et al. (2019), a recent predictive study of over 70,000 troops deployed to Iraq and Afghanistan between 2009 and 2013. It is highly relevant to resilience programming. We attempted to predict, from pre-existing psychological variables, who would suffer from PTSD after deployment and combat in Iraq or Afghanistan. This was the full cohort, not just a sample. About 5% developed diagnosed PTSD. Soldiers with the worst catastrophic thoughts were 29% more likely to develop PTSD than soldiers with average catastrophic thoughts, while soldiers lowest in catastrophic thoughts were 25% less likely to develop PTSD. Soldiers both high in catastrophic thinking and exposed to high combat intensity were 274% more likely to develop PTSD than soldiers with neither risk factor. This suggests a major way to prevent PTSD: keep catastrophists away from intense combat. Reducing catastrophizing is an explicit target of PPIs, and this study showed that reducing it would likely prevent PTSD.
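To make the relative-risk arithmetic concrete: “29% more likely” means 1.29 times the baseline rate, and “274% more likely” means 1 + 2.74 = 3.74 times the baseline. The sketch below uses the article’s approximate 5% base rate; the resulting subgroup rates are illustrative back-of-the-envelope figures, not numbers reported by the study.

```python
base_rate = 0.05  # roughly 5% of the full cohort developed diagnosed PTSD

# "29% more likely" = 1.29 times the baseline risk
worst_catastrophizers = base_rate * 1.29       # about 6.5%

# "25% less likely" = 0.75 times the baseline risk
lowest_catastrophizers = base_rate * 0.75      # about 3.8%

# "274% more likely" = 3.74 times the baseline risk
high_cat_high_combat = base_rate * (1 + 2.74)  # about 18.7%
```

The jump from roughly 1 in 20 to nearly 1 in 5 is what makes the combination of catastrophic thinking and intense combat so striking.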

Another huge and relevant study, Lester, Stewart, Vie, et al. (2021), attempted to predict heroism and exemplary job performance over a four-year period. The researchers measured high positive affect (PA), low negative affect (NA), and high optimism at baseline. Each of these variables predicted performance awards and heroism awards in a sample of 908,096 soldiers, of whom 114,443 (12.6%) received an award. Together, these variables predicted an almost fourfold difference in the likelihood of receiving an award. This shows that three of the resilience variables targeted by PPIs are major modifiable predictors of exemplary job performance in the military and of heroism on the battlefield.

Daisley and Singal’s assertion that “resilience programs don’t work” is false. Positive psychology interventions are very effective. PPIs that prevent anxiety and depression, as well as resilience programs, are backed by a great deal of scientific evidence. That evidence reliably shows reductions in depression, anxiety, and stress, and reliable increases in well-being, in adults and children. These results emerge from many studies, in many settings, including the military, and with extremely large sample sizes. In sum, the evidence supporting the benefits of resilience programs and positive psychology interventions is massive, it is scientifically state-of-the-art, and it has been frequently replicated.
