When Researchers Undermine Their Own Evidence: The Emergence of “Reverse Spin Bias”
- Hannah Barnett

A new study by O’Leary et al. (2026), published in Research Integrity and Peer Review, introduces a striking and previously unexplored phenomenon in academic research: “reverse spin bias.”
Unlike conventional spin bias, where authors frame nonsignificant results as significant or meaningful, reverse spin bias occurs when evidence of benefit is ultimately discounted or discredited by the very authors who produced it.
O’Leary et al. observed this pattern while examining systematic reviews of treatments that remain socially controversial. Among the ten reviews that reported evidence of benefit for e-cigarettes as a smoking cessation intervention, five recommended against their use and the remaining five declined to make any recommendation, despite reporting favourable findings in their own analyses. A similar pattern emerged in reviews of medical cannabis for pain management, where more than one-third of systematic reviews exhibited reverse spin bias.
To the authors’ knowledge, this is the first time reverse spin bias has been formally identified and documented in the literature.
How Reverse Spin Bias Works
By comparing the findings of each systematic review with its authors’ recommendations, O’Leary et al. identified several ways authors downplay or dismiss evidence of benefit:
1. Discounting the evidence base
Most commonly, authors described their own evidence as “inconsistent,” “low quality,” or “insufficient,” without noting the actual number of studies included or the strength of the aggregate findings.
2. Discrediting primary studies
Statistically significant findings were dismissed as “small” or “limited,” or studies were labelled “low quality” without reference to standard risk-of-bias assessments.
3. Appealing to fear
Some authors pointed to unclear or hypothetical future risks to downplay the benefits shown in their results.
4. Dismissing the treatment a priori
In certain cases, treatments were rejected for reasons unrelated to their demonstrated clinical effects.
5. Omitting favourable findings
In some reviews, positive outcomes observed in specific subgroups were excluded from the discussion or conclusions altogether.
Motivations for Reverse Spin Bias
O’Leary et al. speculate that researchers may feel pressure to engage in reverse spin in order to:
- Improve their chances of publication by avoiding conclusions that support controversial interventions;
- Align their conclusions with dominant professional, institutional, or ideological norms; or
- Maintain consistency with previously stated positions on a given treatment.
In this sense, the bias does not lie in the data itself, but in the discomfort of standing behind it.
Why This Matters for Drug Policy
Critical scrutiny of one’s own work is essential to good science. But when the same patterns of evidence discounting appear repeatedly and systematically, it raises deeper questions about how social pressures shape scientific interpretation. If reverse spin bias occurs when authors are predisposed toward null or negative conclusions, it becomes a real threat to evidence-based medicine and policy.
Evidence does not simply inform decisions; it legitimises them. When studies downplay the observed benefits of socially controversial interventions, policy can appear evidence-led while remaining norm-driven. Reverse spin bias may therefore create a feedback loop that reinforces stigma, delays harm reduction, and justifies inaction.
This is particularly consequential in drug policy contexts, where moral narratives have long shaped public discourse. Similar biases may be present in research on other interventions, such as safe opioid consumption sites. Recognising this bias is not about advocating for specific treatments; it is about ensuring that drug policy is shaped by evidence as it exists, not as it is most comfortable to present.
The goal of the study was to raise awareness of reverse spin bias by illustrating examples of it in operation. A wider investigation could help establish how often it actually occurs and demonstrate how values, stigma, and controversy shape research translation. In the meantime, O’Leary et al. advise editors and peer reviewers to remain vigilant for discrepancies between the findings of systematic reviews and the treatment recommendations that follow.




