Spin is the manipulation of language in ways that may mislead readers about the likely truth of the results. Within quantitative empirical research, such as randomized controlled trials (RCTs), spin has been defined as the “use of specific reporting strategies, from whatever motive, to highlight that the experimental treatment is beneficial, despite a statistically nonsignificant difference for the primary outcome [ie, inappropriate use of causal language], or to distract the reader from statistically nonsignificant results [ie, to focus on a statistically significant secondary result]” (1).
Spin can distort the production of knowledge, mislead readers, and misguide decision makers and policy makers. Awareness of spin is therefore important for readers of scientific papers, for researchers, for editors, for information specialists synthesizing knowledge, and for users of scientific evidence, such as policy makers (2,3). Distorted presentation and interpretation of results has been documented, for example, in the cardiovascular literature (4). While professional disagreement drives scientific progress, spin hampers it, because spun interpretations tend to become static and entrenched. Attention and awareness can help reduce the problem. An open question is whether spin borders on misconduct: it can involve misleading and manipulation, even though it is not directly tied to money or profit.
Students, PhD students, researchers, supervisors, postdocs, journal editors, industry stakeholders, junior researchers, senior researchers, general public
Open data practices can help increase transparency, allowing other researchers and interested parties to undertake their own analyses.
A technique to identify and classify spin in RCT reports has been developed by Boutron et al (5,6). It focuses on RCTs reporting statistically nonsignificant primary outcomes, because the interpretation of such results is more likely to be shaped by prior beliefs about effectiveness, creating potential for biased reporting. Similar approaches for systematically assessing the presentation of nonsignificant results in trial reports are available in various subspecialties, as described by Lockyer et al and by Turrentine (7,8).
1. Boutron I, Dutton S, Ravaud P, Altman DG. Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. JAMA. 2010;303(20):2058-2064. doi:10.1001/jama.2010.651
2. Yank V, Rennie D, Bero LA. Financial ties and concordance between results and conclusions in meta-analyses: retrospective cohort study. BMJ. 2007;335(7631):1202-1205. doi:10.1136/bmj.39376.447211.BE
3. Hewitt CE, Mitchell N, Torgerson DJ. Listen to the data when results are not significant. BMJ. 2008;336(7634):23-25. doi:10.1136/bmj.39379.359560.AD
4. Khan MS, et al. Level and prevalence of spin in published cardiovascular randomized clinical trial reports with statistically nonsignificant primary outcomes: a systematic review. JAMA Netw Open. 2019;2(5):e192622.
5. Boutron I, Dutton S, Ravaud P, Altman DG. Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. JAMA. 2010;303(20):2058-2064. doi:10.1001/jama.2010.651
6. Hewitt CE, Mitchell N, Torgerson DJ. Listen to the data when results are not significant. BMJ. 2008;336(7634):23-25. doi:10.1136/bmj.39379.359560.AD
7. Lockyer S, Hodgson R, Dumville JC, Cullum N. “Spin” in wound care research: the reporting and interpretation of randomized controlled trials with statistically non-significant primary outcome results or unspecified primary outcomes. Trials. 2013;14:371. doi:10.1186/1745-6215-14-371
8. Turrentine M. It’s all how you “spin” it: interpretive bias in research findings in the obstetrics and gynecology literature. Obstet Gynecol. 2017;129(2):239-242. doi:10.1097/AOG.0000000000001818
Bjørn Hofmann contributed to this theme.
Latest contribution: May 29, 2019.