Interpretation of statistics

Spin was found in more than half of abstract conclusions in reports of negative trials. How often are negative trials made to look positive, and how is this done? A study in the Journal of the American Medical Association tried to answer these questions by examining all reports of parallel-group trials with non-significant results for the primary outcome that were published in December 2006 and indexed in PubMed. Of the 72 identified papers, 49 (68%) abstracts and 44 (61%) main texts had at least one distorted presentation, or “spin,” defined as the use of specific reporting strategies, whatever the motive, to present the experimental treatment as beneficial despite a statistically non-significant difference for the primary outcome, or to distract the reader from statistically non-significant results.
In 13 articles spin was found in the title. In the abstracts, 27 (38%) results sections and 42 (58%) conclusion sections contained at least one spin. In the main texts, the corresponding figures were 21 (29%), 31 (43%), and 36 (50%) for the results, discussion, and conclusions sections, respectively.
The most common spin strategy was to focus on positive results from analyses other than those of the primary outcome, such as within-group comparisons, secondary outcomes, or subgroup analyses. Another was to interpret P>0.05 as showing a similar effect of the studied treatments even though the trial was not designed as an equivalence or non-inferiority study. In some safety trials, non-significant results were wrongly interpreted as showing lack of harm from the experimental treatment.
JAMA 2010;303:2058-64
BMJ 5 June 2010;340:1218
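The misreading of P>0.05 as evidence that two treatments are "similar" is worth unpacking. The sketch below (Python, using made-up numbers that have nothing to do with the JAMA study) shows how a non-significant comparison in a small trial can still leave a confidence interval wide enough to include clinically important differences; demonstrating similarity would require a pre-specified equivalence margin and an equivalence or non-inferiority design.

```python
# Minimal sketch with hypothetical data: P > 0.05 is not evidence of equivalence.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=0.0, scale=1.0, size=30)   # control group outcomes
treated = rng.normal(loc=0.4, scale=1.0, size=30)   # experimental group outcomes

# Two-sample t-test on the primary outcome
t_stat, p_value = stats.ttest_ind(treated, control)

# 95% confidence interval for the mean difference (pooled-variance t interval)
diff = treated.mean() - control.mean()
n1, n2 = len(treated), len(control)
sp2 = ((n1 - 1) * treated.var(ddof=1) + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
ci = (diff - t_crit * se, diff + t_crit * se)

print(f"p = {p_value:.2f}")                          # often > 0.05 at this sample size
print(f"95% CI for difference: ({ci[0]:.2f}, {ci[1]:.2f})")
# A wide interval spanning zero is inconclusive, not proof of "no difference":
# it may still contain effects large enough to matter clinically.
```

The point is that a non-significant P value reflects limited evidence against the null hypothesis, not positive evidence for it, which is exactly the distinction the "spin" described above blurs.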
