Clinical trials and strength of significance

There is a very relevant Editorial in the BMJ of 8th March 2008.
In February 2008, Kirsch and colleagues reported a meta-analysis of the efficacy of antidepressants using data from clinical trials submitted to the Food and Drug Administration. They provocatively concluded, “there seems little evidence to support the prescription of antidepressant medication to any but the most severely depressed patients.”
In January this year, the authors of the editorial, Turner and Rosenthal, published an article about the selective publication of antidepressant trials and its influence on apparent efficacy, also using FDA data. Their main finding was that antidepressant drugs are much less effective than is apparent from journal articles. From the Food and Drug Administration data they derived an overall effect size of 0.31. Kirsch and colleagues used FDA data from four of the 12 drugs that Turner and Rosenthal examined and calculated an overall effect size of 0.32.
Although these two sets of results were in excellent agreement, the two groups of authors interpreted them quite differently. In contrast to Kirsch and colleagues’ conclusion that antidepressants are ineffective, Turner and Rosenthal concluded that each drug was superior to placebo. The difference in their interpretations stems from Kirsch and colleagues’ use of the criteria for clinical significance recommended by the UK’s National Institute for Health and Clinical Excellence (NICE).
Clinical significance is an important concept because a clinical trial can show superiority of a drug to placebo in a way that is statistically, but not clinically, significant. Tests of statistical significance give a yes or no answer (for example, P > 0.05, non-significant) that tells us whether the true effect size is likely to be zero or not, but they tell us nothing about the size of the effect. In contrast, effect size measures the magnitude of the difference between drug and placebo, and so speaks to clinical significance. Values of 0.2, 0.5, and 0.8 were proposed by Cohen to represent small, medium, and large effects, respectively.
NICE chose the “medium” value of 0.5 as a cut-off below which the benefits of a drug are not clinically significant. This is problematic because a continuous measure is turned into a yes or no answer, suggesting that drug efficacy is either totally present or totally absent, even when comparing values as close together as 0.51 and 0.49. Kirsch and colleagues compared their effect size of 0.32 to the 0.5 cut-off and concluded that the benefits of antidepressant drugs were of no clinical significance.
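To make the distinction concrete, here is a small illustrative sketch in Python (my own toy example with invented numbers, not data from the trials above): with a couple of thousand patients per arm, a difference between drug and placebo of about 0.3 standard deviations gives a vanishingly small P value, yet the effect size (Cohen’s d) still falls below the 0.5 cut-off.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented example: 2,000 patients per arm, true drug-placebo difference
# of 0.3 standard deviations (a "small to medium" effect in Cohen's terms).
n = 2000
drug = rng.normal(loc=0.3, scale=1.0, size=n)
placebo = rng.normal(loc=0.0, scale=1.0, size=n)

# Statistical significance: two-sample t-test gives a yes/no answer via P.
t_stat, p_value = stats.ttest_ind(drug, placebo)

# Clinical significance: Cohen's d = difference in means / pooled SD,
# a continuous measure of how large the benefit actually is.
pooled_sd = np.sqrt((drug.var(ddof=1) + placebo.var(ddof=1)) / 2)
cohens_d = (drug.mean() - placebo.mean()) / pooled_sd

print(f"P value:   {p_value:.2e}")   # far below 0.05: "statistically significant"
print(f"Cohen's d: {cohens_d:.2f}")  # about 0.3: below the NICE 0.5 cut-off
```

The point is simply that with enough patients almost any real difference becomes statistically significant, so the P value alone cannot tell us whether the benefit is worth having.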
But on what basis did NICE adopt the 0.5 value as a cut-off? When Cohen first proposed these landmark effect size values, he wrote, “The terms ‘small’, ‘medium’, and ‘large’ are relative … to each other … the definitions are arbitrary … these proposed conventions were set forth throughout with much diffidence, qualifications, and invitations not to employ them if possible.” He also said, “The values chosen had no more reliable a basis than my own intuition.” Thus, it seems doubtful that he would have endorsed NICE’s use of an effect size of 0.5 as an absolute test for drug efficacy.
The authors emphasise the importance of stating the clinical strength of recommendations.
This is not, in general, a common consideration in nutrition. This is such a thought-provoking paper.
Turner EH, Rosenthal R. Efficacy of antidepressants. BMJ 2008;336:516-7.

blogger_blog:
www.nutrition-nutritionists.com
blogger_author:
Martin Eastwood
blogger_permalink:
/2008/03/clinical-trials-and-strength-of.html