Hindering medical knowledge and public health.
What if you were to find that research into SSRIs discarded negative findings and tended to publish only positive ones? And that negative findings, when they were published, were written up with a positive slant? What if research in this area was found to be actively hindering medical knowledge and public health?
These were the findings of an article in tomorrow’s New England Journal of Medicine (citation below). Thanks to Ken Pope for bringing the article to my attention.
Here are three excerpts:
Medical decisions are based on an understanding of publicly reported clinical trials. If the evidence base is biased, then decisions based on this evidence may not be the optimal decisions. For example, selective publication of clinical trials, and the outcomes within those trials, can lead to unrealistic estimates of drug effectiveness and alter the apparent risk-benefit ratio.
And from the discussion section:
We found a bias toward the publication of positive results. Not only were positive results more likely to be published, but studies that were not positive, in our opinion, were often published in a way that conveyed a positive outcome. We analyzed these data in terms of the proportion of positive studies and in terms of the effect size associated with drug treatment. Using both approaches, we found that the efficacy of this drug class is less than would be gleaned from an examination of the published literature alone. According to the published literature, the results of nearly all of the trials of antidepressants were positive. In contrast, FDA analysis of the trial data showed that roughly half of the trials had positive results. The statistical significance of a study’s results was strongly associated with whether and how they were reported, and the association was independent of sample size. The study outcome also affected the chances that the data from a participant would be published. As a result of selective reporting, the published literature conveyed an effect size nearly one third larger than the effect size derived from the FDA data.
And take a look at the summary:
“Selective reporting deprives researchers of the accurate data they need to estimate effect size realistically. Inflated effect sizes lead to underestimates of the sample size required to achieve statistical significance. Underpowered studies — and selectively reported studies in general — waste resources and the contributions of investigators and study participants, and they hinder the advancement of medical knowledge. By altering the apparent risk-benefit ratio of drugs, selective publication can lead doctors to make inappropriate prescribing decisions that may not be in the best interest of their patients and, thus, the public health.”
Turner, E. H., Matthews, A. M., Linardatos, E., Tell, R. A., & Rosenthal, R. (2008). Selective publication of antidepressant trials and its influence on apparent efficacy. New England Journal of Medicine, 358(3), 252–260.
Kalea Chapman, Psy.D.