Why do authors need to report conflicts of interest when they publish a medical study? Lisa Rosenbaum has written an important series of essays in the New England Journal of Medicine (here, here, and here) on researchers and conflicts of interest. The venue matters. The NEJM is the world’s most influential medical journal. And it has been edited by a series of doctors — Relman, Kassirer, and Angell — who have been key leaders in the reform of medicine’s relationship with the pharmaceutical industry. But it appears that the journal may now be revising its views.
‘Conflict of interest’ refers to a financial relationship between the authors of a research article and the manufacturer of the intervention being studied. Rosenbaum argued that there is an unfair and unwarranted prejudice against researchers who have such relationships, because the existence of a conflict of interest does not necessarily imply that the researcher is biased. She also argued that this prejudice against researchers working with industry impedes the progress of research.
My US co-blogger Austin Frakt had a thoughtful post last week in response to Rosenbaum’s essays. Austin took her reasoning to a practical conclusion. He imagines himself reading an article and trying to evaluate its credibility. Medical journal articles typically have a footnote reporting meta data on the conflicts of interest reported by authors (e.g., “Dr. Jones was a paid consultant to the medication’s manufacturer, Big Pharma Inc.”). Austin questions whether he should even read that footnote, because
Once I gather the meta data [about the authors’ conflicts of interest], what should I do with it?
Austin’s right. Just knowing that Jones consults to Big Pharma doesn’t help you evaluate whether Jones’ study is valid. I don’t think there is a fair or even effective way for an individual reader to use meta data about authors to evaluate an individual article. I don’t read those footnotes either.
Nevertheless, it is vital that those footnotes are there. Meta data are essential for meta analyses, the systematic reviews that assess the effectiveness of treatments across studies. A meta analysis statistically combines the results of many studies into a single estimate of the effect of a treatment. Moreover, meta analyses explore the heterogeneity of treatment effects, looking for differences between studies that may explain why a treatment seemed to work better in one study than another.
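To make the pooling step concrete, here is a minimal sketch of fixed-effect inverse-variance meta analysis, the simplest way studies are combined into a single estimate. All the study numbers are invented for illustration; real meta analyses use more elaborate models (e.g., random effects), but the weighting idea is the same.

```python
import math

# Hypothetical per-study results: estimated treatment effect and its
# standard error. These numbers are invented for illustration only.
studies = [
    {"effect": 0.30, "se": 0.10},
    {"effect": 0.10, "se": 0.08},
    {"effect": 0.25, "se": 0.12},
]

# Fixed-effect inverse-variance pooling: each study is weighted by
# 1/SE^2, so more precise studies count for more.
weights = [1 / s["se"] ** 2 for s in studies]
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f} (SE {pooled_se:.3f})")
```

Note that the pooled estimate sits between the individual study estimates but closest to the most precise one, which is exactly the behavior the weighting is designed to produce.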
Meta analyses frequently find that treatments work better in industry-funded studies than in non-industry-funded studies. A recent Cochrane Review of the effects of industry sponsorship on research reported that:
We found that drug and device studies sponsored by the manufacturing company more often had favorable results (e.g. those with significant P values) and conclusions than those that were sponsored by other sources. The findings were consistent across a wide range of diseases and treatments.
We can only see this pattern by looking across many studies using journal article meta data. Of course, the Cochrane reviewers’ conclusions can be disputed on empirical grounds. Which is, of course, the great thing about having the meta data: with it, we’re not limited to our moral intuitions in evaluating the validity of the empirical literature, taken as a whole.
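The kind of across-studies comparison described above can be sketched as a subgroup analysis: pool the industry-funded and non-industry-funded studies separately, then test whether the two pooled estimates differ. The data below are invented, and the simple z-test is only one of several ways such a subgroup difference can be assessed.

```python
import math

# Hypothetical study records tagged with funding source (numbers invented).
studies = [
    {"effect": 0.40, "se": 0.10, "industry": True},
    {"effect": 0.35, "se": 0.12, "industry": True},
    {"effect": 0.15, "se": 0.10, "industry": False},
    {"effect": 0.10, "se": 0.09, "industry": False},
]

def pool(group):
    """Fixed-effect inverse-variance pooled estimate and its SE."""
    w = [1 / s["se"] ** 2 for s in group]
    est = sum(wi * s["effect"] for wi, s in zip(w, group)) / sum(w)
    return est, math.sqrt(1 / sum(w))

ind_est, ind_se = pool([s for s in studies if s["industry"]])
oth_est, oth_se = pool([s for s in studies if not s["industry"]])

# Simple z-test for the difference between the two subgroup estimates.
z = (ind_est - oth_est) / math.sqrt(ind_se**2 + oth_se**2)
print(f"industry: {ind_est:.3f}, other: {oth_est:.3f}, z = {z:.2f}")
```

A large z here would flag exactly the pattern the Cochrane reviewers report: bigger effects in sponsored studies. And the point stands that none of this is possible without the disclosure footnotes supplying the funding tags.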
So here is one reason why reporting of conflicts of interest is essential: there is a substantial risk (not certainty) of industry bias in research reports. We need to track it and understand it, and we can’t do this without required disclosures of conflicts of interest. I expect that both Austin and Lisa Rosenbaum agree with me on this point.
There remains an important question about what we should do to correct for industry bias in research results, if and when it’s confirmed. Just briefly:
- Suppose a meta analysis of a specific treatment, say a drug, finds (a) that the treatment effect averaged across studies is greater than zero (i.e., the treatment works), but (b) that industry-funded studies tended to report bigger treatment effects. Then I’d conclude that the average treatment effect is likely an overestimate. I’d be cautious in using it. I’d also conclude that we need more studies of the topic.*
- Suppose that many meta analyses find an association between larger treatment effects and industry-funded studies, which, I believe, exists.* Then I’d conclude that we need to improve our research methodology. Such effort is already underway: many current reforms in the conduct and reporting of medical research (for example, the clinical trials registry) have been motivated in part by concerns about bias associated with industry funding.
What I wouldn’t conclude is that we should ban industry-funded clinical trials or ignore their findings entirely. Nor, without specific evidence of wrongdoing, would I assume that an industry-funded researcher is a shill or a fraud.
Let me add that Rosenbaum has raised many important questions about our moral attitudes toward researchers and their relationships with industry. We should continue to require conflict of interest reporting, but we should also have the discussion about moral attitudes that Rosenbaum calls for.
* Note that, as always, a simple correlation does not clarify the causal mechanisms that underlie it. An association between the size of a treatment effect and the funding source needn’t imply that industry is cheating. For example, suppose that industry researchers more accurately target populations in which a treatment is likely to work, or to work better. Such targeting could be viewed either as “gaming” or as a means of providing useful, population-specific information. So a finding that treatments work better in industry-sponsored trials should open a question about what industry does differently. But, one more time, we can’t have that discussion without the meta data.