Public trust in science is crucial to the good functioning of contemporary societies. When used by elected officials and policy-makers, science supports and helps justify difficult choices, such as economic changes to combat global warming or constraints on individual liberties in the name of public health.
But for science to be trustworthy, researchers must demonstrate responsible behaviour. They must conduct rigorous studies, reduce sources of bias and be truthful in their research publications. When prestigious scientific journals retract studies amid scandal, it's reasonable to be concerned. The recent retraction of a high-profile COVID-19 study from the medical journal The Lancet in May could undermine the public's trust in science. But should it?
Lancetgate
In early 2020, the COVID-19 pandemic hit North America and the drug hydroxychloroquine was touted by certain experts, commentators and politicians as a “miracle” therapy that could treat and even prevent some of the worst symptoms of the disease.
This public attention was followed by a surge in demand for the drug, which had been used since the 1950s to treat malaria, and later for rheumatological conditions. Many scientists warned that there was little evidence supporting the use of hydroxychloroquine for COVID-19. Given the crisis and the urgent need for effective treatments, the World Health Organization (WHO) coordinated clinical trials to evaluate its safety and efficacy, along with those of other promising drug treatments.
On May 22, 2020, a study on hydroxychloroquine was published in The Lancet by Mandeep Mehra of Brigham and Women's Hospital and Harvard Medical School, along with his colleagues Sapan Desai of the Southern Illinois University School of Medicine and Amit Patel of the University of Utah. The study suggested that hydroxychloroquine was not effective against COVID-19 and could even be harmful, causing serious heart problems and decreasing in-hospital survival rates.
Based on this information, many countries terminated their hydroxychloroquine clinical studies. But other researchers questioned the quality and accuracy of the study, in part because it was based on a questionable dataset. The numbers in the dataset did not match officially published national data, and it included data from an impressive 671 hospitals on six continents, which would likely have been impossible to collect within such a short timeframe. The database was developed by Surgisphere Corporation, a small private company owned by one of the authors (Desai), which also raised concerns about potential conflicts of interest.
The Lancet launched an independent review to evaluate the study, but the reviewers were refused access to Surgisphere's data, making it impossible to verify the study's accuracy. In the meantime, other studies had also relied on Surgisphere data.
As a result, a research study on blood pressure medication and COVID-19 was retracted from the New England Journal of Medicine, and a decision-making tool developed using Surgisphere data and distributed in 26 African countries was discontinued. The consequences of the retractions were significant, not only for the scientific community but also for professionals, policy-makers and the public.
How do retractions happen?
Retractions of scientific journal publications are unfortunately not rare, and they have been increasing in recent years, partly because of increased attention and better editorial oversight. About half of retractions are due to falsification, fabrication or plagiarism; the other half result from honest mistakes.
More problematic is that approximately two percent of researchers have admitted to falsifying, fabricating or modifying data, yet the affected studies have not been retracted. In other words, questionable and unreliable research remains in the scientific literature, where it can influence other research and policy decisions.
“Publish or perish” is a familiar catchphrase describing the expectation that researchers demonstrate productivity through published work. In addition, to respond to the very real public safety needs of the current pandemic, the whole research system has been forced to move faster: everything from funding and the selection of projects to research ethics review, peer review and the publication of studies has been accelerated. Although some researchers will likely try to take advantage of this situation, most feel an individual and collective responsibility to help by conducting good research that makes a difference for society.
Can we actually trust scientific research if it is fallible?
Fortunately, science does not rely on only a few studies or publications. In any field, there are numerous studies being conducted, and they are evaluated and critiqued by other experts in the field. Over time, results can be validated or invalidated, conclusions can be drawn and knowledge produced.
COVID-19 has arguably changed this knowledge development process. In the media, we now regularly see stories discussing preliminary findings from “pre-clinical studies” that show some sort of hope for a particular therapy, with the implication that a cure is “just around the corner.”
To make matters worse, there is also significant political pressure to develop vaccines in record time, even if this means cutting corners. Not only does this create unrealistic expectations, it may actually limit scientific creativity and innovation, and result in poorly designed or conducted studies that do not translate into safe and effective vaccines.
The reality of science is that it takes time to explore the unknown, to determine what works and what doesn’t. Apparent “false starts” and “dead ends” that result in study failures are a normal part of a process that, when done honestly, leads to discoveries and innovations that have changed the world.
Why should we talk about limitations of scientific research?
Sound health-policy decisions – whether about public health confinement or drug development – are justified when they are based on trustworthy science and presented transparently. The Lancetgate scandal and the story of hydroxychloroquine point to the problems that arise when scientists are not trustworthy.
But in this specific case, the scientific process and its self-correcting system of community critique did correct the scientific record. When researchers read the Surgisphere-based papers in scientific journals, they wrote to the editors and went public, calling for the papers to be retracted.
It is actually more problematic when science does not correct its own mistakes and instead sweeps them under the rug. The very transparency that is at the heart of the scientific process, along with the honesty to state when things do not work, is what enables the production of trustworthy knowledge. This advances the public good, and it is how scientists show that they can and should be trusted.