BMJ: Time to Assume That Health Research Is Fraudulent Until Proven Otherwise?
Real science is never settled, and anyone who has certainty on such things is not qualified to discuss it.
re: ‘Replication crisis’ spurs reforms in how science studies are done
re: ethics in medicine
Feeling confident about medical advice from your doctor for that new drug or procedure?
Time to assume that health research is fraudulent until proven otherwise?
5 July 2021. Emphasis added. Author Richard Smith was the editor of The BMJ until 2004.
Health research is based on trust. Health professionals and journal editors reading the results of a clinical trial assume that the trial happened and that the results were honestly reported. But about 20% of the time, said Ben Mol, professor of obstetrics and gynaecology at Monash Health, they would be wrong. As I’ve been concerned about research fraud for 40 years, I wasn’t as surprised as many would be by this figure, but it led me to think that the time may have come to stop assuming that research actually happened and is honestly reported, and assume that the research is fraudulent until there is some evidence to support it having happened and been honestly reported. The Cochrane Collaboration, which purveys “trusted information,” has now taken a step in that direction.
...We have long known that peer review is ineffective at detecting fraud, especially if the reviewers start, as most have until now, by assuming that the research is honestly reported...
...We have now reached a point where those doing systematic reviews must start by assuming that a study is fraudulent until they can have some evidence to the contrary. Some supporting evidence comes from the trial having been registered and having ethics committee approval. Andrew Grey, an associate professor of medicine at the University of Auckland, and others have developed a checklist with around 40 items that can be used as a screening tool for fraud (you can view the checklist here).
...Research fraud is often viewed as a problem of “bad apples,” but Barbara K Redman, who spoke at the webinar, insists that it is not a problem of bad apples but bad barrels, if not, she said, of rotten forests or orchards. In her book Research Misconduct Policy in Biomedicine: Beyond the Bad-Apple Approach @AMAZON she argues that research misconduct is a systems problem—the system provides incentives to publish fraudulent research and does not have adequate regulatory processes...
WIND: right off the bat 20% is outright fraud of some sort. Of the remaining 80%, how much is solid science free of conflicts of interest? How much is well done enough to trust? We may be seeing junk science as high as 90% of studies (see below).
Trust the science? Trust the data? Seriously?! That’s for the gullible and has been for a long time now, at least when it comes to medicine. But it surely affects most areas of science and any area where money or status or politics are involved. And money is always involved, if only research grants. Throw in Big Pharma and the corrupt FDA... good luck with that.
In the area of medicine, if there are not at least a bare minimum of two independent double-blind studies free of all conflicts of interest (rare), it should be considered junk science. IMO, with fewer than four independent double-blind studies, it’s not much more than speculation.
What does all this say about COVID vaccines, with their $100 billion financial incentive to not find problems?
Reason.com: How Much Scientific Research Is Actually Fraudulent?
RONALD BAILEY | 9 July 2021
Fraud may be rampant in biomedical research. My 2016 article "Broken Science" pointed to a variety of factors as explanations for why a huge proportion of scientific studies were apparently generating false-positive results that could not be replicated by other researchers. A false positive in scientific research occurs when there is statistically significant evidence for something that isn't real (e.g., a drug cures an illness when it actually does not). The factors considered included issues like publication bias, and statistical chicanery associated with p-hacking, HARKing, and underpowered studies. My article did not address the possibility that the lack of reproducibility could be because a significant proportion of preclinical and clinical biomedical studies were actually fraudulent.
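The mechanics of false positives and p-hacking can be made concrete with a small simulation (my sketch, not from the article): run studies where the drug truly does nothing, and count how often p < 0.05 anyway. Testing one pre-registered outcome gives the expected ~5% false-positive rate; "p-hacking" by measuring ten outcomes and reporting whichever comes up significant inflates it dramatically. All numbers and parameters here are illustrative.

```python
# Simulate studies where the null hypothesis is TRUE (the "drug" does nothing),
# to show how p < 0.05 still produces false positives, and how testing many
# outcomes per study ("p-hacking") inflates the false-positive rate.
import math
import random

def p_value_two_sample(n=30):
    """Two-sample z-test on two groups drawn from the SAME distribution."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Difference of sample means has standard deviation sqrt(2/n).
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    # Two-sided p-value from the normal CDF: Phi(x) = 0.5 * (1 + erf(x/sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
trials = 2000

# Honest: one pre-registered outcome per study.
honest = sum(p_value_two_sample() < 0.05 for _ in range(trials)) / trials

# P-hacked: measure 10 outcomes, report a "finding" if ANY is significant.
hacked = sum(any(p_value_two_sample() < 0.05 for _ in range(10))
             for _ in range(trials)) / trials

print(f"false-positive rate, 1 outcome:   {honest:.1%}")   # near 5%
print(f"false-positive rate, 10 outcomes: {hacked:.1%}")   # near 1 - 0.95**10 = 40%
```

Note that nothing here is fraud in the fabrication sense: every data point is real, yet flexible analysis alone manufactures "significant" results four times out of ten.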
My subsequent article, "Most Scientific Findings Are False or Useless," which reported the conclusions of Arizona State University's School for the Future of Innovation in Society researcher Daniel Sarewitz's distressing essay, "Saving Science," also did not consider the possibility of extensive scientific dishonesty as an explanation for the massive proliferation of false positives. In his famous 2005 article, "Why Most Published Research Findings Are False," Stanford University biostatistician John Ioannidis cited conflicts of interest as one factor driving the generation of false positives but also did not suggest that actual research fraud was a big problem.
How bad is the false-positive problem in scientific research? As I earlier reported, a 2015 editorial in The Lancet observed that "much of the scientific literature, perhaps half, may simply be untrue." A 2015 British Academy of Medical Sciences report suggested that the false discovery rate in some areas of biomedicine could be as high as 69 percent. In an email exchange with me, Ioannidis estimated that the nonreplication rates in biomedical observational and preclinical studies could be as high as 90 percent.
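Ioannidis's 2005 paper cited above derives a formula that explains how estimates like these arise: the positive predictive value (PPV) of a statistically significant finding is PPV = (1−β)R / ((1−β)R + α), where R is the pre-study odds that the hypothesis is true, 1−β is statistical power, and α is the significance threshold. A minimal sketch of that arithmetic (the example odds and power values are illustrative, and this omits the bias term in his full model):

```python
# Ioannidis (2005): how many "significant" findings are actually true?
# PPV = (power * R) / (power * R + alpha), ignoring the bias term of the full model.
def ppv(R, power, alpha=0.05):
    """Positive predictive value of a significant result.

    R      -- pre-study odds that the tested hypothesis is true
    power  -- probability of detecting a true effect (1 - beta)
    alpha  -- significance threshold (false-positive rate under the null)
    """
    return (power * R) / (power * R + alpha)

# Well-powered confirmatory trial of an even-odds hypothesis:
print(f"R=1:1,  power=0.80 -> PPV = {ppv(1.0, 0.80):.0%}")   # ~94% true
# Underpowered exploratory study of a long-shot hypothesis:
print(f"R=1:10, power=0.20 -> PPV = {ppv(0.10, 0.20):.0%}")  # ~29% true
```

Under long-shot priors and low power, most significant findings are false even with no misconduct at all — which is why the nonreplication estimates above can be so high before fraud is even considered.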
...Summarizing their results, an article in Science notes, "More than half of Dutch scientists regularly engage in questionable research practices, such as hiding flaws in their research design or selectively citing literature. And one in 12 [8 percent] admitted to committing a more serious form of research misconduct within the past 3 years: the fabrication or falsification of research results." Daniele Fanelli, a research ethicist at the London School of Economics, tells Science that 51 percent of researchers admitting to questionable research practices "could still be an underestimate."
In an editorial, Ioannidis observes that the zombie anesthesia trials added up to "100% (7/7) in Egypt; 75% (3/4) in Iran; 54% (7/13) in India; 46% (22/48) in China; 40% (2/5) in Turkey; 25% (5/20) in South Korea; and 18% (2/11) in Japan." Taking the number of clinical trials from these countries listed with the World Health Organization's registry and extrapolating from the false trial rates identified by Carlisle, Ioannidis estimates that there are "almost 90,000 registered false trials from these countries, including some 50,000 zombies." Consequently, he concludes that "hundreds of thousands of zombie randomised trials circulate among us." Since randomized controlled trials are the gold standard for clinical research, Ioannidis adds, "One dreads to think of other study designs, for example, observational research, that are even less likely to be regulated and more likely to be sloppy than randomised trials."
WIND: it’s not just the study, it’s who is doing it.
Follow the science? Follow the data? Great mantra for manipulating the masses, so that the government and media can easily impose their will on all matters of policy.
“Publication bias” refers to studies never published because a desired outcome was not seen. It is a huge problem, particularly in medical studies.
Many things could be done to address the problem; here are just a few:
- No drug or device should be legally sold until and unless the entire data set has been made freely available to anyone for at least 6 months.
- Publication bias should be rooted out by requiring publication of all studies.
- Research fraud in medical trials should be a felony.
Of course, none of this applies to climate science, since it’s a rigorous marketplace of ideas. That’s why the science there is settled (see quote at top of article).
See also: Check for publication integrity before misconduct.