Early COVID-19 research is riddled with poor methods and low-quality results − a problem for science the pandemic worsened but didn’t create

  • Written by Dennis M. Gorman, Professor of Epidemiology and Biostatistics, Texas A&M University

Early in the COVID-19 pandemic, researchers flooded journals[1] with studies about the then-novel coronavirus. Many publications streamlined the peer-review process for COVID-19 papers while keeping acceptance rates relatively high. The assumption was that policymakers and the public would be able to identify valid and useful research among a very large volume of rapidly disseminated information.

However, in my review of 74 COVID-19 papers published in 2020 in the top 15 generalist public health journals listed in Google Scholar, I found that many of these studies used poor-quality methods[2]. Several other[3] reviews of[4] studies published[5] in medical journals have also shown that much early COVID-19 research relied on poor research methods.

Some of these papers have been cited many times. For example, the most highly cited public health publication listed on Google Scholar used data[6] from a sample of 1,120 people, primarily well-educated young women, mostly recruited from social media over three days. Findings based on a small, self-selected convenience sample cannot be generalized to a broader population. And since the researchers ran more than 500 analyses of the data, many of the statistically significant results are likely chance occurrences. However, this study has been cited over 11,000 times[7].
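To see why running more than 500 analyses makes chance findings almost inevitable, consider the arithmetic: at the conventional 0.05 significance threshold, about 5% of tests performed on pure noise will come out “significant,” so 500 tests would be expected to yield roughly 25 spurious results. The short Python simulation below is a minimal sketch of this multiple-comparisons problem, reusing the sample size and number of analyses quoted above; it is a hypothetical illustration, not a reconstruction of the study’s actual analysis.

```python
import math
import numpy as np

# A minimal sketch of the multiple-comparisons problem, not the study's
# actual analysis. Generate pure noise for a sample the size of the survey
# described above (1,120 people), run 500 independent correlation tests,
# and count how many come out "significant" at the 0.05 threshold.
rng = np.random.default_rng(seed=0)

n_people = 1_120  # sample size quoted above
n_tests = 500     # number of analyses quoted above
alpha = 0.05      # conventional significance threshold

false_positives = 0
for _ in range(n_tests):
    # Two variables with no true relationship: independent random noise.
    x = rng.normal(size=n_people)
    y = rng.normal(size=n_people)
    r = np.corrcoef(x, y)[0, 1]
    # Fisher z-transform: under the null, z is approximately standard normal.
    z = math.atanh(r) * math.sqrt(n_people - 3)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    if p < alpha:
        false_positives += 1

print(f"'Significant' results from pure noise: {false_positives} of {n_tests}")
# Expected count is alpha * n_tests = 25, even though no real effects exist.
```

Preregistered hypotheses, corrections for multiple comparisons and complete reporting of every analysis run are the standard safeguards against mistaking such chance results for real effects.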

A highly cited paper means a lot of people have mentioned it in their own work. But a high number of citations is not strongly linked to research quality[8], since researchers and journals can game and manipulate these metrics. High citation of low-quality research increases the chance that poor evidence is being used to inform policies, further eroding public confidence in science.

Methodology matters

I am a public health researcher[9] with a long-standing interest in research quality and integrity. This interest stems from a belief that science has helped solve important social and public health problems. Unlike the anti-science movement spreading misinformation[10] about such successful public health measures as vaccines, I believe rational criticism is fundamental to science.

The quality and integrity of research depends to a considerable extent on its methods. Each type of study design needs to have certain features in order for it to provide valid and useful information.

For example, researchers have known for decades[11] that for studies evaluating the effectiveness of an intervention, a control group[12] is needed to know whether any observed effects can be attributed to the intervention.

Systematic reviews[13] pulling together data from existing studies should describe how the researchers identified which studies to include, assessed their quality and extracted the data, and should state whether the protocol was preregistered. These features are necessary to ensure the review covers all the available evidence and tells a reader which of it is worth attending to and which is not.

Certain types of studies, such as one-time surveys of convenience samples that aren’t representative of the target population, collect and analyze data in a way that does not allow researchers to determine whether one variable caused a particular outcome[14].

Systematic reviews involve thoroughly identifying and extracting information from existing research.

All study designs have standards[15] that researchers can consult. But adhering to standards slows research down. Having a control group doubles the amount of data that needs to be collected, and identifying and thoroughly reviewing every study on a topic takes more time than superficially reviewing some. Representative samples are harder to generate than convenience samples, and collecting data at two points in time is more work than collecting them all at the same time.

Studies comparing[16] COVID-19 papers[17] with non-COVID-19[18] papers published in the same journals found that COVID-19 papers tended to use lower-quality methods and were less likely to adhere to reporting standards. COVID-19 papers rarely had predetermined hypotheses or plans for how they would analyze their data and report their findings. This meant there were no safeguards against dredging the data[19] to find “statistically significant” results that could be selectively reported.

Such methodological problems were likely overlooked in the considerably shortened[20] peer-review process[21] for COVID-19 papers. One study estimated the average time from submission to acceptance of 686 COVID-19 papers to be 13 days, compared with 110 days[22] for 539 pre-pandemic papers from the same journals. In my study, I found that two online journals that published a very high volume of methodologically weak COVID-19 papers had peer-review processes of about three weeks[23].

Publish-or-perish culture

These quality control issues were present before the COVID-19 pandemic. The pandemic simply pushed them into overdrive.

Journals tend to favor positive, “novel” findings[24]: that is, results that show a statistical association between variables and supposedly identify something previously unknown. Since the pandemic was in many ways novel, it provided an opportunity for some researchers to make bold claims about how COVID-19 would spread, what its effects on mental health would be, how it could be prevented and how it might be treated.

Many researchers feel pressure to publish papers in order to advance their careers. South_agency/E+ via Getty Images[25]

Academics have worked in a publish-or-perish[26] incentive system[27] for decades, where the number of papers they publish is part of the metrics used to evaluate employment, promotion and tenure. The flood of mixed-quality COVID-19 information[28] afforded researchers an opportunity to increase their publication counts and boost citation metrics, as journals sought out and rapidly reviewed COVID-19 papers, which were more likely to be cited than non-COVID-19 papers.

Online publishing has also contributed to the deterioration in research quality. Traditional academic publishing was limited in the quantity of articles it could generate because journals were packaged in a printed, physical document usually produced only once a month. In contrast, some of today’s online[29] mega-journals[30] publish thousands of papers a month. Low-quality studies rejected by reputable journals can still find an outlet happy to publish them for a fee.

Healthy criticism

Criticizing the quality of published research is fraught with risk. It can be misinterpreted as throwing fuel on the raging fire of anti-science. My response is that a critical and rational approach to the production of knowledge is, in fact, fundamental to the very practice of science and to the functioning of an open society[31] capable of solving complex problems such as a worldwide pandemic.

Publishing a large volume of misinformation disguised as science during a pandemic obscures true and useful knowledge[32]. At worst, this can lead to bad public health practice and policy.

Science done properly produces information that allows researchers and policymakers to better understand the world and test ideas about how to improve it. This involves critically examining the quality[33] of a study’s design, statistical methods, reproducibility and transparency, not the number of times it has been cited[34] or tweeted about.

Science depends on a slow, thoughtful and meticulous approach[35] to data collection, analysis and presentation, especially if it intends to provide information to enact effective public health policies. Likewise, thoughtful and meticulous peer review is unlikely with papers that appear in print only three weeks after they were first submitted for review. Disciplines that reward quantity of research over quality are also less likely to protect scientific integrity during crises.

Rigorous science requires careful deliberation and attention, not haste. Assembly/Stone via Getty Images[36]

Public health draws heavily on disciplines that are experiencing[37] replication[38] crises[39], such as psychology, biomedical science and biology. It is similar to these disciplines in terms of its[40] incentive structure, study designs and analytic methods, and its inattention to transparent methods and replication. Much of the public health research on COVID-19 suggests that the field suffers from similarly poor-quality methods.

Reexamining how the discipline rewards its scholars and assesses their scholarship can help it better prepare for the next public health crisis.

References

  1. ^ flooded journals (doi.org)
  2. ^ poor-quality methods (doi.org)
  3. ^ Several other (doi.org)
  4. ^ reviews of (doi.org)
  5. ^ studies published (doi.org)
  6. ^ used data (doi.org)
  7. ^ over 11,000 times (scholar.google.com)
  8. ^ strongly linked to research quality (doi.org)
  9. ^ public health researcher (scholar.google.com)
  10. ^ spreading misinformation (theconversation.com)
  11. ^ known for decades (www.sfu.ca)
  12. ^ control group (www.britannica.com)
  13. ^ Systematic reviews (doi.org)
  14. ^ caused a particular outcome (doi.org)
  15. ^ study designs have standards (www.equator-network.org)
  16. ^ Studies comparing (doi.org)
  17. ^ COVID-19 papers (doi.org)
  18. ^ with non-COVID-19 (doi.org)
  19. ^ dredging the data (doi.org)
  20. ^ considerably shortened (doi.org)
  21. ^ peer-review process (doi.org)
  22. ^ 13 days, compared with 110 days (doi.org)
  23. ^ about three weeks (doi.org)
  24. ^ positive, “novel” findings (doi.org)
  25. ^ South_agency/E+ via Getty Images (www.gettyimages.com)
  26. ^ publish-or-perish (doi.org)
  27. ^ incentive system (doi.org)
  28. ^ flood of mixed-quality COVID-19 information (theconversation.com)
  29. ^ today’s online (doi.org)
  30. ^ mega-journals (doi.org)
  31. ^ open society (doi.org)
  32. ^ obscures true and useful knowledge (doi.org)
  33. ^ critically examining the quality (doi.org)
  34. ^ number of times it has been cited (doi.org)
  35. ^ slow, thoughtful and meticulous approach (doi.org)
  36. ^ Assembly/Stone via Getty Images (www.gettyimages.com)
  37. ^ experiencing (doi.org)
  38. ^ replication (doi.org)
  39. ^ crises (doi.org)
  40. ^ in terms of its (doi.org)


Read more https://theconversation.com/early-covid-19-research-is-riddled-with-poor-methods-and-low-quality-results-a-problem-for-science-the-pandemic-worsened-but-didnt-create-220635

Metropolitan republishes selected articles from The Conversation USA with permission

Visit The Conversation to see more