Science & Health

Success without substance

New Zealander Nigel Richards has won the French Scrabble championship twice. More remarkable than the double win is that Richards doesn’t speak French. He applied his prodigious brain to memorising words from the French dictionary, bypassing the need for understanding.

In 2022, The Lancet medical journal achieved a feat with parallels to Richards’ Scrabble wins: the highest ‘impact factor’ of any scientific journal in history. This number does not measure noble goals such as improvements in the length or quality of life. Instead, the impact factor is a numbers game tangential to the purpose of medical research. Like Richards’ Scrabble game, it is success without substance.

The impact factor is calculated using the total number of times scientists have referenced papers from the journal, so it measures attention, both good and bad. The average journal article in medicine gets 15 citations over 10 years. As an example of bad attention, The Lancet’s famously discredited study that linked the MMR vaccine to autism has more than 1,800 citations.
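The standard two-year impact factor is simple arithmetic: citations received in one year to a journal’s papers from the previous two years, divided by the number of citable papers the journal published in those two years. A minimal sketch (the journal figures below are made up for illustration, not real Lancet data):

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year impact factor: citations in the target year to papers
    published in the previous two years, divided by the number of
    citable papers from those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 400 citable papers over two years that
# attracted 80,000 citations this year.
print(impact_factor(80_000, 400))  # 200.0
```

Note that nothing in the formula asks whether a citation is approving, critical, or mistaken; every mention counts the same.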

Citations are often inaccurate, with researchers referencing findings that were not in the paper, or misunderstanding the paper. Many citations are fleeting, with authors using long lists of citations as a way of demonstrating their in-depth knowledge of a field.

The desire to win the impact factor game has predictably led to bad behaviour, with journals seeking to inflate their impact factor by fiddling their data. Some citations are even corrupt, with journals and reviewers manipulating citations to make themselves look better.

Real improvements are what journals should aspire to, but measuring how a journal has helped people’s lives is extremely difficult. Instead, we are lumped with the impact factor that is simple to measure, but measures nothing of value.

To win the impact factor game, journals need to publish papers that will garner a lot of attention. This creates an incentive for journals to publish headline-grabbing papers about new breakthroughs, while important papers debunking existing treatments can find it harder to get in a ‘top’ journal.


This also creates an incentive for researchers to work on new and exciting breakthroughs. But there’s a lot of much-needed research on relatively mundane parts of health. For example, governments could save an enormous amount of money if they stopped providing treatments that have no scientific evidence.

One headline-grabbing paper published by The Lancet has had 5,878 citations – the Scrabble equivalent of playing ‘quiz’ on a triple letter score. It grabbed the headlines because it was an early study of the risks of dying from COVID-19 from a hospital in Wuhan, China. But the paper contains a serious flaw: its calculations excluded patients who were still alive, creating an enormous potential for reverse-survivor bias to skew the results.
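A toy simulation (my own illustration with invented numbers, not the paper’s data or method) shows how this kind of exclusion can inflate a fatality estimate. Early in an outbreak, deaths tend to be known sooner than recoveries, so counting only patients with a resolved outcome over-weights deaths:

```python
# Assumptions (all invented for illustration): true fatality rate 5%,
# a death is known ~7 days after admission, a recovery ~10 days after,
# admissions spread uniformly over the first 14 days, snapshot at day 14.
import random

random.seed(1)
TRUE_FATALITY = 0.05
patients = []
for _ in range(10_000):
    admitted = random.uniform(0, 14)              # admission day
    dies = random.random() < TRUE_FATALITY
    outcome_day = admitted + (7 if dies else 10)  # day the outcome is known
    patients.append((dies, outcome_day))

snapshot = 14
# Excluding still-hospitalised patients, as the flawed calculation did:
resolved = [dies for dies, day in patients if day <= snapshot]
biased_rate = sum(resolved) / len(resolved)
print(f"biased estimate: {biased_rate:.1%} vs true {TRUE_FATALITY:.1%}")
```

Under these assumptions the “resolved-only” estimate comes out well above the true 5%, because a larger share of the deaths than of the recoveries have been observed by the snapshot date.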

For scientists, this flaw is easy to spot, so it’s not clear why it wasn’t spotted in the peer review process, where other experts check over the paper before publication. But The Lancet has also missed other flaws, for example another headline-grabbing COVID-19 paper that needed readers to point out impossibilities in the data. Possibly the urgency of the pandemic meant that peer review was trumped by attention.

It’s not just journals that are distracted by flimsy numbers. Scientists are also vulnerable to citation competitions rather than competing to do the best science. Scientists can boost their citation scores by citing their own work. Bigger boosts come from working together, and citation cartels have been discovered where groups of scientists make a pact to cite each other’s papers.

Universities are also prone to meaningless metrics, as they compete in international rankings tables such as QS, Times Higher Education and ARWU. The absurdity of summing up an entire university in a single number is something even primary school children could grasp. Nevertheless, universities — the pinnacle of education — are enslaved to these numbers. And what are the tables based on? Citations feature heavily, along with other numbers that are easy to measure but do not measure the quality of education.

But recently two heavyweight US law schools have withdrawn from the biggest league table used in the US. This is a bold decision as league tables can influence student numbers and higher education policy. The schools took a stand because the tables are “using a misguided formula that discourages law schools from doing what is best for legal education.”

Scientists, journals and universities have become slaves to misguided formulas based on meaningless data. Science is one of humanity’s proudest achievements, but scientists are human and have become distracted by prestige games that do nothing to advance science.

Now more than ever, science needs to be performing at its peak, as humanity deals with its biggest ever challenges. It’s time to stop the counting games and work on what really counts.

Originally published by 360info. It is republished under Creative Commons.



About Adrian Barnett

Adrian Barnett is Professor at the Faculty of Health, School of Public Health & Social Work, Queensland University of Technology. He graduated from University College London with a BSc in Statistics in 1994. After that, he worked for SmithKline Beecham and the Medical Research Council as a statistician before coming to Australia to do a PhD. He completed his PhD in Mathematics in 2002.
