“A lie can travel halfway around the world before the truth can get its boots on.” This quote appears in many forms. In some variants the footwear changes; in others, the truth is struggling to get its pants on.
Regardless of the details, the sentiment encapsulates a key challenge of misinformation. By the time the meticulous task of fact-checking is complete and the correction has been disseminated, the misinformation has already spread widely and wrought all sorts of mischief.
Consequently, misinformation researchers speak wistfully of the “holy grail of fact-checking” – automatically detecting and debunking misinformation in one fell swoop. Machine learning offers the potential of both speed and scale – the ability to identify misinformation the instant it appears online, and the technical capacity to distribute solutions at the scale required to match the size of the problem.
But the holy grail quest faces a seemingly insurmountable hurdle. Misinformation evolves and sprouts new forms. How can you detect a myth before you even know what it is or what form it will take?
Misinformation and climate change
When it comes to misinformation about climate change, you often hear the terms “whack-a-mole” or “climate zombies” – typically expressed through clenched teeth. These refer to the fact that climate myths never seem to die, persistently rearing up to be debunked over and over. Indeed, the misleading arguments found in climate misinformation in the early 1990s are the same myths we now hear in 2021.
While this can be annoying, climate zombies present a research opportunity. The fact that climate misinformation shows so much stability makes it possible to train a machine to detect misinformation claims.
A number of years ago, my colleagues Travis Coan and Mirjam Nanko from Exeter University, Constantine Boussalis from Trinity College Dublin, and I began our quest for the fact-checking holy grail – specifically focused on misinformation about climate change.
The first step in this process was building a taxonomy of contrarian claims. As we catalogued and refined the many claims we were seeing in climate misinformation, five main categories became clear – it’s not happening; it’s not us; it’s not bad; solutions won’t work; and experts are unreliable.
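As a rough sketch, the five top-level categories can be represented as a nested mapping. The category names below follow the five disbeliefs just listed, but the example sub-claims are invented placeholders, not the actual codes from our taxonomy.

```python
# A sketch of the claim taxonomy as a nested mapping.
# The five top-level categories come from the article; the example
# sub-claims are invented placeholders, not the real taxonomy codes.
TAXONOMY = {
    "not_happening": {
        "label": "Global warming is not happening",
        "examples": ["Ice is not melting", "It's cold out today"],
    },
    "not_us": {
        "label": "Humans are not causing global warming",
        "examples": ["It's the sun", "Climate has changed before"],
    },
    "not_bad": {
        "label": "Climate impacts are not bad",
        "examples": ["CO2 is plant food"],
    },
    "solutions_wont_work": {
        "label": "Climate solutions won't work",
        "examples": ["Renewables are unreliable"],
    },
    "experts_unreliable": {
        "label": "Climate science and scientists are unreliable",
        "examples": ["Scientists are biased"],
    },
}

def top_level_categories():
    """Return the five top-level category labels."""
    return [entry["label"] for entry in TAXONOMY.values()]
```

Structuring the taxonomy as data (rather than prose) is what lets it double as a label set for a classifier later.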
These five categories of climate misinformation are noteworthy because they directly mirror the five key climate beliefs developed from survey data by Ed Maibach – it’s happening; it’s us; it’s bad; there’s hope; and experts agree. Consequently, we called our five categories of climate misinformation the five key climate disbeliefs.
Once we had our taxonomy, it was time to roll up our sleeves and start training the machine.
The principle of supervised machine learning is straightforward – take a paragraph of text from known sources of climate misinformation, and match it to a contrarian claim in our taxonomy (if there is a match). Then repeat that same process tens of thousands of times, until our machine is sufficiently trained to detect each misinformation claim. (Easy, right?)
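As a toy illustration of that paragraph-in, claim-category-out principle, here is a minimal text classifier. Everything in this sketch is invented for illustration – the tiny Naive Bayes model and the three labelled paragraphs are stand-ins, not our actual model or training data.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

class TinyClaimClassifier:
    """A minimal multinomial Naive Bayes text classifier (stdlib only).

    Stands in for the real model purely to illustrate the shape of the
    task: paragraph in, claim category out.
    """

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)   # per-class word tallies
        self.class_counts = Counter(labels)       # class frequencies (priors)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(tokenize(text))
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        words = tokenize(text)
        total = sum(self.class_counts.values())
        best_label, best_score = None, -math.inf
        for label, n in self.class_counts.items():
            # log prior plus Laplace-smoothed log likelihood of each word
            score = math.log(n / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:
                score += math.log((self.word_counts[label][w] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Invented (paragraph, claim-category) training pairs.
train = [
    ("temperatures have not risen and the ice is growing", "not_happening"),
    ("the sun drives warming not human emissions", "not_us"),
    ("climate scientists are corrupt and their models are unreliable",
     "experts_unreliable"),
]
texts, labels = zip(*train)
model = TinyClaimClassifier().fit(texts, labels)
print(model.predict("arctic ice is growing and temperatures have not risen"))
# prints "not_happening"
```

The real work lies in the "repeat tens of thousands of times" step – building a large, carefully labelled training set – not in the classifier itself.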
Fortunately, we were able to draw upon the help of the climate-literate Skeptical Science team (which had form in crowd-sourcing content analysis of large climate datasets).
Once we had trained our machine to detect and categorise different misinformation claims, we fed our model 20 years’ worth of climate misinformation – more than 250,000 articles from 20 prominent conservative think-tank websites and 33 blogs. It’s the largest content analysis to date on climate misinformation, making it possible to construct a two-decade history of climate misinformation.
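The aggregation step behind that two-decade history can be sketched like this. The years and predicted labels below are invented; in reality, each of the 250,000+ articles is split into paragraphs and each paragraph classified first.

```python
from collections import Counter, defaultdict

# Invented (year, predicted_category) pairs, standing in for the model's
# per-paragraph predictions across two decades of articles.
predictions = [
    (1999, "not_happening"), (1999, "experts_unreliable"),
    (2008, "experts_unreliable"), (2008, "not_us"),
    (2018, "solutions_wont_work"), (2018, "solutions_wont_work"),
    (2018, "experts_unreliable"),
]

# Tally category counts per year, then convert counts to shares,
# giving a year-by-year picture of which claims dominate.
counts_by_year = defaultdict(Counter)
for year, category in predictions:
    counts_by_year[year][category] += 1

shares_by_year = {
    year: {cat: n / sum(tally.values()) for cat, n in tally.items()}
    for year, tally in counts_by_year.items()
}

# In this toy data, 2 of the 3 paragraphs from 2018 target solutions.
print(round(shares_by_year[2018]["solutions_wont_work"], 3))  # prints 0.667
```

Plotting those per-year shares is what turns a pile of classified paragraphs into a history of climate misinformation.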
The results weren’t what I expected at all.
The erosion of public trust in climate scientists
During the past 15 years, I’ve been debunking scientific climate misinformation – the type of myths that fell under the categories “it’s not happening”, “it’s not us”, or “it’s not bad”.
It turns out these were the least common forms of climate misinformation. Instead, the largest category of climate misinformation was attacks on scientists and on climate science itself.
Climate misinformation isn’t about providing its own alternative explanation of what’s happening to our climate. Instead, it’s focused on casting doubt on the integrity of climate science, and eroding public trust in climate scientists.
This has significant consequences for scientists, educators, and fact-checkers. The majority of our efforts have focused on debunking scientific myths such as “global warming isn’t happening” or “climate change is caused by the sun”.
But that’s not where misinformation is focused – the focus is on attacking scientists and science itself. There’s a dearth of research into understanding and countering this type of misinformation, let alone public engagement and education campaigns to counter its damage.
Another strong trend was the growing prevalence of misinformation targeting climate solutions – claims that climate policies are harmful, attacks on renewables, or the spruiking of fossil fuels. This category accounts for an increasingly dominant share of climate misinformation, particularly among conservative think-tanks, which tend to focus more on climate policy than science denial.
The overall pattern in our data is clear – solutions denial is the future of climate misinformation.
Our research was recently published in the Nature journal Scientific Reports. This was an important first step on our quest for the fact-checking holy grail. The next step is to synthesise our machine learning research with critical thinking research into deconstructing and analysing climate misinformation.
This task requires bringing together the vastly different disciplines of computer science and critical thinking philosophy. This is challenging, but interdisciplinary solutions are essential when dealing with complex, interconnected issues like misinformation.
We still have a long way to go, but for now it’s important to recognise the lessons already learnt while pursuing this quest. Not to mention the friends made along the way.
This article was originally published on Monash Lens.
Photo by Mika Baumeister on Unsplash.