We’ve entered the post-truth era, where facts are only hearsay.
Over the last few years, the dissemination of misinformation has become a cultural phenomenon. “Fake news” and “alternative facts” stand accused of influencing the US election outcome, of misleading people over Brexit and the NHS, and of fuelling climate-change conspiracy theories and Holocaust denial.
Behind its inexorable rise lie the democratisation of the internet, the backlash against social and political elitism, content created for the purpose of confirming biases, and the echo-chamber effect of social media, with its power to manipulate and entrench, distort and polarise opinion.
Much of the blame has been laid at technology’s door, with Facebook copping most of the flak. And yet despite Mark Zuckerberg being hauled before Congress for a five-hour face-off, what’s being done? Has the fightback even begun?
Facebook, under the most pressure, has put together specialist teams to stop the spread of misinformation and propaganda. And under new rules, the purchase of political adverts on the site will require identity and location verification. Google has announced that it will commit $300 million over the next three years to help stamp out the problem. Meanwhile Twitter has gone the other way, with a reluctance to remove false reports, stating that it would be dangerous for staff to serve as “arbiters of truth”.
Away from the social media giants, research centres are looking into solutions to combat computational propaganda. A new project from MIT’s CSAIL (Computer Science and Artificial Intelligence Lab) and QCRI (the Qatar Computing Research Institute) aims to identify sources of fake news before they can spread. The intention is to identify and target sites that spread misinformation on a consistent basis.
In an extensive study, the project’s researchers trained a machine-learning model on different combinations of over 900 possible variables. But even the best model labelled news outlets’ factuality as “low”, “medium” or “high” accurately only 65 per cent of the time, which shows there is still some way to go before technology can intervene reliably.
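The evaluation described above can be sketched in miniature. The toy feature vectors, labels and nearest-neighbour classifier below are purely illustrative assumptions, not the MIT/QCRI model, which draws hundreds of features from articles, Wikipedia pages and social media activity; the point is simply how a three-class factuality rating is scored for accuracy.

```python
# Hypothetical sketch: rating outlets "low"/"medium"/"high" and measuring
# accuracy. All data and the classifier are invented for illustration.

# Each outlet is a (feature_vector, factuality_label) pair -- toy values.
outlets = [
    ((0.9, 0.1), "high"),
    ((0.8, 0.2), "high"),
    ((0.5, 0.5), "medium"),
    ((0.4, 0.6), "medium"),
    ((0.1, 0.9), "low"),
    ((0.2, 0.8), "low"),
]

def predict(features, training):
    """1-nearest-neighbour: return the label of the closest training outlet."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda t: dist(t[0], features))[1]

def accuracy(data):
    """Leave-one-out accuracy over the three factuality classes."""
    correct = 0
    for i, (feats, label) in enumerate(data):
        rest = data[:i] + data[i + 1:]
        if predict(feats, rest) == label:
            correct += 1
    return correct / len(data)

print(f"accuracy = {accuracy(data=outlets):.2f}")
```

On real outlets, with noisier features and many more of them, the same accuracy calculation is what yields figures like the 65 per cent reported by the study.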
Offline, governments are turning to the law to scotch the spread of fake news. Much of this seems reactionary, a response by legislators who consider companies such as Facebook and Google to have dismissed past concerns out of hand. Malaysia has passed laws that include fines of up to £88,000 and six-year jail terms. Under Thailand’s cybersecurity law, you can be incarcerated for seven years for spreading falsehoods. The Philippines is mulling over anti-fake news legislation that would see offenders receive 20-year sentences.
And here begins the problem. There’s no real strategy for addressing the issue. Different institutions are taking different actions, with next to no decisive consensus. Right now, the issue of fake news raises more questions than it answers. Is it a problem for the social media giants, for technologists, or for the government? Do such laws represent a restriction of free speech? Is this an assault on journalism? Are we witnessing a first step in the direction of Internet regulation?
The truth is that no one yet has a definitive answer. So, where does that leave us? Well, what’s likely to happen is that social media companies, technologists and policymakers will continue to tackle fake news in different ways.
But the biggest concern — maybe bigger even than the spread of fake news itself — is the prospect of governments trying to legislate their way out of the problem. It’s difficult to see a version of these laws that doesn’t clamp down on journalism and curb free speech. As a result, the best we can hope for is that technology finds a way to combat the problem it has been largely responsible for creating.