Did you know that the wildfires that ravaged Hawaii last summer were caused by a secret “weather weapon” being tested by the US military, and that US NGOs spread dengue fever in Africa? Or that Olena Zelenska, Ukraine's first lady, went on a shopping spree on Fifth Avenue in Manhattan? Or that Narendra Modi, the Prime Minister of India, is endorsed in a new song by Mahendra Kapoor, an Indian singer who died in 2008?
These stories are all fake, of course. They are examples of disinformation: untruths intended to mislead. Such stories are being spread around the world through increasingly sophisticated campaigns. Whizzy artificial intelligence (AI) tools and intricate networks of social media accounts are used to create and share eerily convincing photos, video and audio, blurring the line between fact and fiction. In a year when half the world is holding elections, this fuels fears that technology will make it impossible to combat disinformation, fatally undermining democracy. How worried should you be?
The Internet has made the problem much worse. False information can be spread on social media at low cost; AI also makes it cheap to produce. Much about disinformation is murky. But in a special Science and Technology section, we trace the complex ways in which it is seeded and spread through networks of social media accounts and websites. For example, Russia's campaign against Ms. Zelenska started as a video on YouTube, then spread across African fake news websites and was boosted by other sites and social media accounts. The result is a deceptive veneer of plausibility.
Spreader accounts build followings by posting about football or the British royal family, gaining trust before mixing in misinformation. Much of the research into disinformation tends to focus on a specific topic, on a particular platform, in one language. But it turns out that most campaigns work in similar ways. The techniques used by Chinese disinformation operations to malign South Korean companies in the Middle East, for example, are remarkably similar to those used in Russian-led efforts to spread falsehoods across Europe.
The goal of many operations is not necessarily to promote one political party over another. Sometimes it is simply to pollute the public sphere, or to sow distrust of the media, of governments, and of the very idea that truth is knowable. Hence the Chinese fables about weather weapons in Hawaii, and Russia's attempt to conceal its role in the downing of a Malaysian airliner by promoting several competing narratives.
All this raises concerns that technology, by making disinformation unbeatable, will threaten democracy itself. But there are ways to minimize and control the problem.
Encouragingly, technology is as much a force for good as it is for evil. While AI makes disinformation much cheaper to produce, it can also help with tracking and detection. Even as campaigns become more sophisticated, with each spreader account varying its language just enough to be plausible, AI models can detect stories that are similar. Other tools can spot fake videos by identifying doctored audio, or by looking for signs of real heartbeats, which show up as subtle variations in the skin color of people's foreheads.
Better coordination can also help. In some ways, the situation is analogous to climate science in the 1980s, when meteorologists, oceanographers and earth scientists could tell something was going on, but each could see only part of the picture. Only when they were brought together did the full extent of climate change become clear. Likewise, academic researchers, NGOs, technology companies, media outlets, and government agencies cannot tackle the problem of disinformation alone. With coordination, they can share information and discover patterns, allowing tech companies to label, muzzle or remove misleading content. For example, Facebook's parent company Meta shut down a disinformation operation in Ukraine in late 2023 after receiving a tip from Google.
But deeper understanding also requires better access to data. In today's world of algorithmic feeds, only the technology companies can see who is reading what. Under US law, these companies are not required to share data with researchers. But the EU's new Digital Services Act mandates data sharing, and could serve as a model for other countries. Companies worried about exposing confidential information could let researchers submit programs to be run on the data, rather than sending the data out for analysis.
Such coordination will be easier to achieve in some places than in others. Taiwan, for example, is considered the gold standard for tackling disinformation campaigns. It helps that the country is small, trust in the government is high and the threat from a hostile foreign power is clear. Other countries have fewer resources and weaker trust in institutions. Unfortunately, polarized politics in America means that coordinated efforts to combat disinformation are being portrayed as evidence of a vast left-wing conspiracy to silence right-wing voices online.
One person's fact…
The dangers of disinformation must be taken seriously and studied closely. But keep in mind that they remain uncertain. So far, there is little evidence that disinformation alone can sway the outcome of an election. For centuries there have been people who spread false information, and people who wanted to believe it. Yet societies have usually found ways to cope. Today, disinformation may be taking a new, more sophisticated form. But it has not yet proved itself to be an unprecedented and unstoppable threat.
© 2024, The Economist Newspaper Limited. All rights reserved. From The Economist, published under license. Original content can be found at www.economist.com