This year, Facebook and Twitter allowed a video to circulate on their platforms in which Michael J. Knowles, a right-wing pundit, called for the eradication of “transgenderism.” The Conservative Political Action Conference, which hosted the talk, said in its social media posts promoting the video that the talk was “all about the left’s attempt to erase biological women from modern society.”
None of this was censored by the tech platforms, because neither Mr. Knowles nor CPAC violated the platforms’ hate speech rules, which prohibit direct attacks on people based on who they are. But by allowing such statements to spread, the social media companies did something that should perhaps worry us even more: they stoked fear of a marginalized group.
Almost all technology platforms developed extensive, detailed rules prohibiting hate speech after discovering that the first thing users do on a new social network is hurl slurs at one another. The European Union even monitors the rate at which tech platforms remove hate speech.
But fear, even more than hatred, is the weapon of choice for leaders who want to provoke violence. Hate is often part of the equation, of course, but fear is almost always the key ingredient when people feel they must lash out to defend themselves.
Understanding the distinction between fear-mongering and hate speech is critical as we collectively grapple with how to govern global internet platforms. Most tech platforms do not take down false, terrifying claims such as “Antifa is coming to invade your city” or “Your political enemies are pedophiles coming for your kids.” But by allowing such lies to spread, the platforms let some of the most dangerous forms of speech permeate our society.
Susan Benesch, executive director of the Dangerous Speech Project, said genocidal leaders often use fear of an impending threat to drive groups toward pre-emptive violence. Those who commit the violence need not hate the people they attack. They just have to fear the consequences of not attacking.
For example, before the 1994 Rwandan genocide, Hutu politicians told Hutus that they were about to be exterminated by Tutsis. During the Holocaust, Nazi propagandists declared that the Jews intended to exterminate the German people. Before the genocide in Bosnia, Serbs were warned to protect themselves against a fundamentalist Muslim threat plotting genocide against them.
“I was stunned at how similar this rhetoric is from case to case,” Ms. Benesch told me in an interview for The Markup. “It’s like there’s this horrible school they all go to.” The main characteristic of dangerous speech, she argued, is that it “persuades people to see other members of a group as a terrible threat. That makes violence seem acceptable, necessary or even virtuous.”
Fear speech is much less studied than hate speech. In 2021, a team of researchers from the Indian Institute of Technology and MIT published the first large-scale quantitative study of fear speech.
In an analysis of two million WhatsApp messages in public Indian chat groups, they found that fear speech was remarkably difficult for automated systems to detect because it does not always contain the derogatory words that typify hate speech. “We have determined that many of them are based on factually incorrect information designed to mislead the reader,” wrote the paper’s authors, Punyajoy Saha, Binny Mathew, Kiran Garimella and Animesh Mukherjee. (Mr. Garimella moved from MIT to Rutgers after the paper’s publication.) Human judgment is often needed to distinguish true fears from false ones, but the tech platforms often don’t invest the time or develop the local knowledge needed to make that distinction.
Three of the authors and a new contributor, Narla Komal Kalyan, followed up this year with a comparison of fear and hate speech on the right-wing social network Gab. They found that the “non-toxic and argumentative nature” of fear speech draws more engagement than hate speech does.
So how do we inoculate ourselves against fear speech on social media that can incite violence? The first step is to identify it and recognize it for the cynical tactic it is. Tech platforms should invest in more people who can fact-check and add context and counterpoints to posts that stoke false fears.
Another approach is to wean social media platforms off their reliance on so-called engagement algorithms, which are designed to keep people on a site as long as possible and which ultimately amplify outrageous and divisive content. The European Union is already taking a step in that direction by requiring large online platforms to offer users at least one alternative recommendation algorithm.
Ravi Iyer, who worked on the same team as the Facebook whistleblower Frances Haugen, is now the managing director of the Psychology of Technology Institute, a joint project of the Neely Center at the University of Southern California’s Marshall School of Business and the Haas School of Business at the University of California, Berkeley, that explores how technology can be used to benefit mental health. In a concept paper published last month, he, Jonathan Stray and Helena Puig Larrauri suggested that platforms could reduce destructive conflict by de-emphasizing some of the metrics that drive engagement.
That could mean that for certain hot-button topics, platforms wouldn’t automatically boost posts that performed well on typical engagement metrics, such as the number of comments, shares and time spent on a post. After all, those statistics don’t necessarily mean that users liked the post; they may have engaged with it because they found it offensive or simply because it grabbed their attention.

Instead, the researchers propose, social media algorithms could boost posts that users explicitly say they find valuable.
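To make the idea concrete, here is a minimal sketch of what such re-ranking could look like, written in Python. Everything in it is a hypothetical illustration: the field names, the weights and the is_hot_button flag (a stand-in for some topic classifier) are my assumptions, not anything the researchers or the platforms have published.

    from dataclasses import dataclass

    @dataclass
    class Post:
        comments: int         # engagement signal
        shares: int           # engagement signal
        dwell_seconds: float  # engagement signal: time users spent on the post
        value_votes: int      # users who explicitly marked the post as valuable
        impressions: int      # times the post was shown
        is_hot_button: bool   # hypothetical topic classifier's judgment

    def ranking_score(post: Post) -> float:
        """Score a post for feed ranking.

        For hot-button topics, ignore raw engagement (which can reward
        outrage) and rank only on explicitly stated value.
        """
        if post.impressions == 0:
            return 0.0
        explicit_value = post.value_votes / post.impressions
        if post.is_hot_button:
            return explicit_value  # engagement carries no weight here
        # Illustrative weights; a real system would tune or learn these.
        engagement = (0.5 * post.comments + 0.3 * post.shares) / post.impressions
        engagement += 0.01 * post.dwell_seconds
        return 0.7 * engagement + 0.3 * explicit_value

    # An outrage-bait post on a hot-button topic scores low despite heavy
    # engagement, because few users explicitly marked it as valuable.
    bait = Post(comments=900, shares=400, dwell_seconds=45.0,
                value_votes=3, impressions=10_000, is_hot_button=True)
    print(ranking_score(bait))  # 0.0003

The point of the sketch is only the structure: on contested topics, comments and shares stop counting toward a post’s rank, and only explicit “this was worth my time” signals move it up.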
Facebook has quietly pursued this approach for political content. In April, the company said it was “continuing to move away from ranking based on engagement” and instead giving more weight to learning from users “what’s informative, worthwhile or meaningful.”
Facebook’s change could be one reason that Mr. Knowles’s video received far fewer views on Facebook than on Twitter, which has not adopted such an algorithmic change.
But in the end, algorithms will not save us. They can demote fear speech, but they cannot erase it. We, the users of the platforms, also have a role to play in challenging fear-mongering through counterspeech, in which leaders and bystanders push back against fear-based incitement. The goal of counterspeech is not necessarily to change the minds of true believers but to provide a counter-narrative for those watching from the sidelines.
Humor can be a good defense. In Kenya, the popular TV show “Vioja Mahakamani” tested an inoculation strategy in 2012 by having its characters mock community leaders’ attempts to stereotype different groups. Viewers polled in focus groups came away more skeptical of politicians who promote fear to incite violence.
Fighting fear will not be easy. But it is perhaps the most important work we can do to prevent online outrage from turning into real violence.