
  • Written by Adam G. Klein, Assistant Professor of Communication Studies, Pace University

When a U.S. senator asked Facebook CEO Mark Zuckerberg, “Can you define hate speech?[1]” he raised arguably the most important question social networks face: how to identify extremism inside their communities.

Hate crimes in the 21st century follow a familiar pattern in which an online tirade escalates into violent action. Before opening fire in the Tree of Life synagogue in Pittsburgh, the accused gunman had vented on the far-right social network Gab[2] about Honduran migrants traveling toward the U.S. border, and the alleged Jewish conspiracy behind it all. Then he declared, “I can’t sit by and watch my people get slaughtered. Screw your optics, I’m going in[3].” This pattern of extremists unloading their intolerance[4] online has been a disturbing feature of some recent hate crimes[5]. But most online hate is neither that flagrant nor as easy to spot.

As I found in my 2017 study on extremism in social networks and political blogs[6], rather than overt bigotry, most online hate looks a lot like fear. It’s not expressed in racial slurs or calls for confrontation, but rather in unfounded allegations of Hispanic invaders pouring into the country, black-on-white crime or Sharia law infiltrating American cities. Hysterical narratives such as these have become the preferred vehicle for today’s extremists – and may be more effective at provoking real-world violence than stereotypical hate speech.

[Image: Spreading fear on Facebook. Screenshot from Facebook by The Conversation, CC BY-ND[7]]

The ease of spreading fear

On Twitter, a meme recently making the rounds depicts the “Islamic Terrorist Network[8]” spread across a map of the United States, while a Facebook account called “America Under Attack” shares an article with its 17,000 followers about the “Angry Young Men and Gangbangers” marching toward the border[9]. And on Gab[10], countless profiles talk of Jewish plans to sabotage American culture, sovereignty and the president.

While not overtly antagonistic, these posts play well to an audience that has found in social media a place where they can express their intolerance openly, as long as they color within the lines. They can avoid the exposure that traditional hate speech[11] attracts. Whereas the white nationalist gathering in Charlottesville[12] was high-profile and revealing, social networks can be anonymous and discreet, and therefore liberating for the undeclared racist. That presents a stark challenge to platforms like Facebook, Twitter and YouTube.

Fighting hate

Of course, this is not just a challenge for social media companies. The public at large is facing the complex question of how to respond to inflammatory and prejudiced narratives that are stoking racial fears and, in turn, hostility. However, social networks have the unique capacity to turn down the volume on intolerance if they determine that a user has in fact breached their terms of service. For instance, in April 2018, Facebook removed two pages[13] associated with white nationalist Richard Spencer. A few months later, Twitter suspended several accounts associated with the far-right group The Proud Boys for violating its policy “prohibiting violent extremist groups[14].”

Still, some critics argue that the networks are not moving fast enough. There is mounting pressure[15] for these websites to police the extremism that has flourished in their spaces, or else become policed themselves[16]. A recent HuffPost/YouGov survey revealed that two-thirds of Americans wanted social networks to prevent users from posting “hate speech or racist content[17].”

In response, Facebook has stepped up its anti-extremism efforts, reporting in May that it had removed “2.5 million pieces of hate speech[18],” over a third of which was identified using artificial intelligence; the rest was caught by human monitors or flagged by users. But even as Zuckerberg promised more action[19] in November 2018, the company acknowledged that teaching its technology to identify hate speech is extremely difficult because of all the contexts and nuances[20] that can drastically alter a message’s meaning.

Moreover, public consensus about what actually constitutes hate speech is ambiguous at best. The libertarian Cato Institute found broad disagreement among Americans[21] about the kinds of speech that should qualify as hate speech, offensive speech or fair criticism. These discrepancies raise an obvious question: How can an algorithm identify hate speech if we humans can barely define it ourselves?

Fear lights the fuse

The ambiguity of what constitutes hate speech provides ample cover for modern extremists to infuse cultural anxieties into popular networks. Therein lies perhaps the clearest danger: Priming people’s racial paranoia can be a powerful way to spur hostility.

The late communication scholar George Gerbner found that, contrary to popular belief, heavy exposure to media violence did not make people more violent. Rather, it made them more fearful of others doing violence to them[22], which often leads to corrosive distrust and cultural resentment. That’s precisely what today’s racists are tapping into, and what social networks must learn to spot.

[Video: Why do so many people watch violent TV and never commit a violent act?]

The posts that speak of Jewish plots to destroy America, or black-on-white crime, are not directly calling for violence, but they are amplifying prejudiced views[23] that can inflame followers to act[24]. That is just what happened in advance of the deadly assaults at a historic black church in Charleston in 2015 and at the Pittsburgh synagogue last month.

For social networks, the challenge is twofold. They must first decide whether to continue hosting non-violent racists like Richard Spencer, who has called for “peaceful ethnic cleansing” and remains active on Twitter[25] – or, for that matter, Nation of Islam leader Louis Farrakhan, who recently compared Jews to termites and continues to post to his Facebook page[26].

When Twitter and Facebook let these profiles remain active, the companies lend the credibility of their online communities to these provocateurs of racism or anti-Semitism. But they also signal that their definitions of hate may be too narrow.

The most dangerous hate speech is apparently no longer broadcast in ethnic slurs or delusional rhetoric about white supremacy. Rather, it’s all over social media, in plain sight, carrying hashtags like #WhiteGenocide, #BlackCrimes, #MigrantInvasion and #AmericaUnderAttack. These hashtags create an illusion of imminent threat on which radicals thrive, and to which the violence-inclined among them have responded.

This article has been updated to correct the political characterization of the Cato Institute.

References

  1. Can you define hate speech? (www.usatoday.com)
  2. Gab (www.thedailybeast.com)
  3. I can’t sit by and watch my people get slaughtered. Screw your optics, I’m going in (www.nytimes.com)
  4. unloading their intolerance (www.thedailybeast.com)
  5. recent hate crimes (www.cnn.com)
  6. extremism in social networks and political blogs (www.palgrave.com)
  7. CC BY-ND (creativecommons.org)
  8. Islamic Terrorist Network (web.archive.org)
  9. marching toward the border (theconversation.com)
  10. on Gab (www.thewrap.com)
  11. traditional hate speech (www.splcenter.org)
  12. Charlottesville (www.nytimes.com)
  13. Facebook removed two pages (news.vice.com)
  14. prohibiting violent extremist groups (fortune.com)
  15. mounting pressure (www.washingtonpost.com)
  16. become policed themselves (theconversation.com)
  17. hate speech or racist content (www.huffingtonpost.com)
  18. 2.5 million pieces of hate speech (newsroom.fb.com)
  19. Zuckerberg promised more action (www.facebook.com)
  20. contexts and nuances (newsroom.fb.com)
  21. broad disagreement among Americans (www.cato.org)
  22. more fearful of others doing violence to them (web.asc.upenn.edu)
  23. prejudiced views (www.nytimes.com)
  24. inflame followers to act (www.npr.org)
  25. remains active on Twitter (www.newsweek.com)
  26. his Facebook page (dailycaller.com)


Read more: http://theconversation.com/fear-more-than-hate-feeds-online-bigotry-and-real-world-violence-106988

Metropolitan republishes selected articles from The Conversation USA with permission
