
  • Written by Daniel Lowd, Associate Professor of Computer and Information Science, University of Oregon
Can Facebook use AI to fight online abuse?

Facebook has released statistics on abusive behavior[1] on its social media network, revealing that it deleted more than 22 million posts for violating its rules against pornography and hate speech – and deleted or added warnings about violence to another 3.5 million posts[2]. Many of those were detected by automated systems monitoring users’ activity, in line with CEO Mark Zuckerberg’s statement to Congress that his company would use artificial intelligence to identify social media posts[3] that might violate the company’s policies. As an academic researching AI and adversarial machine learning[4], I can say he was right to acknowledge the significant challenges: “Determining if something is hate speech is very linguistically nuanced[5].”

The task of detecting abusive posts and comments on social media is not entirely technological. Even Facebook’s human moderators have trouble[6] defining hate speech, applying the company’s guidelines inconsistently[7] and sometimes reversing[8] their[9] decisions[10] (especially when they make headlines[11]). Also, abusers adapt to avoid detection – just as email spammers sought to slip past filters by replacing “Viagra” with “Vi@gra” in their messages.

Matters get even more complicated if attackers try to turn the machine learning system against itself – tainting the data the algorithm learns from[12] to influence its results. For instance, there is a phenomenon called “Google bombing[13],” in which people create websites and construct sequences of web links in an effort to affect the results of Google’s search algorithms. A similar “data poisoning[14]” attack could limit Facebook’s efforts to identify hate speech.

Tricking machine learning

Machine learning[15], a form of artificial intelligence[16], has proven very useful in detecting many kinds of fraud and abuse, including[17] email spam[18], phishing scams[19], credit card fraud[20] and fake product reviews[21]. It works best when there are large amounts of data in which to identify patterns that can reliably separate normal, benign behavior from malicious activity. For example, if people use their email systems to report as spam large numbers of messages that contain the words “urgent,” “investment” and “payment,” then a machine learning algorithm will be more likely to label as spam future messages including those words.
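
To make that concrete, here is a minimal sketch of word-pattern learning, using a toy handful of messages and a naive Bayes classifier from scikit-learn. The data, the model choice and the library are illustrative assumptions, not how any email provider or Facebook actually builds its filters.

```python
# A minimal, illustrative spam filter: the words that people report as spam
# become the evidence the model uses to score new messages. (Toy data and a
# simple model, not any provider's actual system.)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "urgent investment opportunity, send payment today",
    "payment required: urgent investment offer",
    "claim your prize now, urgent payment needed",
    "lunch tomorrow at noon?",
    "here are the meeting notes from today",
    "can you review my draft before Friday?",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = reported as spam, 0 = normal mail

# Represent each message by its word counts, then learn which words predict spam.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)
model = MultinomialNB().fit(X, labels)

# A new message containing the same suspicious words gets a high spam score.
new_message = ["urgent: your payment for this investment is overdue"]
prob_spam = model.predict_proba(vectorizer.transform(new_message))[0, 1]
print(f"estimated spam probability: {prob_spam:.2f}")
```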

Detecting abusive posts and comments on social media is a similar problem: An algorithm would look for text patterns that are correlated with abusive or nonabusive behavior. This is faster than reading every comment, more flexible than simply performing keyword searches for slurs and more proactive than waiting for complaints. In addition to the text itself, there are often clues from context[22], including the user who posted the content and their other actions. A verified Twitter account with a million followers would likely be treated differently than a newly created account with no followers.
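
A rough sketch of how text and context might be combined appears below. The comments, the account features (account age, follower count, verified status) and the model are invented for illustration; no real platform scores accounts this simply.

```python
# Illustrative only: combining text patterns with contextual account signals.
# The comments, labels and account features below are invented; real platforms
# use far richer signals and far more data.
import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

comments = [
    "you people are worthless garbage, get out",
    "go back to where you came from, animals",
    "great article, thanks for sharing",
    "I disagree, but that is a fair point",
]
labels = [1, 1, 0, 0]  # 1 = abusive, 0 = not abusive

# Hypothetical context features: [account age in days, follower count, verified?]
context = np.array([
    [2, 3, 0],
    [1, 0, 0],
    [2400, 50000, 1],
    [900, 120, 0],
], dtype=float)

# Stack the word counts and the context features side by side for each comment.
text_features = CountVectorizer().fit_transform(comments)
X = hstack([text_features, context])

model = LogisticRegression(max_iter=1000).fit(X, labels)
print(model.predict(X))  # the model now weighs both what was said and who said it
```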

Yet as those algorithms are developed, abusers adapt, changing their patterns of behavior to avoid detection. Since the dawn of letter substitution in email spam, every new medium has spawned its own version: People buy Twitter followers[23], favorable Amazon reviews[24] and Facebook likes[25], all to fool algorithms and other humans into thinking they’re more reputable.

As a result, a big piece of detecting abuse involves creating a stable definition of what is a problem, even as the actual text expressing the abuse changes. This presents an opportunity for artificial intelligence to, effectively, enter an arms race against itself. If an AI system can predict what an attacker might do, it could be adapted to simulate performing that behavior. Another AI system could analyze those actions, learning to detect abusers’ efforts to sneak hate speech past the automated filters. Once both the attacker and defender can be simulated, game theory[26] can identify their best strategies in this competition.
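
As a toy illustration of that framing, the sketch below pits a few hypothetical attacker strategies against a few hypothetical defender strategies, assigns invented detection rates, and searches for a pure-strategy equilibrium by brute force. In a real analysis the payoffs would come from simulating attacks against actual detectors.

```python
# A toy version of the attacker-vs-defender game. The strategies and detection
# rates are invented for illustration; a real analysis would estimate them by
# simulating attacks against actual detectors.
attacker_moves = ["plain slurs", "character substitution", "coded language"]
defender_moves = ["keyword filter", "normalizing keyword filter", "learned classifier"]

# detection_rate[a][d]: chance the defender's filter catches the attacker's post.
detection_rate = [
    [0.95, 0.95, 0.90],  # plain slurs are easy for every filter to catch
    [0.10, 0.85, 0.80],  # "Vi@gra"-style tricks beat a naive keyword list
    [0.05, 0.15, 0.60],  # coded language mostly evades all but a learned model
]

# The attacker wants low detection, the defender wants high detection. A pair of
# strategies is an equilibrium if neither side can do better by switching alone.
equilibria = []
for a in range(len(attacker_moves)):
    for d in range(len(defender_moves)):
        attacker_content = all(detection_rate[a][d] <= detection_rate[a2][d]
                               for a2 in range(len(attacker_moves)))
        defender_content = all(detection_rate[a][d] >= detection_rate[a][d2]
                               for d2 in range(len(defender_moves)))
        if attacker_content and defender_content:
            equilibria.append((attacker_moves[a], defender_moves[d]))

print(equilibria)  # [('coded language', 'learned classifier')]
```

In this made-up game, the stable outcome has the attacker settling on coded language and the defender on a learned classifier – a simplified picture of the arms race described above.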

Data poisoning

Abusers don’t have to limit themselves to changing their own behavior – substituting different characters for letters or using words or symbols in coded ways[27]. They can also change the machine learning system itself.

Because algorithms are trained on data generated by humans, if enough people change their behavior in particular ways, the system will learn a different lesson than its creators intended. In 2016, for instance, Microsoft unveiled “Tay,” a Twitter bot that was supposed to engage in meaningful conversations with other Twitter users. Instead, trolls flooded the bot with hateful and abusive messages[28]. As the bot analyzed that text, it began to reply in kind – and was quickly shut down.

It can be difficult to determine when human-generated data are causing an AI to perform poorly. When possible, the best defense is for humans to add constraints[29] to the system, such as removing language patterns that are considered sexist[30]. Data poisoning can also be detected by measuring accuracy on a separate, curated data set[31]: If a new model performs poorly on trusted data, then that could mean the new training data are bad. Finally, poisoning can be made less effective by removing outliers[32], data points that are very different from the rest of the training data.
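
The sketch below illustrates that trusted-data check under simple assumptions: synthetic two-dimensional data, a batch of deliberately flipped labels, a basic classifier and an arbitrary alert threshold standing in for a real moderation system.

```python
# A rough sketch of the trusted-data check: synthetic data, a simple classifier
# and an arbitrary threshold stand in for a real moderation system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean two-class data: class 0 clusters near (-2, -2), class 1 near (+2, +2).
X_clean = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y_clean = np.array([0] * 200 + [1] * 200)

# A new training batch arrives with many flipped labels ("poisoned" data).
X_new, y_new = X_clean.copy(), y_clean.copy()
y_new[:120] = 1  # an attacker mislabels a chunk of class-0 examples as class 1

# A small, hand-verified trusted set is kept aside purely for validation.
X_trusted = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y_trusted = np.array([0] * 50 + [1] * 50)

old_model = LogisticRegression().fit(X_clean, y_clean)
new_model = LogisticRegression().fit(X_new, y_new)

old_acc = old_model.score(X_trusted, y_trusted)
new_acc = new_model.score(X_trusted, y_trusted)
print(f"old model: {old_acc:.2f}, retrained model: {new_acc:.2f}")

# A sharp drop on trusted data is a warning sign that the new batch is tainted.
if old_acc - new_acc > 0.05:
    print("Accuracy fell on trusted data -- inspect the new training batch.")
```

The same pipeline could also drop training points that look very unlike the rest of the data before retraining – the outlier-removal defense mentioned above.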

Of course, no machine learning system will ever be perfect. Like human moderators, computers should be just one part of a larger effort to fight abuse. Even email spam, a major success for machine learning, relies on more than just good algorithms: New internet communications standards[33] make it harder for spammers to hide their identities when sending messages. In addition, federal law, such as the 2003 CAN-SPAM Act[34], sets standards for commercial email, including penalties for violations. Similarly, addressing online abuse may require new standards and policies, not just smarter artificial intelligence.

References

  1. ^ released statistics on abusive behavior (newsroom.fb.com)
  2. ^ another 3.5 million posts (www.bbc.com)
  3. ^ use artificial intelligence to identify social media posts (www.washingtonpost.com)
  4. ^ academic researching AI and adversarial machine learning (scholar.google.com)
  5. ^ Determining if something is hate speech is very linguistically nuanced (www.washingtonpost.com)
  6. ^ Facebook’s human moderators have trouble (www.propublica.org)
  7. ^ inconsistently applying the company’s guidelines (www.theguardian.com)
  8. ^ reversing (www.usatoday.com)
  9. ^ their (motherboard.vice.com)
  10. ^ decisions (www.bbc.com)
  11. ^ make headlines (motherboard.vice.com)
  12. ^ tainting the data the algorithm learns from (www.theverge.com)
  13. ^ Google bombing (www.wired.com)
  14. ^ data poisoning (pralab.diee.unica.it)
  15. ^ Machine learning (news.codecademy.com)
  16. ^ a form of artificial intelligence (medium.com)
  17. ^ including (www.ijcai.org)
  18. ^ email spam (www.wired.com)
  19. ^ phishing scams (www.microsoft.com)
  20. ^ credit card fraud (www.fico.com)
  21. ^ fake product reviews (www.inc.com)
  22. ^ clues from context (doi.org)
  23. ^ Twitter followers (www.cjr.org)
  24. ^ Amazon reviews (techcrunch.com)
  25. ^ Facebook likes (www.facebook.com)
  26. ^ game theory (www.jmlr.org)
  27. ^ symbols in coded ways (www.adl.org)
  28. ^ flooded the bot with hateful and abusive messages (www.theverge.com)
  29. ^ humans to add constraints (towardsdatascience.com)
  30. ^ removing language patterns that are considered sexist (www.technologyreview.com)
  31. ^ measuring accuracy on a separate, curated data set (elie.net)
  32. ^ removing outliers (papers.nips.cc)
  33. ^ New internet communications standards (securityintelligence.com)
  34. ^ 2003 CAN-SPAM Act (www.ftc.gov)

Authors: Daniel Lowd, Associate Professor of Computer and Information Science, University of Oregon

Read more http://theconversation.com/can-facebook-use-ai-to-fight-online-abuse-95203

Metropolitan republishes selected articles from The Conversation USA with permission

Visit The Conversation to see more