AI can now read emotions – should it?

Written by Christoffer Heckman, Assistant Professor of Computer Science, University of Colorado Boulder

In its annual report[1], the AI Now Institute, an interdisciplinary research center studying the societal implications of artificial intelligence, called for a ban on technology designed to recognize people’s emotions in certain cases. Specifically, the researchers said affect recognition technology[2], also called emotion recognition technology, should not be used in decisions that “impact people’s lives and access to opportunities,” such as hiring decisions or pain assessments, because it is not sufficiently accurate[3] and can lead to biased decisions.

What is this technology, which is already being used and marketed, and why is it raising concerns?

Outgrowth of facial recognition

Researchers have been actively working on computer vision algorithms that can determine the emotions and intent of humans, along with making other inferences, for at least a decade. Facial expression analysis has been around since at least 2003[4], and computers have been able to understand emotion even longer[5]. The latest systems achieve even more accurate affect recognition by relying on the data-centric techniques known as “machine learning”: algorithms that process data to “learn” how to make decisions.
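To make that “learning” step concrete, here is a minimal sketch in Python, using scikit-learn, of the kind of pipeline such systems build on: each face is reduced to a vector of numeric features, and a classifier is fit to map those features to emotion labels. The features, labels and emotion categories below are invented placeholders, not the pipeline of any particular product.

    # Illustrative sketch only: a minimal emotion classifier trained on
    # pre-extracted facial features. The features, labels and emotion
    # categories below are randomly generated stand-ins for a real dataset.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]

    rng = np.random.default_rng(0)
    # Pretend each face has been reduced to 68 landmark points (136 numbers).
    X = rng.normal(size=(1000, 136))
    y = rng.integers(0, len(EMOTIONS), size=1000)  # hypothetical emotion labels

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )

    # The "learning" step: fit a classifier that maps features to emotion labels.
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    print("Accuracy on held-out faces:", accuracy_score(y_test, clf.predict(X_test)))

Real systems use far richer models and large labeled datasets, but the core idea, learning a mapping from facial data to emotion labels, is the same.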

The challenge of reading emotions

Researchers are always looking to do new things by building on what has been done before. Emotion recognition is enticing because, somehow, we as humans can accomplish it relatively well from even an early age[6], and yet capably replicating that human skill using computer vision is still challenging. While it’s possible to do some pretty remarkable things with images, such as stylizing a photo to make it look as if it were drawn by a famous artist[7] and even creating photo-realistic faces[8], not to mention so-called deepfakes[9], the ability to infer properties such as human emotions from a real image has long been of interest to researchers.

Recognizing people’s emotions with computers has the potential for a number of positive applications, explains a researcher who now works at Microsoft.

Emotions are difficult because they tend to depend on context. For instance, when someone is concentrating on something, it might appear that they’re simply thinking[10]. Facial recognition has come a long way[11] using machine learning, but identifying a person’s emotional state purely from their face misses key information. Emotions are expressed not only through a person’s facial expression but also through where they are and what they’re doing. These contextual cues are difficult to feed into even modern machine learning algorithms. To address this, there are active efforts to augment artificial intelligence techniques to consider context[12], not just for emotion recognition but for all kinds of applications.
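One simple way researchers experiment with adding context, sketched below in Python under the assumption that faces and scenes have already been reduced to numeric features, is to concatenate contextual features with facial features before training a classifier. Everything here is a hypothetical placeholder rather than a description of any specific research system.

    # Illustrative sketch: one simple way to let a model "see" context is to
    # concatenate scene/activity features with facial features before training.
    # All data here are hypothetical placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    n = 500
    face_features = rng.normal(size=(n, 136))   # e.g., facial landmarks
    scene_features = rng.normal(size=(n, 32))   # e.g., an embedding of the surroundings
    labels = rng.integers(0, 5, size=n)         # hypothetical emotion labels

    # A face-only model versus a face-plus-context model trained on the same labels.
    face_only = RandomForestClassifier(random_state=0).fit(face_features, labels)
    with_context = RandomForestClassifier(random_state=0).fit(
        np.hstack([face_features, scene_features]), labels
    )
    # On real data, the second model is the one positioned to exploit contextual
    # cues; with the random placeholders used here, neither has anything to learn.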

Reading employee emotions

The report released by AI Now[13] sheds light on some of the ways AI is being applied to the workforce to evaluate worker productivity, even as early as the interview stage. Analyzing footage from interviews, especially for remote job-seekers, is already underway[14]. If managers can get a sense of their subordinates’ emotions from interview to evaluation, decisions about other employment matters, such as raises, promotions or assignments, might end up being influenced by that information. But there are many other ways this technology could be used.

Why the worry

These types of systems almost always have fairness, accountability, transparency and ethical (“FATE”) flaws baked into their pattern-matching[15]. For example, one study found that facial recognition algorithms rated the faces of black people as angrier than the faces of white people[16], even when they were smiling.
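The kind of check that surfaces such a bias can be quite simple. The hypothetical Python sketch below compares how often an emotion model labels smiling faces from two groups as “angry”; the data and group names are made up for illustration.

    # Illustrative audit sketch: compare how often a hypothetical emotion model
    # labels smiling faces from two groups as "angry". The data are made up.
    from collections import Counter

    # (group, predicted_label) pairs for smiling faces, standing in for real audit output.
    predictions = [
        ("group_a", "happy"), ("group_a", "angry"), ("group_a", "happy"),
        ("group_b", "angry"), ("group_b", "angry"), ("group_b", "happy"),
    ]

    def angry_rate(group):
        labels = [label for g, label in predictions if g == group]
        return Counter(labels)["angry"] / len(labels)

    for group in ("group_a", "group_b"):
        print(group, "labeled angry:", f"{angry_rate(group):.0%}")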

Many research groups are tackling this problem[17], but it seems clear at this point that it can’t be solved exclusively at the technological level. Issues regarding FATE in AI will require a continued and concerted effort on the part of those using the technology to be aware of these issues and to address them. As the AI Now report highlights[18]: “Despite the increase in AI ethics content … ethical principles and statements rarely focus on how AI ethics can be implemented and whether they’re effective.” It notes that such AI ethics statements largely ignore questions of how, where, and who will put such guidelines into operation. In reality, it’s likely that everyone must be aware of the types of biases and weaknesses these systems present, similar to how we must be aware of our own biases and those of others.

The problem with blanket technology bans

Greater accuracy and ease in persistent monitoring bring along other concerns beyond ethics. There is also a host of general technology-related privacy concerns, spanning from the proliferation of cameras that serve as police feeds[19] to questions about whether sensitive data can truly be made anonymous[20].

With these ethical and privacy concerns, a natural reaction might be to call for a ban on these techniques. Certainly, applying AI to job interview results or criminal sentencing procedures[21] seems dangerous if the systems are learning biases or are otherwise unreliable. There are useful applications, however, such as helping spot warning signs to prevent youth suicide[22] and detecting drunk drivers[23]. That’s one reason why even concerned researchers, regulators and citizens have generally stopped short of calling for blanket bans on AI-related technologies.

Combining AI and human judgment

Ultimately, technology designers and society as a whole need to look carefully at how information from AI systems is injected into decision-making processes. These systems can give incorrect results just like any other form of intelligence. They are also notoriously bad[24] at rating their own confidence, not unlike humans, even in simpler tasks like recognizing objects[25]. There also remain significant technical challenges in reading emotions, notably the need to consider context in order to infer them[26].
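A rough way to see what “rating their own confidence” means is to compare a model’s reported confidence with how often it is actually right, as in the hypothetical Python sketch below; the numbers are invented for illustration.

    # Illustrative sketch: a crude calibration check. A well-calibrated model that
    # reports 90% confidence should be right about 90% of the time. Numbers are made up.
    import numpy as np

    confidences = np.array([0.95, 0.92, 0.90, 0.88, 0.85, 0.80])  # self-reported confidence
    correct = np.array([1, 0, 1, 0, 0, 1])                        # whether it was actually right

    print("Mean reported confidence:", round(confidences.mean(), 2))  # 0.88
    print("Actual accuracy:", round(correct.mean(), 2))               # 0.5
    # A large gap between these two numbers is the overconfidence described above.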

If people rely on an inaccurate system when making decisions, the users of that system are worse off. It’s also well known that humans tend to trust these systems more than other authority figures[27]. In light of this, we as a society need to carefully consider these systems’ fairness, accountability, transparency and ethics both during design and application, always keeping a human as the final decision-maker.

[ Like what you’ve read? Want more? Sign up for The Conversation’s daily newsletter[28]. ]

References

  1. ^ annual report (ainowinstitute.org)
  2. ^ affect recognition technology (www.washingtonpost.com)
  3. ^ sufficiently accurate (www.bbc.com)
  4. ^ has been around since at least 2003 (doi.org)
  5. ^ even longer (mitpress.mit.edu)
  6. ^ we as humans can accomplish this relatively well from even an early age (www.ncbi.nlm.nih.gov)
  7. ^ stylize a photo to make it look as if it were drawn by a famous artist (junyanz.github.io)
  8. ^ create photo-realistic faces (medium.com)
  9. ^ deepfakes (www.popularmechanics.com)
  10. ^ might appear that they’re simply thinking (www.cl.cam.ac.uk)
  11. ^ Facial recognition has come a long way (paperswithcode.com)
  12. ^ augment artificial intelligence techniques to consider context (www.darpa.mil)
  13. ^ report released by AI Now (ainowinstitute.org)
  14. ^ is already underway (techcrunch.com)
  15. ^ baked into their pattern-matching (phys.org)
  16. ^ black people as angrier than white faces (theconversation.com)
  17. ^ Many research groups are tackling this problem (www.microsoft.com)
  18. ^ As the AI Now report highlights (ainowinstitute.org)
  19. ^ proliferation of cameras that serve as police feeds (www.vox.com)
  20. ^ potentially making sensitive data anonymous (medium.com)
  21. ^ criminal sentencing procedures (theconversation.com)
  22. ^ helping spot warning signs to prevent youth suicide (www.theatlantic.com)
  23. ^ detecting drunk drivers (web.cse.ohio-state.edu)
  24. ^ They are also notoriously bad (www.cv-foundation.org)
  25. ^ the ability to recognize objects (www.doi.org)
  26. ^ considering context to infer emotions (www.pnas.org)
  27. ^ humans tend to trust these systems more than other authority figures (www.oracle.com)
  28. ^ Sign up for The Conversation’s daily newsletter (theconversation.com)

Read more http://theconversation.com/ai-can-now-read-emotions-should-it-128988

Metropolitan republishes selected articles from The Conversation USA with permission

Visit The Conversation to see more