
Artificial intelligence can now emulate human behaviors – soon it will be dangerously good

When artificial intelligence systems start getting creative, they can create great things – and scary ones. Take, for instance, an AI program that let web users compose music[1] along with a virtual Johann Sebastian Bach[2]: users entered notes, and the program generated Bach-like harmonies to match them.

Run by Google[3], the app drew[4] great[5] praise[6] for being groundbreaking and fun to play with. It also attracted criticism[7] and raised concerns about AI’s dangers.

My study of how emerging technologies affect people’s lives[8] has taught me that the problems go beyond the admittedly large concern about whether algorithms[9] can really create music[10] or art in general. Some complaints seemed small, but really weren’t, like observations that Google’s AI was breaking basic rules[11] of music composition.

In fact, efforts to have computers mimic the behavior of actual people can be confusing and potentially harmful.

Impersonation technologies

Google’s program analyzed the notes in 306 of Bach’s musical works, finding relationships between the melody and the notes that provided the harmony. Because Bach followed strict rules of composition, the program was effectively learning those rules, so it could apply them when users provided their own notes.

The Google Doodle team explains the Bach program.
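To make that pattern-learning idea concrete, here is a minimal sketch in Python: tally which harmony notes co-occur with each melody note in a training set, then harmonize new input with the most frequent choice. The training pairs below are invented, and the real Doodle used a far more sophisticated machine-learning model; this only illustrates the principle.

```python
from collections import Counter, defaultdict

# Toy sketch: count which harmony notes co-occur with each melody note
# in a training set, then harmonize new melody notes with the most
# frequently co-occurring choice. The training pairs are invented; a
# real system would parse hundreds of Bach chorales.
training_pairs = [
    ("C", "E"), ("C", "G"), ("D", "F"), ("E", "G"),
    ("E", "C"), ("F", "A"), ("G", "B"), ("G", "D"),
]

counts = defaultdict(Counter)
for melody_note, harmony_note in training_pairs:
    counts[melody_note][harmony_note] += 1

def harmonize(melody):
    """Pick the harmony note most often seen with each melody note."""
    return [counts[n].most_common(1)[0][0] if counts[n] else None
            for n in melody]

print(harmonize(["C", "E", "G"]))  # ['E', 'G', 'B'] with this toy data
```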

The Bach app itself is new, but the underlying technology is not. Algorithms trained to recognize patterns[12] and make probabilistic decisions[13] have existed for a long time. Some of these algorithms are so complex that people don’t always understand[14] how they make decisions or produce a particular outcome.

AI systems are not perfect – many of them rely on data that aren’t representative[15] of the whole population, or that are influenced by human biases[16]. It’s not entirely clear who might be legally responsible[17] when an AI system makes an error or causes a problem.

Now, though, artificial intelligence technologies are getting advanced enough to be able to approximate individuals’ writing or speaking style, and even facial expressions. This isn’t always bad: A fairly simple AI gave Stephen Hawking the ability to communicate[18] more efficiently with others by predicting the words he would use the most.
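As a rough illustration of that kind of word prediction – not the actual software Hawking used – a few lines of Python can tally which word most often follows each word in a sample text and suggest it as the next word. The sample sentence here is made up.

```python
from collections import Counter, defaultdict

# Toy sketch of frequency-based next-word prediction; the sample text
# is invented, and this is not the actual software Hawking used.
text = "the cat sat on the mat and the cat ran"
words = text.split()

next_word = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word[current][following] += 1

def predict(word):
    """Suggest the word that most often followed `word` in the sample."""
    seen = next_word[word]
    return seen.most_common(1)[0][0] if seen else None

print(predict("the"))  # 'cat', since it followed 'the' twice, 'mat' once
```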

More complex programs that mimic human voices assist people with disabilities[19] – but can also be used to deceive listeners. For example, the makers of Lyrebird[20], a voice-mimicking program, have released a simulated conversation[21] between Barack Obama, Donald Trump and Hillary Clinton. It may sound real, but that exchange never happened.

From good to bad

In February 2019, nonprofit company OpenAI created a program that generates text that is virtually indistinguishable from text[22] written by people. It can “write” a speech in the style of John F. Kennedy[23], J.R.R. Tolkien in “The Lord of the Rings[24]” or a student writing a school assignment about the U.S. Civil War[25].

The text generated by OpenAI’s software is so believable that the company has chosen not to release[26] the program itself.

Similar technologies can simulate photos and videos. In early 2018, for instance, actor and filmmaker Jordan Peele created a video that appeared to show former U.S. President Barack Obama saying things Obama never actually said[27]. Peele made the video to warn the public about the dangers posed by these technologies.

Be careful what videos you believe.

In early 2019, a fake nude photo[28] of U.S. Rep. Alexandria Ocasio-Cortez circulated online. Fabricated videos[29], often called “deepfakes[30],” are expected to be increasingly[31] used[32] in election campaigns.

Members of Congress[33] have started to look into this issue ahead of the 2020 election[34]. The U.S. Defense Department is teaching the public how to spot doctored videos[35] and audio. News organizations like Reuters[36] are beginning to train journalists to spot deepfakes.

But, in my view, an even bigger concern remains: Users might not be able to learn fast enough to distinguish fake content as AI technology becomes more sophisticated. For instance, as the public is beginning to become aware of deepfakes, AI is already being used for even more advanced deceptions. There are now programs that can generate fake faces[37] and fake digital fingerprints[38], effectively creating the information needed to fabricate an entire person – at least in corporate or government records.

Machines keep learning

At the moment, there are enough potential errors in these technologies to give people a chance of detecting digital fabrications. Google’s Bach composer made some mistakes[39] an expert could detect. For example, when I tried it, the program allowed me to enter parallel fifths[40], a progression between voices that Bach studiously avoided[41]. The app also broke musical rules[42] of counterpoint by harmonizing melodies in the wrong key. Similarly, OpenAI’s text-generating program occasionally wrote phrases like “fires happening under water[43]” that made no sense in their contexts.
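As an illustration of how mechanical this particular rule is, here is a minimal Python sketch that flags parallel perfect fifths between two voices. The pitch data are invented, and this is not code from the app itself.

```python
# Minimal sketch: flag parallel perfect fifths between two voices.
# Pitches are MIDI note numbers; a perfect fifth spans 7 semitones.
# The example data are invented; this is not the app's own code.
def parallel_fifths(upper, lower):
    """Indices where both voices move while staying a fifth apart."""
    intervals = [(u - l) % 12 for u, l in zip(upper, lower)]
    return [i for i in range(len(intervals) - 1)
            if intervals[i] == 7 and intervals[i + 1] == 7
            and upper[i] != upper[i + 1]]

# C5 over F4 moving to D5 over G4: two fifths in a row, which strict
# Bach-style voice leading forbids.
print(parallel_fifths([72, 74, 76], [65, 67, 64]))  # [0]
```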

As developers work on their creations, these mistakes will become rarer. Effectively, AI technologies will evolve and learn. The improved performance has the potential to bring many social benefits – including better health care, as AI programs help democratize the practice of medicine[44].

Giving researchers and companies the freedom to explore and pursue these positive achievements from AI systems also opens up the risk of developing more advanced ways to create deception and other social problems. Severely limiting AI research could curb that progress[45]. But giving beneficial technologies room to grow[46] comes at no small cost – and the potential for misuse, whether to make inaccurate “Bach-like” music or to deceive millions, is likely to grow in ways people can’t yet anticipate.

References

  1. ^ web users compose music (www.androidauthority.com)
  2. ^ virtual Johann Sebastian Bach (www.google.com)
  3. ^ Run by Google (www.apnews.com)
  4. ^ drew (www.vox.com)
  5. ^ great (www.dailymail.co.uk)
  6. ^ praise (thenextweb.com)
  7. ^ criticism (lodewijkmuns.nl)
  8. ^ emerging technologies affect people’s lives (www.slu.edu)
  9. ^ whether algorithms (theconversation.com)
  10. ^ create music (www.forbes.com)
  11. ^ breaking basic rules (slate.com)
  12. ^ recognize patterns (www.elsevier.com)
  13. ^ probabilistic decisions (mitpress.mit.edu)
  14. ^ don’t always understand (www.technologyreview.com)
  15. ^ data that aren’t representative (www.hup.harvard.edu)
  16. ^ influenced by human biases (www.forbes.com)
  17. ^ who might be legally responsible (www.nber.org)
  18. ^ ability to communicate (theconversation.com)
  19. ^ assist people with disabilities (www.scientificamerican.com)
  20. ^ Lyrebird (lyrebird.ai)
  21. ^ simulated conversation (www.npr.org)
  22. ^ virtually indistinguishable from text (techcrunch.com)
  23. ^ John F. Kennedy (openai.com)
  24. ^ The Lord of the Rings (openai.com)
  25. ^ a school assignment about the U.S. Civil War (openai.com)
  26. ^ not to release (www.cnn.com)
  27. ^ things Obama never actually said (www.buzzfeednews.com)
  28. ^ fake nude photo (abcnews.go.com)
  29. ^ Fabricated videos (www.brookings.edu)
  30. ^ deepfakes (theconversation.com)
  31. ^ increasingly (www.techrepublic.com)
  32. ^ used (qz.com)
  33. ^ Members of Congress (schiff.house.gov)
  34. ^ ahead of the 2020 election (www.cnn.com)
  35. ^ how to spot doctored videos (www.cnn.com)
  36. ^ Reuters (digiday.com)
  37. ^ fake faces (www.inverse.com)
  38. ^ fake digital fingerprints (fortune.com)
  39. ^ made some mistakes (slate.com)
  40. ^ parallel fifths (en.wikipedia.org)
  41. ^ studiously avoided (citeseerx.ist.psu.edu)
  42. ^ broke musical rules (www.imanimosley.com)
  43. ^ fires happening under water (openai.com)
  44. ^ democratize the practice of medicine (papers.ssrn.com)
  45. ^ curb that progress (theconversation.com)
  46. ^ room to grow (theconversation.com)

Authors: Ana Santos Rutschman, Assistant Professor of Law, Saint Louis University

Read more http://theconversation.com/artificial-intelligence-can-now-emulate-human-behaviors-soon-it-will-be-dangerously-good-114136

Metropolitan republishes selected articles from The Conversation USA with permission

Visit The Conversation to see more