Artificial intelligence can now emulate human behaviors – soon it will be dangerously good
- Written by Ana Santos Rutschman, Assistant Professor of Law, Saint Louis University
When artificial intelligence systems start getting creative, they can create great things – and scary ones. Take, for instance, an AI program that let web users compose music[1] along with a virtual Johann Sebastian Bach[2] by entering notes into a program that generates Bach-like harmonies to match them.
Run by Google[3], the app drew[4] great[5] praise[6] for being groundbreaking and fun to play with. It also attracted criticism[7] and raised concerns about AI’s dangers.
My study of how emerging technologies affect people’s lives[8] has taught me that the problems go beyond the admittedly large concern about whether algorithms[9] can really create music[10] or art in general. Some complaints seemed small, but really weren’t, like observations that Google’s AI was breaking basic rules[11] of music composition.
In fact, efforts to have computers mimic the behavior of actual people can be confusing and potentially harmful.
Impersonation technologies
Google’s program analyzed the notes in 306 of Bach’s musical works, finding relationships between the melody and the notes that provided the harmony. Because Bach followed strict rules of composition, the program was effectively learning those rules, so it could apply them when users provided their own notes.
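The core idea can be sketched in a few lines of Python. This is a deliberately simplified frequency model, not Google’s actual system (which is far more sophisticated), and the note pairs below are invented for illustration: the program tallies which harmony note most often accompanies each melody note in the training data, then reuses those learned pairings for new input.

```python
from collections import Counter, defaultdict

# Toy training data: (melody_note, harmony_note) pairs, as if extracted
# from chorale scores. The real system trained on hundreds of pieces.
training_pairs = [
    ("C4", "E3"), ("C4", "G3"), ("C4", "E3"),
    ("D4", "F3"),
    ("E4", "G3"), ("E4", "C3"), ("E4", "G3"),
]

# Count how often each harmony note accompanies each melody note.
counts = defaultdict(Counter)
for melody, harmony in training_pairs:
    counts[melody][harmony] += 1

def harmonize(melody_note):
    """Return the harmony note most frequently paired with this melody note."""
    return counts[melody_note].most_common(1)[0][0]

print(harmonize("C4"))  # E3 (paired twice, vs. G3 once)
```

Because the learned pairings only reflect what appears in the training data, a model like this "follows Bach's rules" exactly to the extent the training pieces did.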
The Bach app itself is new, but the underlying technology is not. Algorithms trained to recognize patterns[12] and make probabilistic decisions[13] have existed for a long time. Some of these algorithms are so complex that people don’t always understand[14] how they make decisions or produce a particular outcome.
AI systems are not perfect – many of them rely on data that aren’t representative[15] of the whole population, or that are influenced by human biases[16]. It’s not entirely clear who might be legally responsible[17] when an AI system makes an error or causes a problem.
Now, though, artificial intelligence technologies are getting advanced enough to be able to approximate individuals’ writing or speaking style, and even facial expressions. This isn’t always bad: A fairly simple AI gave Stephen Hawking the ability to communicate[18] more efficiently with others by predicting the words he would use the most.
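Word prediction of this kind can be illustrated with a toy bigram model: count which word most often follows each word in a person’s past writing, then suggest the most frequent follower. The corpus and function below are invented for illustration and are not the actual system Hawking used.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for a user's past writing.
corpus = "the black hole the black hole the event horizon".split()

# Count which word most often follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Suggest the word most often typed after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "black" (follows "the" twice, vs. "event" once)
```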
More complex programs that mimic human voices assist people with disabilities[19] – but can also be used to deceive listeners. For example, the makers of Lyrebird[20], a voice-mimicking program, have released a simulated conversation[21] between Barack Obama, Donald Trump and Hillary Clinton. It may sound real, but that exchange never happened.
From good to bad
In February 2019, nonprofit company OpenAI created a program that generates text that is virtually indistinguishable from text[22] written by people. It can “write” a speech in the style of John F. Kennedy[23], J.R.R. Tolkien in “The Lord of the Rings[24]” or a student writing a school assignment about the U.S. Civil War[25].
The text generated by OpenAI’s software is so believable that the company has chosen not to release[26] the program itself.
Similar technologies can simulate photos and videos. In early 2018, for instance, actor and filmmaker Jordan Peele created a video that appeared to show former U.S. President Barack Obama saying things Obama never actually said[27] to warn the public about the dangers posed by these technologies.
In early 2019, a fake nude photo[28] of U.S. Rep. Alexandria Ocasio-Cortez circulated online. Fabricated videos[29], often called “deepfakes[30],” are expected to be increasingly[31] used[32] in election campaigns.
Members of Congress[33] have started to look into this issue ahead of the 2020 election[34]. The U.S. Defense Department is teaching the public how to spot doctored videos[35] and audio. News organizations like Reuters[36] are beginning to train journalists to spot deepfakes.
But, in my view, an even bigger concern remains: Users might not be able to learn fast enough to distinguish fake content as AI technology becomes more sophisticated. For instance, as the public is beginning to become aware of deepfakes, AI is already being used for even more advanced deceptions. There are now programs that can generate fake faces[37] and fake digital fingerprints[38], effectively creating the information needed to fabricate an entire person – at least in corporate or government records.
Machines keep learning
At the moment, there are enough potential errors in these technologies to give people a chance of detecting digital fabrications. Google’s Bach composer made some mistakes[39] an expert could detect. For example, when I tried it, the program allowed me to enter parallel fifths[40], a voice-leading pattern that Bach studiously avoided[41]. The app also broke musical rules[42] of counterpoint by harmonizing melodies in the wrong key. Similarly, OpenAI’s text-generating program occasionally wrote phrases like “fires happening under water[43]” that made no sense in their contexts.
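A rule checker for that kind of mistake is straightforward to sketch. The function below flags parallel perfect fifths between two voices; the pitch numbers and the simplified definition (which ignores some voice-leading edge cases) are assumptions for illustration, not the app’s actual logic.

```python
def has_parallel_fifths(upper, lower):
    """Flag parallel perfect fifths between two voices.

    `upper` and `lower` are lists of MIDI pitch numbers, one per beat.
    Simplified rule: a parallel fifth occurs when the interval between
    the voices is a perfect fifth (7 semitones, modulo octaves) on two
    consecutive beats and the upper voice moves.
    """
    for i in range(len(upper) - 1):
        fifth_now = (upper[i] - lower[i]) % 12 == 7
        fifth_next = (upper[i + 1] - lower[i + 1]) % 12 == 7
        moved = upper[i] != upper[i + 1]
        if fifth_now and fifth_next and moved:
            return True
    return False

# C5 over F4 (a fifth) moving to D5 over G4 (another fifth): parallel fifths.
print(has_parallel_fifths([72, 74], [65, 67]))  # True
# C5 over F4 moving to C5 over E4 (a sixth): no violation.
print(has_parallel_fifths([72, 72], [65, 64]))  # False
```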
As developers work on their creations, these mistakes will become rarer. Effectively, AI technologies will evolve and learn. The improved performance has the potential to bring many social benefits – including better health care, as AI programs help democratize the practice of medicine[44].
Giving researchers and companies the freedom to explore in pursuit of these positive achievements also opens up the risk of developing more advanced ways to create deception and other social problems. Severely limiting AI research could curb that progress[45]. But giving beneficial technologies room to grow[46] comes at no small cost: the potential for misuse, whether to make inaccurate “Bach-like” music or to deceive millions, is likely to grow in ways people can’t yet anticipate.
References
- ^ web users compose music (www.androidauthority.com)
- ^ virtual Johann Sebastian Bach (www.google.com)
- ^ Run by Google (www.apnews.com)
- ^ drew (www.vox.com)
- ^ great (www.dailymail.co.uk)
- ^ praise (thenextweb.com)
- ^ criticism (lodewijkmuns.nl)
- ^ emerging technologies affect people’s lives (www.slu.edu)
- ^ whether algorithms (theconversation.com)
- ^ create music (www.forbes.com)
- ^ breaking basic rules (slate.com)
- ^ recognize patterns (www.elsevier.com)
- ^ probabilistic decisions (mitpress.mit.edu)
- ^ don’t always understand (www.technologyreview.com)
- ^ data that aren’t representative (www.hup.harvard.edu)
- ^ influenced by human biases (www.forbes.com)
- ^ who might be legally responsible (www.nber.org)
- ^ ability to communicate (theconversation.com)
- ^ assist people with disabilities (www.scientificamerican.com)
- ^ Lyrebird (lyrebird.ai)
- ^ simulated conversation (www.npr.org)
- ^ virtually indistinguishable from text (techcrunch.com)
- ^ John F. Kennedy (openai.com)
- ^ The Lord of the Rings (openai.com)
- ^ a school assignment about the U.S. Civil War (openai.com)
- ^ not to release (www.cnn.com)
- ^ things Obama never actually said (www.buzzfeednews.com)
- ^ fake nude photo (abcnews.go.com)
- ^ Fabricated videos (www.brookings.edu)
- ^ deepfakes (theconversation.com)
- ^ increasingly (www.techrepublic.com)
- ^ used (qz.com)
- ^ Members of Congress (schiff.house.gov)
- ^ ahead of the 2020 election (www.cnn.com)
- ^ how to spot doctored videos (www.cnn.com)
- ^ Reuters (digiday.com)
- ^ fake faces (www.inverse.com)
- ^ fake digital fingerprints (fortune.com)
- ^ made some mistakes (slate.com)
- ^ parallel fifths (en.wikipedia.org)
- ^ studiously avoided (citeseerx.ist.psu.edu)
- ^ broke musical rules (www.imanimosley.com)
- ^ fires happening under water (openai.com)
- ^ democratize the practice of medicine (papers.ssrn.com)
- ^ curb that progress (theconversation.com)
- ^ room to grow (theconversation.com)