
  • Written by Ana Santos Rutschman, Assistant Professor of Law, Saint Louis University

Some of the best known examples of artificial intelligence are Siri and Alexa, which listen to human speech, recognize words, perform searches and translate the text results back into speech. But these and other AI technologies raise important issues like personal privacy rights and whether machines can ever make fair decisions. As Congress considers whether to make laws governing how AI systems function in society, a congressional committee has highlighted concerns around the types of AI algorithms[1] that perform specific – if complex – tasks.

Often called “narrow AI[2],” these systems’ capabilities are distinct from those of still-hypothetical general AI machines, whose behavior would be virtually indistinguishable from human activity[3] – more like the “Star Wars” robots[4] R2-D2, BB-8 and C-3PO. Other examples of narrow AI include AlphaGo[5], a computer program that recently beat a human[6] at the game of Go, and a medical device called OsteoDetect[7], which uses AI to help doctors identify wrist fractures.

As a teacher and adviser of students researching the regulation of emerging technologies[8], I view the congressional report as a positive sign of how U.S. policymakers are approaching the unique challenges posed by AI technologies. Before attempting to craft regulations, officials and the public alike need to better understand AI’s effects on individuals and society in general.

Concerns raised by AI technology

Based on information gathered in a series of hearings[9] on AI held throughout 2018, the report highlights the fact that the U.S. is not a world leader[10] in AI development. This is part of a broader trend: funding for scientific research[11] has declined since the early 2000s, while countries like China[12] and Russia[13] have boosted their spending on developing AI technologies.

Drones can monitor activities in public and on private land. AP Photo/Keith Srakocic[14]

As illustrated by the recent concerns surrounding Russia’s interference[15] in U.S. and European[16] elections, the development of ever more complex technologies raises concerns about the security and privacy of U.S. citizens. AI systems can now be used to access personal information, make surveillance systems[17] more efficient and fly drones[18]. Overall, this gives companies and governments new and more comprehensive tools to monitor and potentially spy on users.

Even though AI development is in its early stages, algorithms can already be easily used to mislead readers, social media users or even the public in general. For instance, algorithms have been programmed to target specific messages to receptive audiences[19] or generate deepfakes[20], videos that can appear to present a person, even a politician, saying[21] or doing something they never actually did.

Of course, like many other technologies, the same AI program can be used for both beneficial and malicious purposes. For instance, LipNet[22], an AI lip-reading program created at the University of Oxford, has a 93.4 percent[23] accuracy rate. That’s far beyond the best human lip-readers, who have an accuracy rate between 20 and 60 percent[24]. This is great news for people with hearing and speech impairments. At the same time, the program could also be used for broad surveillance purposes, or even to monitor specific individuals.

AI technology can be biased, just like humans

Some uses for AI may be less obvious, even to the people using the technology. Lately, people have become aware of biases[25] in the data that powers AI programs. This clashes with the widespread perception that computers use data impartially to make objective decisions. In reality, human-built algorithms[26] use imperfect data[27] to make decisions that reflect human bias[28]. Most crucially, the computer’s decision may be presented as, or even believed to be, fairer than a decision made by a human – when in fact the opposite may be true[29].

For instance, some courts use a program called COMPAS to decide whether to release criminal defendants[30] on bail. However, there is evidence that the program is discriminating against black defendants[31], incorrectly rating them as more likely to commit future crimes than white defendants. Predictive technologies like this are becoming increasingly widespread. Banks use them to determine who gets a loan[32]. Computer analysis of police data[33] purports to predict where criminal activity will occur. In many cases, these programs only reinforce existing bias instead of eliminating it.
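How biased data propagates can be made concrete with a small sketch. Everything below is hypothetical and invented for illustration – the records, the groups and the decision rule have nothing to do with COMPAS or any real system. The point is only that a model trained to match biased historical labels reproduces the bias, even though it never sees anyone’s group membership directly.

```python
# Hypothetical illustration: biased history in, biased predictions out.
# Each record is (prior_arrests, group). In this invented history,
# group B was policed more heavily, so its members carry inflated
# arrest counts for the same underlying behavior.
history = [
    (1, "A"), (2, "A"), (1, "A"), (2, "A"),
    (3, "B"), (4, "B"), (3, "B"), (4, "B"),
]

# Biased historical labels: past decisions flagged anyone with
# more than 2 arrests as "high risk".
labels = [arrests > 2 for arrests, _ in history]

# A "fair-looking" rule learned from the data: pick the cutoff that
# reproduces the historical labels exactly. It never looks at group.
threshold = next(
    t for t in range(5)
    if all((arrests > t) == lab for (arrests, _), lab in zip(history, labels))
)

def predict(arrests):
    """Flag a person as high risk based only on arrest count."""
    return arrests > threshold

# Yet the learned rule flags every group-B member and no group-A
# member: the policing bias baked into the counts comes back out.
flagged_groups = {group for arrests, group in history if predict(arrests)}
print(threshold)       # 2
print(flagged_groups)  # {'B'}
```

The rule looks neutral because it uses only arrest counts, but since the counts themselves encode unequal policing, the output is skewed all the same – the "reinforcing existing bias" pattern the paragraph above describes.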

What’s next?

As policymakers begin to address the significant potential – for good and ill – of artificial intelligence, they’ll have to be careful to avoid stifling innovation[34]. In my view, the congressional report is taking the right steps in this regard. It calls for more investment in AI and for funding to be available to more agencies, from NASA to the National Institutes of Health. It also cautions legislators against stepping in too soon, creating too many regulatory hurdles for technologies that are still developing.

More importantly, though, I believe people should begin looking beyond the metrics suggesting that AI programs are functional, time-saving and powerful. The public should start broader conversations about how to eliminate or lessen data bias as the technology advances. If nothing else, adopters of algorithmic technology need to be made aware of the pitfalls of AI. Technologists may be unable to develop algorithms that are fair in measurable ways, but people can become savvier about how these systems work, what they’re good at – and what they’re not.

References

  1. ^ concerns around the types of AI algorithms (oversight.house.gov)
  2. ^ narrow AI (medium.com)
  3. ^ indistinguishable from human activity (www.ocf.berkeley.edu)
  4. ^ “Star Wars” robots (www.smithsonianmag.com)
  5. ^ AlphaGo (deepmind.com)
  6. ^ beat a human (www.wired.com)
  7. ^ OsteoDetect (www.fda.gov)
  8. ^ emerging technologies (scholarship.law.slu.edu)
  9. ^ hearings (oversight.house.gov)
  10. ^ the U.S. is not a world leader (moderndiplomacy.eu)
  11. ^ Funding for scientific research (www.sciencemag.org)
  12. ^ China (www.sfchronicle.com)
  13. ^ Russia (moderndiplomacy.eu)
  14. ^ AP Photo/Keith Srakocic (www.apimages.com)
  15. ^ interference (www.nytimes.com)
  16. ^ European (www.washingtonpost.com)
  17. ^ surveillance systems (www.theverge.com)
  18. ^ fly drones (dronelife.com)
  19. ^ target specific messages to receptive audiences (www.verdict.co.uk)
  20. ^ deepfakes (bdtechtalks.com)
  21. ^ a person, even a politician, saying (theconversation.com)
  22. ^ LipNet (openreview.net)
  23. ^ 93.4 percent (www.technologyreview.com)
  24. ^ between 20 and 60 percent (qz.com)
  25. ^ biases (www.chathamhouse.org)
  26. ^ human-built algorithms (www.theatlantic.com)
  27. ^ imperfect data (www.technologyreview.com)
  28. ^ reflect human bias (www.research.ibm.com)
  29. ^ the opposite may be true (medium.com)
  30. ^ whether to release criminal defendants (www.washingtonpost.com)
  31. ^ discriminating against black defendants (www.propublica.org)
  32. ^ who gets a loan (www.technologyreview.com)
  33. ^ Computer analysis of police data (theconversation.com)
  34. ^ avoid stifling innovation (hbr.org)


Read more http://theconversation.com/congress-takes-first-steps-toward-regulating-artificial-intelligence-104373

Metropolitan republishes selected articles from The Conversation USA with permission
