What alchemy and astrology can teach artificial intelligence researchers
- Written by Ben Shneiderman, Professor of Computer Science, University of Maryland
Artificial intelligence researchers and engineers have spent a lot of effort trying to build machines that look like humans and operate largely independently. Those tempting dreams have distracted many of them from where the real progress is already happening: in systems that enhance – rather than replace – human capabilities[1]. To accelerate the shift to new ways of thinking, AI designers and developers could take some lessons from the missteps of past researchers.
For example, alchemists, including Isaac Newton[2], pursued ambitious goals[3] such as converting lead to gold, creating a panacea to cure all diseases, and finding potions for immortality[4]. These goals were alluring, and the charlatans who pursued them often secured princely financial backing[5] that would have been better spent on developing modern chemistry.
Equally optimistically, astrologers believed they could understand human personality from birthdates and predict future events by studying the positions of the stars and planets. Over the past thousand years, these promises often received kingly endorsement[6], possibly slowing the work of those adopting the scientific methods that eventually led to modern astronomy.
Image credit: Giovanni Francesco Barbieri via Daderot/Wikimedia Commons[7]
As alchemy and astrology evolved, the participants became more deliberate and organized[8] – what might now be called more scientific – about their studies. That shift eventually led to important findings in chemistry, such as those by Lavoisier[9] and Priestley[10] in the 18th century. In astronomy, Kepler[11] and Newton[12] himself made significant findings in the 17th and 18th centuries. A similar turning point is coming for artificial intelligence. Bold innovators are putting aside tempting but impractical dreams of anthropomorphic designs and excessive autonomy. They focus on systems that restore, rely on, and expand human control and responsibility.
Updating early AI dreams
Back in the 1950s, artificial intelligence researchers pursued big goals, such as human-level computational intelligence and machine consciousness. Even during the past 20 years some researchers worked toward the “singularity[13]” fantasy of machines that are superior to humans in every way. These dreams succeeded in attracting attention from sympathetic journalists and financial backing from government and industry[14]. But to me, those aspirations still seem like counterproductive wishful thinking and B-level science fiction.
Even the dream of creating a human-shaped robot[15] that acts like a person has persisted for more than 50 years. Honda’s near-life-size Asimo[16] and the web-based news reader Ananova[17] got a lot of media attention[18]. Hanson Robotics’ Sophia[19] even received Saudi Arabian citizenship[20]. But these projects have little commercial future.
By contrast, down-to-earth user-centered designs for information search, e-commerce sites, social media and smartphone apps have been wild successes[21]. There is good reason that Amazon, Apple, Facebook, Google and Microsoft are some of the world’s biggest companies – they all use more functional, if less glamorous, types of AI.
Today’s cellphones feature speech recognition, face recognition and automated translation, which all use artificial intelligence technologies[22]. These functions increase human control and give users more options, without the deception and theatrics of a humanoid robot.
Yielding control
Efforts that pursue advanced forms of computer autonomy are also dangerous. When developers assume their machines will function correctly, they often shortchange interfaces that would allow human users to quickly take control when something goes wrong.
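To make the design principle concrete, here is a minimal sketch in Python of an automation loop in which the human operator’s input always takes priority over the automated controller. The controller, sensor values and function names are hypothetical illustrations, not any real autopilot or vendor interface.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Command:
    source: str        # "automation" or "human"
    adjustment: float  # e.g., a pitch-trim correction, in arbitrary units

def automation_step(sensor_reading: float) -> Command:
    # Hypothetical automated correction driven by a single sensor value.
    return Command(source="automation", adjustment=-0.1 * sensor_reading)

def next_command(sensor_reading: float, human_input: Optional[float]) -> Command:
    # The override check comes first, so a faulty sensor can never lock
    # the operator out of control.
    if human_input is not None:
        return Command(source="human", adjustment=human_input)
    return automation_step(sensor_reading)

# A bad sensor value drives the automation, but explicit human input wins.
print(next_command(sensor_reading=50.0, human_input=None))  # automation acts
print(next_command(sensor_reading=50.0, human_input=2.5))   # human overrides
```

The point of the sketch is the ordering: the operator’s input is checked before the automation acts, rather than being buried behind it.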
Image credit: AP Photo/Tatan Syuflana[23]
These problems can be deadly. In the October 2018 crash of Lion Air’s Boeing 737 Max[24], a sensor failure caused the plane’s newly designed automated flight-control system to repeatedly push the nose down. The pilots couldn’t figure out how to override those automatic controls[25] to keep the plane in the air. Similar problems have been factors in stock market “flash crashes,” like the 2010 event in which roughly US$1 trillion in market value temporarily disappeared in 36 minutes[26]. And poorly designed medical devices have delivered deadly doses of medications[27].
The National Transportation Safety Board report on the deadly May 2016 Tesla crash[28] called for automated systems to keep detailed records that would allow investigators to analyze failures. Those insights would lead to safer and more effective designs.
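A minimal sketch of the kind of record-keeping the report calls for might look like the following. The field names and JSON-lines format are illustrative assumptions, not the NTSB’s requirements or any manufacturer’s actual logging scheme.

```python
import json
import time

def log_automation_event(log_file, sensor_values, automation_action, human_action):
    """Append one timestamped record of what the automation observed and did,
    alongside what the human operator did, so investigators can reconstruct
    the sequence of events after a failure."""
    record = {
        "timestamp": time.time(),
        "sensor_values": sensor_values,
        "automation_action": automation_action,
        "human_action": human_action,
    }
    log_file.write(json.dumps(record) + "\n")

# Hypothetical usage: one record per control cycle, appended to a durable log.
with open("automation_log.jsonl", "a") as f:
    log_automation_event(
        f,
        sensor_values={"angle_of_attack_deg": 21.3},
        automation_action="nose_down_trim",
        human_action="manual_trim_up",
    )
```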
Getting to human-centered solutions
Successful automation is all around: Navigation applications give drivers control by showing times for alternative routes. E-commerce websites show shoppers options, customer reviews and clear pricing so they can find and order the goods they need. Elevators, clothes-washing machines and airline check-in kiosks, too, have meaningful controls that enable users to get what they need done quickly and reliably. When modern cameras assist photographers in taking properly focused and exposed photos, users have a sense of mastery and accomplishment for composing the image, even as they get assistance with optimizing technical details.
Without being human-like or fully independent, these and thousands of other applications enable users to accomplish their tasks with self-confidence and sometimes even pride.
A new report from the IEEE, a leading engineering professional organization, urges technologists to ignore tempting fantasies[29]. Rather, the report suggests, developers should focus on technologies that support human performance and are more immediately useful.
Image credit: Reuters/George Frey[30]
In a flourishing automation-enhanced world, clear, convenient interfaces could let humans control automation to make the most of people’s initiative, creativity and responsibility. The most successful machines could be powerful tools that let users carry out ever-richer tasks with confidence, such as helping architects find innovative ways to design energy-efficient buildings, and giving journalists tools to dig deeper into data to detect fraud and corruption. Other machines could detect – not contribute to – problems like unsafe medical conditions and bias in mortgage loan approvals. Perhaps they could even advise the people responsible on ways to fix things.
Humans are accomplished at building tools that expand their creativity – and then at using those tools in even more innovative ways than their designers intended. In my view, it’s time to let more people be more creative more of the time, by shifting away from the alchemy and astrology phase of AI research.
Technology designers who appreciate and amplify the key aspects of humanity are most likely to invent the next generation of powerful tools. These designers will shift from trying to replace or simulate human behavior in machines to building wildly successful applications that people love to use.
References
- ^ enhance – rather than replace – human capabilities (standards.ieee.org)
- ^ Isaac Newton (www.biography.com)
- ^ pursued ambitious goals (www.britannica.com)
- ^ finding potions for immortality (press.uchicago.edu)
- ^ secured princely financial backing (press.uchicago.edu)
- ^ often received kingly endorsement (www.upress.pitt.edu)
- ^ Giovanni Francesco Barbieri via Daderot/Wikimedia Commons (commons.wikimedia.org)
- ^ more deliberate and organized (www.smithsonianmag.com)
- ^ Lavoisier (www.sciencehistory.org)
- ^ Priestley (www.sciencehistory.org)
- ^ Kepler (www.britannica.com)
- ^ Newton (www.newton.ac.uk)
- ^ singularity (singularity.com)
- ^ financial backing from government and industry (www.bloomberg.com)
- ^ human-shaped robot (www.therobotreport.com)
- ^ Honda’s near-life-size Asimo (www.theverge.com)
- ^ web-based news reader Ananova (en.wikipedia.org)
- ^ media attention (nypost.com)
- ^ Hanson Robotics’ Sophia (www.wired.co.uk)
- ^ Saudi Arabian citizenship (qz.com)
- ^ have been wild successes (www.pearson.com)
- ^ use artificial intelligence technologies (www.nytimes.com)
- ^ AP Photo/Tatan Syuflana (www.apimages.com)
- ^ October 2018 crash of Lion Air’s Boeing 737 Max (www.nytimes.com)
- ^ override those automatic controls (www.nytimes.com)
- ^ US$1 trillion disappeared in 36 minutes (en.wikipedia.org)
- ^ deadly doses of medications (psnet.ahrq.gov)
- ^ National Transportation Safety Board report on the deadly May 2016 Tesla crash (www.ntsb.gov)
- ^ ignore tempting fantasies (standards.ieee.org)
- ^ Reuters/George Frey (pictures.reuters.com)