Experts alone can't handle AI – social scientists explain why the public needs a seat at the table

Written by Dietram A. Scheufele, Professor of Life Sciences Communication, University of Wisconsin-Madison

Are democratic societies ready for a future in which AI algorithmically assigns limited supplies[1] of respirators or hospital beds during pandemics? Or one in which AI fuels an arms race[2] between disinformation creation and detection? Or one in which AI sways court decisions with amicus briefs written to mimic the rhetorical and argumentative styles of Supreme Court justices?

Decades of research show that most democratic societies struggle to hold nuanced debates[3] about new technologies. These discussions need to be informed not only by the best available science but also by the numerous ethical, regulatory and social considerations of their use. Difficult dilemmas posed by artificial intelligence are already emerging at a rate that overwhelms modern democracies’ ability to collectively work through those problems.

Broad public engagement, or the lack of it, has long been a challenge in integrating emerging technologies into society, and it is key to tackling the problems they bring.

Ready or not, unintended consequences

Striking a balance between the awe-inspiring possibilities of emerging technologies like AI and the need for societies to think through both intended and unintended outcomes is not a new challenge. Almost 50 years ago, scientists and policymakers met in Pacific Grove, California, for what is often referred to as the Asilomar Conference[4] to decide the future of recombinant DNA research, or transplanting genes from one organism into another. Public participation and input into their deliberations was minimal.

Societies are severely limited in their ability to anticipate and mitigate unintended consequences of rapidly emerging technologies like AI without good-faith engagement from broad cross-sections of public and expert stakeholders. And there are real downsides to limited participation. If Asilomar had sought such wide-ranging input 50 years ago, it is likely that the issues of cost and access would have shared the agenda with the science and the ethics of deploying the technology. If that had happened, the lack of affordability[5] of recent CRISPR-based sickle cell[6] treatments, for example, might have been avoided.

AI runs a very real risk of creating similar blind spots when it comes to intended and unintended consequences that will often not be obvious to elites like tech leaders and policymakers. If societies fail to ask “the right questions, the ones people care about,” science and technology studies scholar Sheila Jasanoff[7] said in a 2021 interview[8], “then no matter what the science says, you wouldn’t be producing the right answers or options for society.”

Ethical debates should be central to efforts to regulate AI.

Even AI experts are uneasy about how unprepared societies are for moving forward with the technology in a responsible fashion. We study the public[9] and political aspects[10] of emerging science[11]. In 2022, our research group at the University of Wisconsin-Madison interviewed almost 2,200 researchers[12] who had published on the topic of AI. Nine in 10 (90.3%) predicted that there will be unintended consequences of AI applications, and three in four (75.9%) did not think that society is prepared for the potential effects of AI applications.

Who gets a say on AI?

Industry leaders, policymakers and academics have been slow to adjust to the rapid onset of powerful AI technologies. In 2017, researchers and scholars met in Pacific Grove for another small expert-only meeting, this time to outline principles for future AI research[13]. Senator Chuck Schumer plans to hold the first of a series of AI Insight Forums[14] on Sept. 13, 2023, to help Beltway policymakers think through AI risks with tech leaders like Meta’s Mark Zuckerberg and X’s Elon Musk.

Meanwhile, there is a hunger among the public to help shape our collective future. Only about a quarter of U.S. adults in our 2020 AI survey agreed that scientists should be able “to conduct their research without consulting the public” (27.8%). Two-thirds (64.6%) felt that “the public should have a say in how we apply scientific research and technology in society.”

The public’s desire for participation goes hand in hand with a widespread lack of trust in government and industry when it comes to shaping the development of AI. In a 2020 national survey[15] by our team, fewer than one in 10 Americans indicated that they “mostly” or “very much” trusted Congress (8.5%) or Facebook (9.5%) to keep society’s best interest in mind in the development of AI.

Algorithmic bias is just one concern about artificial intelligence.

A healthy dose of skepticism?

The public’s deep mistrust of key regulatory and industry players is not entirely unwarranted. Industry leaders have had a hard time disentangling their commercial interests[16] from efforts to develop an effective regulatory system for AI. This has led to a fundamentally messy policy environment.

Tech firms helping regulators think through the potential and complexities of technologies like AI is not always troublesome, especially if they are transparent about potential conflicts of interest. However, tech leaders’ input on technical questions about what AI can or might be used for is only a small piece of the regulatory puzzle.

Much more urgently, societies need to figure out what types of applications AI should be used for, and how. Answers to those questions can only emerge from public debates that engage a broad set of stakeholders[17] about values, ethics and fairness. Meanwhile, the public is growing concerned[18] about the use of AI.

AI might not wipe out humanity anytime soon, but it is likely to increasingly disrupt life as we currently know it. Societies have a finite window of opportunity to find ways to engage in good-faith debates and collaboratively work toward meaningful AI regulation to make sure that these challenges do not overwhelm them.

References

  1. ^ algorithmically assigns limited supplies (www.who.int)
  2. ^ AI fuels an arms race (www.axios.com)
  3. ^ struggle to hold nuanced debates (doi.org)
  4. ^ Asilomar Conference (doi.org)
  5. ^ lack of affordability (www.statnews.com)
  6. ^ CRISPR-based sickle cell (www.npr.org)
  7. ^ Sheila Jasanoff (www.semanticscholar.org)
  8. ^ said in a 2021 interview (doi.org)
  9. ^ study the public (scholar.google.com)
  10. ^ and political aspects (scholar.google.com)
  11. ^ of emerging science (scholar.google.com)
  12. ^ interviewed almost 2,200 researchers (scimep.wisc.edu)
  13. ^ principles for future AI research (gizmodo.com)
  14. ^ AI Insight Forums (www.washingtonpost.com)
  15. ^ 2020 national survey (scimep.wisc.edu)
  16. ^ disentangling their commercial interests (www.cnbc.com)
  17. ^ engage a broad set of stakeholders (doi.org)
  18. ^ growing concerned (www.pewresearch.org)


Read more https://theconversation.com/experts-alone-cant-handle-ai-social-scientists-explain-why-the-public-needs-a-seat-at-the-table-210848

Metropolitan republishes selected articles from The Conversation USA with permission
