AI could shore up democracy – here's one way

Written by Bruce Schneier, Adjunct Lecturer in Public Policy, Harvard Kennedy School

It’s become fashionable to think of artificial intelligence as an inherently dehumanizing technology[1], a ruthless force of automation[2] that has unleashed legions of virtual skilled laborers in faceless form. But what if AI turns out to be the one tool able to identify what makes your ideas special, recognizing your unique perspective and potential on the issues where it matters most?

You’d be forgiven if you’re distraught about society’s ability to grapple with this new technology. So far, there’s no lack of prognostications[3] about[4] the[5] democratic[6] doom[7] that AI may wreak[8] on the U.S. system of government. There are legitimate reasons to be concerned that AI could spread misinformation[9], break public comment processes[10] on regulations, inundate legislators[11] with artificial constituent outreach, help to automate corporate lobbying[12], or even generate laws[13] in a way tailored to benefit narrow interests.

But there are reasons to feel more sanguine as well. Many groups have started demonstrating the potential beneficial[14] uses of AI for governance. A key constructive-use case for AI in democratic processes is to serve as discussion moderator[15] and consensus builder[16].

To help democracy scale better[17] in the face of growing, increasingly interconnected populations – as well as the wide availability of AI language tools that can generate reams of text at the click of a button – the U.S. will need to leverage AI’s capability to rapidly digest, interpret and summarize this content[18].

An old problem

There are two different ways to approach the use of generative AI to improve civic participation and governance. Each is likely to lead to a drastically different experience for public policy advocates and other people trying to have their voices heard in a future system where AI chatbots are both the dominant readers and writers of public comment.

For example, consider individual letters to a representative, or comments as part of a regulatory rulemaking process. In both cases, we the people are telling the government what we think and want.

For more than half a century[19], agencies have been using human power to read through all the comments received and to generate summaries of and responses to their major themes. To be sure, digital technology has helped.

Taking in comments from the public has been a challenge for representatives and their staffs for many decades. AP Photo[20]

In 2021, the Council of Federal Chief Data Officers recommended modernizing[21] the comment review process by implementing natural language processing tools for removing duplicates and clustering similar comments in processes governmentwide. These tools are simplistic by the standards of 2023 AI. They work by assessing the semantic similarity of comments using metrics like word frequency (how often did you say “personhood”?), clustering similar comments together, and giving reviewers a sense of the topics they relate to.
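To make that concrete, here is a minimal sketch of such a pre-LLM pipeline, assuming scikit-learn; the sample comments, similarity threshold and cluster count are invented for illustration:

```python
# A minimal sketch of the pre-LLM pipeline: word-frequency (TF-IDF)
# vectors, near-duplicate removal, then clustering of similar comments.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented sample comments for illustration.
comments = [
    "Corporate personhood distorts campaign finance law.",
    "Corporate personhood distorts campaign finance law!",  # near-duplicate
    "Please protect small businesses from this rule.",
    "This regulation will burden small business owners.",
]

# Represent each comment by its word frequencies.
X = TfidfVectorizer(stop_words="english").fit_transform(comments)

# Remove near-duplicates: keep a comment only if it is not almost
# identical (cosine similarity > 0.9) to one already kept.
kept = []
for i in range(len(comments)):
    if all(cosine_similarity(X[i], X[j])[0, 0] <= 0.9 for j in kept):
        kept.append(i)

# Cluster the remaining comments so reviewers see topic groups.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[kept])
for i, label in zip(kept, labels):
    print(f"cluster {label}: {comments[i]}")
```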

Getting the gist

Think of this approach as collapsing public opinion. These tools take a big, hairy mass of comments from thousands of people and condense them into a tidy set of essential reading that generally suffices to represent the broad themes of community feedback. This is far easier for a small agency staff or legislative office to handle than actually reading through that many individual perspectives.

But what’s lost in this collapsing is individuality, personality and relationships. The reviewer of the condensed comments may miss the personal circumstances that led so many commenters to write in with a common point of view, and may overlook the arguments and anecdotes that might be the most persuasive content of the testimony.

Most importantly, the reviewers may miss out on the opportunity to recognize committed and knowledgeable advocates, whether interest groups or individuals, who could have long-term, productive relationships with the agency.

These drawbacks have real ramifications for the potential efficacy of those thousands of individual messages, undermining the very purpose that drove all those people to write in. Still, practicality tips the balance toward some kind of summarization approach. A passionate letter of advocacy doesn’t hold any value if regulators or legislators simply don’t have time to read it.

Finding the signals and the noise

There is another approach. In addition to collapsing testimony through summarization, government staff can use modern AI techniques to explode it. They can automatically recognize a distinctive argument in one piece of testimony that appears nowhere in the thousands of other testimonies received. They can discover the kinds of constituent stories and experiences that legislators love to repeat at hearings, town halls and campaign events. This approach can sustain the potential of individual public comments to shape legislation even as the volume of testimony rises exponentially.

Representatives often use anecdotes from constituents to humanize issues.

In computing, there is a rich history of that type of automation task in what is called outlier detection[22]. Traditional methods generally involve finding a simple model that explains most of the data in question, like a set of topics that well describe the vast majority of submitted comments. But then they go a step further by isolating those data points that fall outside the mold — comments that don’t use arguments that fit into the neat little clusters.
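As a rough illustration of that classic recipe (a sketch, not any agency’s actual pipeline), one could cluster word-frequency vectors and then flag the comments that sit far from every cluster center:

```python
# Sketch: surface "outlier" comments that fit none of the common topic
# clusters, using distance to the nearest K-means cluster center.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def find_outlier_comments(comments, n_clusters=5, quantile=0.95):
    """Return the comments whose distance to their nearest cluster
    center falls in the top tail -- arguments outside the mold."""
    X = TfidfVectorizer(stop_words="english").fit_transform(comments)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    # transform() gives each comment's distance to every cluster center;
    # take the distance to the closest one.
    dists = km.transform(X).min(axis=1)
    cutoff = np.quantile(dists, quantile)
    return [c for c, d in zip(comments, dists) if d > cutoff]
```

The comments this returns are precisely the ones a purely cluster-based summary would gloss over.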

State-of-the-art AI language models aren’t necessary for identifying outliers in text document data sets, but using them could bring a greater degree of sophistication and flexibility to this procedure. AI language models can be tasked to identify novel perspectives within a large body of text through prompting alone. You simply need to tell the AI to find them[23].
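In practice, that prompt-only approach might look something like the sketch below. It assumes the OpenAI Python SDK with an API key in the environment; the model name and prompt wording are illustrative placeholders, not a recommended configuration:

```python
# Sketch: ask a language model to surface novel perspectives by
# prompting alone. The model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def find_novel_perspectives(comments: list[str]) -> str:
    prompt = (
        "Below are public comments on a proposed rule. Identify any "
        "comment that raises a distinctive argument, story or "
        "perspective not found in the others, and briefly explain "
        "why it stands out.\n\n" + "\n---\n".join(comments)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```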

In the absence of that ability to extract distinctive comments, lawmakers and regulators have no choice but to prioritize based on other factors. If there is nothing better, “who donated the most to our campaign[24]” or “which company employs the most of my former staffers[25]” become reasonable metrics for prioritizing public comments. AI can help elected representatives do much better.

If Americans want AI to help revitalize the country’s ailing democracy, they need to think about how to align the incentives of elected leaders with those of individuals. Right now, as much as 90% of constituent communications are mass emails[26] organized by advocacy groups, and they go largely ignored by staffers. People are channeling their passions into vast digital warehouses where algorithms box up their expressions so they don’t have to be read. As a result, the incentive for citizens and advocacy groups is to fill that box up to the brim, so someone will notice it’s overflowing.

A talented, knowledgeable, engaged citizen should be able to articulate their ideas and share their personal experiences and distinctive points of view in a way that can both be included with everyone else’s comments, contributing to the summary, and be recognized individually among the rest. An effective comment summarization process would extricate those unique points of view from the pile and put them into lawmakers’ hands.

References

  1. ^ dehumanizing technology (nymag.com)
  2. ^ force of automation (www.nber.org)
  3. ^ prognostications (www.washingtonpost.com)
  4. ^ about (www.pbs.org)
  5. ^ the (www.nytimes.com)
  6. ^ democratic (www.scientificamerican.com)
  7. ^ doom (www.bostonglobe.com)
  8. ^ AI may wreak (theconversation.com)
  9. ^ spread misinformation (openai.com)
  10. ^ break public comment processes (regulatorystudies.columbian.gwu.edu)
  11. ^ inundate legislators (www.brookings.edu)
  12. ^ automate corporate lobbying (www.vice.com)
  13. ^ generate laws (www.technologyreview.com)
  14. ^ beneficial (www.nytimes.com)
  15. ^ discussion moderator (www.npr.org)
  16. ^ consensus builder (arxiv.org)
  17. ^ scale better (cyberscoop.com)
  18. ^ this content (www.brookings.edu)
  19. ^ half a century (core.ac.uk)
  20. ^ AP Photo (newsroom.ap.org)
  21. ^ recommended modernizing (resources.data.gov)
  22. ^ outlier detection (scikit-learn.org)
  23. ^ find them (andrewmayneblog.wordpress.com)
  24. ^ who donated the most to our campaign (doi.org)
  25. ^ which company employs the most of my former staffers (doi.org)
  26. ^ mass emails (www.congressfoundation.org)


Read more https://theconversation.com/ai-could-shore-up-democracy-heres-one-way-207278
