
AI exemplifies the 'free rider' problem – here's why that points to regulation

Written by Tim Juvshik, Visiting Assistant Professor of Philosophy, Clemson University

On March 22, 2023, thousands of researchers and tech leaders – including Elon Musk and Apple co-founder Steve Wozniak – published an open letter[1] calling for a slowdown in the artificial intelligence race. Specifically, the letter recommended that labs pause training for technologies stronger than OpenAI’s GPT-4, the most sophisticated generation[2] of today’s language-generating AI systems, for at least six months.

Sounding the alarm on risks posed by AI[3] is nothing new – academics have issued warnings about the risks of superintelligent machines[4] for decades now. There is still no consensus about the likelihood of creating[5] artificial general intelligence[6], autonomous AI systems that match or exceed humans[7] at most economically valuable tasks. However, it is clear that current AI systems already pose plenty of dangers, from racial bias in facial recognition technology[8] to the increased threat of misinformation and student cheating[9].

While the letter calls for industry and policymakers to cooperate, there is currently no mechanism to enforce such a pause. As a philosopher who studies technology ethics[10], I’ve noticed that AI research exemplifies the “free rider problem[11].” I’d argue that this should guide how societies respond to its risks – and that good intentions won’t be enough.

Riding for free

Free riding is a common consequence of what philosophers call “collective action problems.” These are situations in which, as a group, everyone would benefit from a particular action, but as individuals, each member would benefit from not doing it[12].

Such problems most commonly involve public goods[13]. For example, suppose a city’s inhabitants have a collective interest in funding a subway system, which would require that each of them pay a small amount through taxes or fares. Everyone would benefit, yet it’s in each individual’s best interest to save money and avoid paying their fair share. After all, they’ll still be able to enjoy the subway if most other people pay.

Hence the “free rider” issue: Some individuals won’t contribute their fair share but will still get a “free ride” – literally, in the case of the subway. If every individual failed to pay, though, no one would benefit.
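To make that incentive structure concrete, here is a minimal sketch in Python of a standard linear public goods game – the textbook model behind such collective action problems. All of the numbers (ten riders, a $2 fare, a benefit multiplier of 3) are hypothetical, chosen only to illustrate the logic, not figures from any real transit system.

```python
# A minimal, hypothetical sketch of the free-rider incentive, using a
# standard linear public goods game. N riders can each pay a fare; the
# pooled fares fund a subway worth MULTIPLIER times their cost, and that
# value is shared equally by everyone, payers and non-payers alike.

N = 10          # would-be subway riders (hypothetical)
FARE = 2.0      # each rider's fair share (hypothetical)
MULTIPLIER = 3  # pooled fares create 3x their cost in shared value
                # (free riding dominates whenever 1 < MULTIPLIER < N)

def payoff(i_pays: bool, others_paying: int) -> float:
    """Rider i's net benefit: equal share of pooled value, minus own fare."""
    total_paid = FARE * (others_paying + (1 if i_pays else 0))
    shared_benefit = MULTIPLIER * total_paid / N
    return shared_benefit - (FARE if i_pays else 0.0)

# Whatever the others do, not paying yields a higher individual payoff...
for others in (0, 5, 9):
    print(f"{others} others pay: pay={payoff(True, others):5.2f}  "
          f"free ride={payoff(False, others):5.2f}")

# ...yet everyone paying beats no one paying:
print(f"all pay: {payoff(True, N - 1):.2f}   none pay: {payoff(False, 0):.2f}")
```

Whatever the other riders do, each individual’s payoff is higher when free riding, yet universal payment beats universal free riding – exactly the tension the subway example describes.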

Philosophers tend to argue that it is unethical to “free ride[14],” since free riders fail to reciprocate others’ paying their fair share. Many philosophers also argue that free riders fail in their responsibilities as part of the social contract[15], the collectively agreed-upon cooperative principles that govern a society. In other words, they fail to uphold their duty to be contributing members of society.

Hit pause, or get ahead?

Like the subway, AI is a public good, given its potential to complete tasks far more efficiently than human operators: everything from diagnosing patients[16] by analyzing medical data to taking over high-risk jobs in the military[17] or improving mining safety[18].

But both its benefits and dangers will affect everyone, even people who don’t personally use AI. To reduce AI’s risks[19], everyone has an interest in the industry’s research being conducted carefully, safely and with proper oversight and transparency. For example, misinformation and fake news already pose serious threats to democracies, but AI has the potential to exacerbate the problem[20] by spreading “fake news” faster and more effectively than people can.

[Image: A phone screen displays a statement from the head of security policy at Meta warning of a fake video of Ukrainian President Volodymyr Zelenskyy. Olivier Douliery/AFP via Getty Images[21]]

Even if some tech companies voluntarily halted their experiments, however, other corporations would have a monetary interest in continuing their own AI research, allowing them to get ahead in the AI arms race. What’s more, voluntarily pausing AI experiments would allow other companies to get a free ride by eventually reaping the benefits of safer, more transparent AI development, along with the rest of society.

Sam Altman, CEO of OpenAI, has acknowledged that the company is scared of the risks[22] posed by its chatbot system, ChatGPT. “We’ve got to be careful here,” he said in an interview with ABC News, mentioning the potential for AI to produce misinformation. “I think people should be happy that we are a little bit scared of this.”

In a letter published April 5, 2023, OpenAI said that the company believes powerful AI systems need regulation[23] to ensure thorough safety evaluations and that it would “actively engage with governments on the best form such regulation could take.” Nevertheless, OpenAI is continuing with the gradual rollout[24] of GPT-4, and the rest of the industry is also continuing to develop and train advanced AIs.

Ripe for regulation

Decades of social science research[25] on collective action problems have shown that, where trust and goodwill are insufficient to avoid free riders[26], regulation is often the only alternative. Relying on voluntary compliance is what creates free-rider scenarios in the first place – and government action[27] is at times the way to nip them in the bud.

Further, such regulations must be enforceable[28]. After all, would-be subway riders might be unlikely to pay the fare unless there were a threat of punishment.
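Continuing the same hypothetical sketch from above, an enforceable penalty is what flips the individual calculation: once the expected fine for evasion exceeds the fare, paying becomes the cheaper option no matter what anyone else does. The fine amount and detection probability below are, again, illustrative assumptions.

```python
# Extending the hypothetical subway sketch: with enforcement, the
# expected cost of free riding can exceed the fare, flipping the
# individually rational choice. FINE and P_CAUGHT are illustrative
# assumptions, not real figures.

FARE = 2.0       # fare, as before
FINE = 20.0      # penalty for fare evasion, if caught (hypothetical)
P_CAUGHT = 0.15  # chance an inspector catches an evader (hypothetical)

expected_evasion_cost = P_CAUGHT * FINE  # 3.00, which exceeds the 2.00 fare

print(f"cost of paying: {FARE:.2f}")
print(f"expected cost of free riding: {expected_evasion_cost:.2f}")
# Paying is now the cheaper option regardless of what other riders do.
```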

Take one of the most dramatic free-rider problems in the world today: climate change[29]. As a planet, we all have a high-stakes interest in maintaining a habitable environment. In a system that allows free riders, though, the incentives for any one country to actually follow greener guidelines are slim.

The Paris Agreement[30], which is currently the most encompassing global accord on climate change, is voluntary, and the United Nations has no recourse to enforce it. Even if the European Union and China voluntarily limited their emissions, for example, the United States and India could “free ride” on the reduction of carbon dioxide while continuing to emit.

[Image: Global leaders celebrate the adoption of the historic global warming pact at the U.N.'s COP21 climate change conference in 2015. Francois Guillot/AFP via Getty Images[31]]

Global challenge

Similarly, the free-rider problem grounds arguments to regulate AI development. In fact, climate change[32] is a particularly close parallel, since neither the risks posed by an AI program nor greenhouse gas emissions are restricted to their country of origin.

Moreover, the race to develop more advanced AI is an international one. Even if the U.S. introduced federal regulation of AI research and development, China and Japan could ride free and continue their own domestic AI programs[33].

Effective regulation and enforcement of AI would require global collective action and cooperation, just as with climate change. In the U.S., strict enforcement[34] would require federal oversight of research and the ability to impose hefty fines or shut down noncompliant AI experiments to ensure responsible development – whether that be through regulatory oversight boards, whistleblower protections or, in extreme cases, laboratory or research lockdowns and criminal charges.

Without enforcement, though, there will be free riders – and free riders mean the AI threat won’t abate anytime soon.

References

  1. ^ open letter (futureoflife.org)
  2. ^ most sophisticated generation (benlevinstein.substack.com)
  3. ^ risks posed by AI (doi.org)
  4. ^ superintelligent machines (mitpress.mit.edu)
  5. ^ the likelihood of creating (doi.org)
  6. ^ artificial general intelligence (doi.org)
  7. ^ exceed humans (openai.com)
  8. ^ facial recognition technology (sitn.hms.harvard.edu)
  9. ^ student cheating (quillette.com)
  10. ^ a philosopher who studies technology ethics (www.clemson.edu)
  11. ^ free rider problem (plato.stanford.edu)
  12. ^ benefit from not doing it (doi.org)
  13. ^ public goods (www.investopedia.com)
  14. ^ it is unethical to “free ride (doi.org)
  15. ^ the social contract (philarchive.org)
  16. ^ diagnosing patients (doi.org)
  17. ^ high-risk jobs in the military (www.aljazeera.com)
  18. ^ improving mining safety (doi.org)
  19. ^ AI’s risks (www.bloomsbury.com)
  20. ^ exacerbate the problem (www.nytimes.com)
  21. ^ Olivier Douliery/AFP via Getty Images (www.gettyimages.com)
  22. ^ is scared of the risks (abcnews.go.com)
  23. ^ need regulation (openai.com)
  24. ^ gradual rollout (openai.com)
  25. ^ social science research (doi.org)
  26. ^ to avoid free riders (doi.org)
  27. ^ government action (cdn.mises.org)
  28. ^ such regulations must be enforceable (doi.org)
  29. ^ climate change (www.ceu.edu)
  30. ^ The Paris Agreement (unfccc.int)
  31. ^ Francois Guillot/AFP via Getty Images (www.gettyimages.com)
  32. ^ climate change (doi.org)
  33. ^ AI programs (www.investglass.com)
  34. ^ strict enforcement (dx.doi.org)


Read more https://theconversation.com/ai-exemplifies-the-free-rider-problem-heres-why-that-points-to-regulation-203489

Metropolitan republishes selected articles from The Conversation USA with permission
