
AI is exciting – and an ethical minefield: 4 essential reads on the risks and concerns about this technology

  • Written by Molly Jackson, Religion and Ethics Editor

If you’re like me, you’ve spent a lot of time over the past few months trying to figure out what this AI thing is all about. Large language models, generative AI, algorithmic bias – it’s a lot for the less tech-savvy among us to sort out as we try to make sense of the myriad headlines about artificial intelligence.

But understanding how AI works is just part of the dilemma. As a society, we’re also confronting concerns about its social, psychological and ethical effects. Here we spotlight articles about the deeper questions the AI revolution raises about bias and inequality, the learning process, the impact on jobs, and even the artistic process.

1. Ethical debt

When a company rushes software to market, it often accrues “technical debt”: the cost of having to fix bugs after a program is released, instead of ironing them out beforehand.

There are examples of this in AI as companies race ahead to compete with each other. More alarming, though, is “ethical debt,” when development teams haven’t considered possible social or ethical harms – how AI could replace human jobs, for example, or when algorithms end up reinforcing biases.

Casey Fiesler, a technology ethics expert at the University of Colorado Boulder, wrote that she’s “a technology optimist who thinks and prepares like a pessimist”: someone who puts in time speculating about what might go wrong.

That kind of speculation is an especially useful skill for technologists trying to envision consequences that might not impact them, Fiesler explained, but that could hurt “marginalized groups that are underrepresented” in tech fields. When it comes to ethical debt, she noted, “the people who incur it are rarely the people who pay for it in the end.”

Read more: AI has social consequences, but who pays the price? Tech companies' problem with 'ethical debt'

2. Is anybody there?

AI programs’ abilities can give the impression that they are sentient, but they’re not, explained Nir Eisikovits, director of the Applied Ethics Center at the University of Massachusetts Boston. “ChatGPT and similar technologies are sophisticated sentence completion applications – nothing more, nothing less,” he wrote.

But saying AI isn’t conscious doesn’t mean it’s harmless.

“To me,” Eisikovits explained, “the pressing question is not whether machines are sentient but why it is so easy for us to imagine that they are.” Humans easily project human features onto just about anything, including technology. That tendency to anthropomorphize “points to real risks of psychological entanglement with technology,” according to Eisikovits, who studies AI’s impact on how people understand themselves.

People give names to boats and cars – and can get attached to AI, too. Yuichiro Chino/Moment via Getty Images

Considering how many people talk to their pets and cars, it shouldn’t be a surprise that chatbots can come to mean so much to people who engage with them. The next step, though, is “strong guardrails” to prevent programs from taking advantage of that emotional connection.

Read more: AI isn't close to becoming sentient – the real danger lies in how easily we're prone to anthropomorphize it

3. Putting pen to paper

From the start, ChatGPT fueled parents’ and teachers’ fears about cheating. How could educators – or college admissions officers, for that matter – figure out if an essay was written by a human or a chatbot?

But AI sparks more fundamental questions about writing, according to Naomi Baron, an American University linguist who studies technology’s effects on language. AI’s potential threat to writing isn’t just about honesty, but about the ability to think itself.

American writer Flannery O'Connor sits with a copy of her novel ‘Wise Blood,’ published in 1952. Apic/Hulton Archive via Getty Images

Baron pointed to novelist Flannery O'Connor’s remark that “I write because I don’t know what I think until I read what I say.” In other words, writing isn’t just a way to put your thoughts on paper; it’s a process to help sort out your thoughts in the first place.

AI text generation can be a handy tool, Baron wrote, but “there’s a slippery slope between collaboration and encroachment.” As we wade into a world of more and more AI, it’s key to remember that “crafting written work should be a journey, not just a destination.”

Read more: How ChatGPT robs students of motivation to write and think for themselves

4. The value of art

Generative AI programs don’t just produce text, but also complex images – which have even captured a prize or two. In theory, allowing AI to do nitty-gritty execution might free up human artists’ big-picture creativity.

Not so fast, said Eisikovits and Alec Stubbs, who is also a philosopher at the University of Massachusetts Boston. The finished object viewers appreciate is just part of the process we call “art.” For creator and appreciator alike, what makes art valuable is “the work of making something real and working through its details”: the struggle to turn ideas into something we can see.

Read more: ChatGPT, DALL-E 2 and the collapse of the creative process

Editor’s note: This story is a roundup of articles from The Conversation’s archives.


Read more https://theconversation.com/ai-is-exciting-and-an-ethical-minefield-4-essential-reads-on-the-risks-and-concerns-about-this-technology-204444

Metropolitan republishes selected articles from The Conversation USA with permission
