Written by Eric Smalley, Science + Technology Editor
ChatGPT and its AI chatbot cousins ruled 2023: 4 essential reads that puncture the hype

Within four months of ChatGPT’s launch on Nov. 30, 2022, most Americans had heard of the AI chatbot[1]. Hype about – and fear of – the technology was at a fever pitch for much of 2023.

OpenAI’s ChatGPT, Google’s Bard, Anthropic’s Claude and Microsoft’s Copilot are among the chatbots that use large language models to produce uncannily humanlike conversations. The experience of interacting with one of these chatbots, combined with Silicon Valley spin, can leave the impression that these technical marvels are conscious entities.

But the reality is considerably less magical or glamorous. The Conversation published several articles in 2023 that dispel key misperceptions about this latest generation of AI chatbots: that they know something about the world, can make decisions, can replace search engines and operate independently of humans.

1. Bodiless know-nothings

Large-language-model-based chatbots seem to know a lot. You can ask them questions, and they more often than not answer correctly. Despite the occasional comically incorrect answer, the chatbots can interact with you much as people do – people who share your experience of being a living, breathing human being.

But these chatbots are sophisticated statistical machines that are extremely good at predicting the best sequence of words to respond with. Their “knowledge” of the world is actually human knowledge as reflected through the massive amount of human-generated text the chatbots’ underlying models are trained on.
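To make the “statistical machine” point concrete, here is a deliberately toy sketch in Python: a word-pair counter, not a neural network, that picks whatever word most often followed the previous one in its tiny training text. Real chatbots learn far subtler patterns from billions of documents, so this only illustrates the prediction idea, not how any production system is built.

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count how often each word follows each other word in the toy corpus.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        # Return the word most often seen after `word`, or None if unseen.
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(predict_next("the"))  # -> 'cat', the most common continuation

The model “knows” only which words tend to follow which; it has never seen, held or wrapped anything.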

Arizona State psychology researcher Arthur Glenberg[2] and University of California, San Diego cognitive scientist Cameron Robert Jones[3] explain how people’s knowledge of the world depends as much on their bodies as their brains[4]. “People’s understanding of a term like ‘paper sandwich wrapper,’ for example, includes the wrapper’s appearance, its feel, its weight and, consequently, how we can use it: for wrapping a sandwich,” they explained.

This knowledge means people also intuitively know other ways of making use of a sandwich wrapper, such as an improvised means of covering your head in the rain. Not so with AI chatbots. “People understand how to make use of stuff in ways that are not captured in language-use statistics,” they wrote.

Read more: It takes a body to understand the world – why ChatGPT and other language AIs don't know what they're saying[5]

AI researchers Emily Bender and Casey Fiesler discuss some of ChatGPT’s limitations, including problems of bias.

2. Lack of judgment

ChatGPT and its cousins can also give the impression of having cognitive abilities – like understanding the concept of negation or making rational decisions – thanks to all the human language they’ve ingested. This impression has led cognitive scientists to test these AI chatbots to assess how they compare to humans in various ways.

University of Southern California AI researcher Mayank Kejriwal[6] tested large language models’ understanding of expected gain, a measure of how well someone understands the stakes in a betting scenario. He found that the models bet randomly[7].

“This is the case even when we give it a trick question like: If you toss a coin and it comes up heads, you win a diamond; if it comes up tails, you lose a car. Which would you take? The correct answer is heads, but the AI models chose tails about half the time,” he wrote.
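For readers unfamiliar with the term, expected gain is just probability-weighted arithmetic. A minimal sketch of the coin-toss example, with hypothetical dollar figures standing in for the diamond and the car (the study itself used abstract bets, not these numbers):

    def expected_gain(outcomes):
        # Sum of probability-weighted payoffs for a bet.
        return sum(p * payoff for p, payoff in outcomes)

    DIAMOND = 5_000   # hypothetical value of the diamond, in dollars
    CAR = 30_000      # hypothetical value of the car, in dollars

    # The trick question asks which outcome you would take:
    choices = {"heads": +DIAMOND, "tails": -CAR}
    print(max(choices, key=choices.get))   # -> 'heads', the only positive payoff

    # Expected gain of the 50/50 coin toss as a whole:
    print(expected_gain([(0.5, DIAMOND), (0.5, -CAR)]))   # -> -12500.0

Whatever values you plug in, winning something beats losing something, which is exactly the judgment the chatbots failed to make about half the time.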

Read more: Don't bet with ChatGPT – study shows language AIs often make irrational decisions[8]

3. Summaries, not results

While it might not be surprising that AI chatbots aren’t as humanlike as they can seem, they’re not necessarily digital superstars either. For instance, ChatGPT and the like are increasingly used in place of search engines to answer queries. The results are mixed.

University of Washington information scientist Chirag Shah[9] explains that large language models perform well as information summarizers, combining key information from multiple search engine results into a single block of text. But this is a double-edged sword[10]. It is useful for getting the gist of a topic – assuming no “hallucinations” – but it leaves the searcher with no idea of the sources of the information and robs them of the serendipity of stumbling onto unexpected information.

“The problem is that even when these systems are wrong only 10% of the time, you don’t know which 10%,” Shah wrote. “That’s because these systems lack transparency – they don’t reveal what data they are trained on, what sources they have used to come up with answers or how those responses are generated.”
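A rough illustration of how the sources get lost: in the sketch below, blend() is merely a stand-in for the model’s summarization step, and the snippets and URLs are invented for the example.

    snippets = [
        {"source": "example.org/a", "text": "Coffee contains caffeine."},
        {"source": "example.org/b", "text": "Caffeine is a mild stimulant."},
    ]

    def blend(snips):
        # Stand-in for the model's summarization step.
        return " ".join(s["text"] for s in snips)

    answer = blend(snippets)
    print(answer)   # one fluent block of text for the user...
    # ...but nothing in `answer` carries the 'source' fields, so the
    # searcher never learns where the information came from.

The fluency of the final block is precisely what hides the provenance problem from the person reading it.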

Read more: AI information retrieval: A search engine researcher explains the promise and peril of letting ChatGPT and its cousins search the web for you[11]

A look at the humans shaping AI chatbots behind the curtain.

4. Not 100% artificial

Perhaps the most pernicious misperception about AI chatbots is that because they are built on artificial intelligence technology, they are highly automated. While you might be aware that large language models are trained on text produced by humans, you might not be aware of the thousands of workers – and millions of users – continuously honing the models, teaching them to weed out harmful responses and other unwanted behavior.

Georgia Tech sociologist John P. Nelson[12] pulled back the curtain on the big tech companies to show that they use workers, typically in the Global South, and feedback from users[13] to teach the models which responses are good and which are bad.

“There are many, many human workers hidden behind the screen, and they will always be needed if the model is to continue improving or to expand its content coverage,” he wrote.
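One way to picture the feedback Nelson describes is as a stream of labeled records like the sketch below. The field names and example prompts are hypothetical, and real labeling pipelines differ from company to company.

    from dataclasses import dataclass

    @dataclass
    class FeedbackRecord:
        prompt: str     # what the user asked
        response: str   # what the model answered
        rating: int     # human judgment: +1 (good) or -1 (harmful or unwanted)
        rater: str      # e.g. "contract worker" or "end user"

    examples = [
        FeedbackRecord("Write an insult about my coworker.", "Sure, here...", -1, "contract worker"),
        FeedbackRecord("Explain photosynthesis simply.", "Plants turn light...", +1, "end user"),
    ]

    # Labels like these become the training signal that steers the model
    # toward helpful answers and away from harmful or unwanted ones.
    approved = [r for r in examples if r.rating > 0]
    print(len(approved))   # -> 1

Every thumbs-up or thumbs-down a user clicks adds another such record, which is why the “artificial” intelligence keeps depending on human judgment.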

Read more: ChatGPT and other language AIs are nothing without humans – a sociologist explains how countless hidden people make the magic[14]

This story is a roundup of articles from The Conversation’s archives.

References

  1. ^ most Americans had heard of the AI chatbot (www.pewresearch.org)
  2. ^ Arthur Glenberg (scholar.google.com)
  3. ^ Cameron Robert Jones (scholar.google.com)
  4. ^ depends as much on their bodies as their brains (theconversation.com)
  5. ^ It takes a body to understand the world – why ChatGPT and other language AIs don't know what they're saying (theconversation.com)
  6. ^ Mayank Kejriwal (scholar.google.com)
  7. ^ models bet randomly (theconversation.com)
  8. ^ Don't bet with ChatGPT – study shows language AIs often make irrational decisions (theconversation.com)
  9. ^ Chirag Shah (scholar.google.com)
  10. ^ this is a double-edged sword (theconversation.com)
  11. ^ AI information retrieval: A search engine researcher explains the promise and peril of letting ChatGPT and its cousins search the web for you (theconversation.com)
  12. ^ John P. Nelson (scholar.google.com)
  13. ^ use workers, typically in the Global South, and feedback from users (theconversation.com)
  14. ^ ChatGPT and other language AIs are nothing without humans – a sociologist explains how countless hidden people make the magic (theconversation.com)

Authors: Eric Smalley, Science + Technology Editor

Read more https://theconversation.com/chatgpt-and-its-ai-chatbot-cousins-ruled-2023-4-essential-reads-that-puncture-the-hype-220035

Metropolitan republishes selected articles from The Conversation USA with permission

Visit The Conversation to see more