ChatGPT and other generative AI could foster science denial and misunderstanding – here's how you can be on alert
- Written by Gale Sinatra, Professor of Education and Psychology, University of Southern California
Until very recently, if you wanted to know more about a controversial scientific topic – stem cell research, the safety of nuclear energy, climate change – you probably did a Google search. Presented with multiple sources, you chose what to read, selecting which sites or authorities to trust.
Now you have another option: You can pose your question to ChatGPT or another generative artificial intelligence platform and quickly receive a succinct response in paragraph form.
ChatGPT does not search the internet the way Google does. Instead, it generates responses to queries by predicting likely word combinations[1] from a massive amalgam of available online information.
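The idea of "predicting likely word combinations" can be illustrated with a toy sketch. This is not how ChatGPT actually works internally – real systems use large neural networks trained on vast text corpora – but a simple bigram model built on a made-up miniature corpus shows the same core principle: generate text by repeatedly choosing a statistically likely next word, with no lookup of sources and no check against facts.

```python
from collections import defaultdict, Counter

# Toy corpus standing in for the "massive amalgam of available
# online information" the article describes.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which. This bigram table is a vastly
# simplified stand-in for a trained language model: it captures
# only word-pair statistics, not meaning or truth.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=6):
    """Greedily emit the most frequent next word at each step."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```

The output is fluent-sounding word sequences assembled purely from frequency statistics – which is why such systems can produce text that reads confidently yet has no grounding in evidence.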
Although it has the potential for enhancing productivity[2], generative AI has been shown to have some major faults. It can produce misinformation[3]. It can create “hallucinations[4]” – a benign term for making things up. And it doesn’t always accurately solve reasoning problems. For example, when asked if both a car and a tank can fit through a doorway, it failed to consider both width and height[5]. Nevertheless, it is already being used to produce articles[6] and website content[7] you may have encountered, or as a tool[8] in the writing process. Yet you are unlikely to know if what you’re reading was created by AI.
As the authors of “Science Denial: Why It Happens and What to Do About It[9],” we are concerned about how generative AI may blur the boundaries between truth and fiction for those seeking authoritative scientific information.
Every media consumer needs to be more vigilant than ever in verifying scientific accuracy in what they read. Here’s how you can stay on your toes in this new information landscape.
How generative AI could promote science denial
Erosion of epistemic trust. All consumers of science information depend on judgments of scientific and medical experts. Epistemic trust[11] is the process of trusting knowledge you get from others. It is fundamental to the understanding and use of scientific information. Whether someone is seeking information about a health concern or trying to understand solutions to climate change, they often have limited scientific understanding and little access to firsthand evidence. With a rapidly growing body of information online, people must make frequent decisions about what and whom to trust. With the increased use of generative AI and the potential for manipulation, we believe trust is likely to erode further than it already has[12].
Misleading or just plain wrong. If there are errors or biases in the data on which AI platforms are trained, they can be reflected in the results[13]. In our own searches, when we have asked ChatGPT to regenerate multiple answers to the same question, we have gotten conflicting answers. Asked why, it responded, “Sometimes I make mistakes.” Perhaps the trickiest issue with AI-generated content is knowing when it is wrong.
Disinformation spread intentionally. AI can be used to generate compelling disinformation as text as well as deepfake images and videos. When we asked ChatGPT to “write about vaccines in the style of disinformation[14],” it produced a nonexistent citation with fake data. Geoffrey Hinton, former head of AI development at Google, quit to be free to sound the alarm, saying, “It is hard to see how you can prevent the bad actors from using it for bad things[15].” The potential to create and spread deliberately incorrect information about science already existed, but it is now dangerously easy.
Fabricated sources. ChatGPT provides responses with no sources at all, or if asked for sources, may present ones it made up[16]. We both asked ChatGPT to generate a list of our own publications. We each identified a few correct sources. More were hallucinations, yet seemingly reputable and mostly plausible, with actual previous co-authors, in similar sounding journals. This inventiveness is a big problem if a list of a scholar’s publications conveys authority to a reader who doesn’t take time to verify them.
Dated knowledge. ChatGPT doesn’t know what happened in the world after its training concluded. A query on what percentage of the world has had COVID-19 returned an answer prefaced by “as of my knowledge cutoff date of September 2021.” Given how rapidly knowledge advances in some areas, this limitation could mean readers get erroneous outdated information. If you’re seeking recent research on a personal health issue, for instance, beware.
Rapid advancement and poor transparency. AI systems continue to become more powerful and learn faster[17], and they may learn more science misinformation along the way. Google recently announced 25 new embedded uses of AI in its services[18]. At this point, insufficient guardrails are in place[19] to assure that generative AI will become a more accurate purveyor of scientific information over time.
What can you do?
If you use ChatGPT or other AI platforms, recognize that they might not be completely accurate. The burden falls to the user to discern accuracy.
Increase your vigilance. AI fact-checking apps may be available soon[21], but for now, users must serve as their own fact-checkers. There are steps we recommend[22]. The first is: Be vigilant. People often reflexively share information found from searches on social media with little or no vetting. Know when to become more deliberately thoughtful and when it’s worth identifying and evaluating sources of information. If you’re trying to decide how to manage a serious illness or to understand the best steps for addressing climate change, take time to vet the sources.
Improve your fact-checking. A second step is lateral reading[23], a process professional fact-checkers use. Open a new window and search for information about the sources[24], if provided. Is the source credible? Does the author have relevant expertise? And what is the consensus of experts? If no sources are provided or you don’t know if they are valid, use a traditional search engine to find and evaluate experts on the topic.
Evaluate the evidence. Next, take a look at the evidence and its connection to the claim. Is there evidence that genetically modified foods are safe? Is there evidence that they are not? What is the scientific consensus? Evaluating the claims will take effort beyond a quick query to ChatGPT.
If you begin with AI, don’t stop there. Exercise caution in using it as the sole authority on any scientific issue. You might see what ChatGPT has to say about genetically modified organisms or vaccine safety, but also follow up with a more diligent search using traditional search engines before you draw conclusions.
Assess plausibility. Judge whether the claim is plausible. Is it likely to be true[25]? If AI makes an implausible (and inaccurate) statement like “1 million deaths were caused by vaccines, not COVID-19[26],” consider if it even makes sense. Make a tentative judgment and then be open to revising your thinking once you have checked the evidence.
Promote digital literacy in yourself and others. Everyone needs to up their game. Improve your own digital literacy[27], and if you are a parent, teacher, mentor or community leader, promote digital literacy in others. The American Psychological Association provides guidance on fact-checking online information[28] and recommends teens be trained in social media skills[29] to minimize risks to health and well-being. The News Literacy Project[30] provides helpful tools for improving and supporting digital literacy.
Arm yourself with the skills you need to navigate the new AI information landscape. Even if you don’t use generative AI, it is likely you have already read articles created by it or developed from it. It can take time and effort to find and evaluate reliable information about science online – but it is worth it.
References
- ^ predicting likely word combinations (www.washingtonpost.com)
- ^ enhancing productivity (hbr.org)
- ^ produce misinformation (www.scientificamerican.com)
- ^ hallucinations (www.nytimes.com)
- ^ failed to consider both width and height (www.nytimes.com)
- ^ produce articles (www.washingtonpost.com)
- ^ website content (www.nytimes.com)
- ^ as a tool (www.nytimes.com)
- ^ Science Denial: Why It Happens and What to Do About It (global.oup.com)
- ^ Epistemic trust (doi.org)
- ^ it already has (www.pewresearch.org)
- ^ can be reflected in the results (theconversation.com)
- ^ write about vaccines in the style of disinformation (www.scientificamerican.com)
- ^ using it for bad things (www.nytimes.com)
- ^ ones it made up (economistwritingeveryday.com)
- ^ more powerful and learn faster (www.nytimes.com)
- ^ 25 new embedded uses of AI in its services (www.nytimes.com)
- ^ insufficient guardrails are in place (theconversation.com)
- ^ AI fact-checking apps may be available soon (www.niemanlab.org)
- ^ There are steps we recommend (www.nsta.org)
- ^ lateral reading (doi.org)
- ^ information about the sources (www.nsta.org)
- ^ Is it likely to be true (doi.org)
- ^ 1 million deaths were caused by vaccines, not COVID-19 (www.usatoday.com)
- ^ Improve your own digital literacy (theconversation.com)
- ^ fact-checking online information (www.apa.org)
- ^ trained in social media skills (www.apa.org)
- ^ The News Literacy Project (newslit.org)