Visual misinformation is widespread on Facebook – and often undercounted by researchers
- Written by Yunkang Yang, Assistant Professor of Communication, Texas A&M University
How much misinformation is on Facebook? Several studies have found that the amount of misinformation on Facebook is low[1] or that the problem has declined[2] over[3] time[4].
This previous work, though, missed most of the story.
We are a communications researcher[5], a media and public affairs researcher[6] and a founder of a digital intelligence company[7]. We conducted a study showing that massive amounts of misinformation have been overlooked[8] by previous research. The biggest source of misinformation on Facebook is not links to fake news sites but something more basic: images. And a large portion of posted pictures are misleading.
For instance, on the eve of the 2020 election, nearly one out of every four political image posts on Facebook contained misinformation. Widely shared falsehoods included QAnon conspiracy theories, misleading statements about the Black Lives Matter movement and unfounded claims about Joe Biden’s son Hunter Biden.
Visual misinformation by the numbers
Our study is the first large-scale effort, on any social media platform, to measure the prevalence of image-based misinformation about U.S. politics. Image posts are important to study, in part because they are the most common type of post on Facebook, at roughly 40% of all posts[9].
Previous research suggests that images may be especially potent. Adding images to news stories can shift attitudes[10], and posts with images are more likely to be reshared[11]. Images have also been a longtime component of state-sponsored disinformation campaigns[12], like those of Russia’s Internet Research Agency.
We went big, collecting more than 13 million Facebook image posts from August through October 2020, from 25,000 pages and public groups. Audiences on Facebook are so concentrated that these pages and groups account for at least 94% of all engagement – likes, shares, reactions – for political image posts. We used facial recognition to identify public figures, and we tracked reposted images. We then classified large, random draws of images in our sample, as well as the most frequently reposted images.
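The article does not spell out the technical pipeline behind "tracked reposted images," but one common way to group near-identical copies of an image at this scale is perceptual hashing. Below is a minimal sketch of that idea in Python, assuming a local directory of downloaded images and using the third-party imagehash library; the function name, directory name and distance threshold are hypothetical illustrations, not the study's actual code.

```python
# Minimal sketch of repost tracking via perceptual hashing.
# The study does not describe its exact pipeline; this illustrates
# one common approach using the "imagehash" library.
from collections import defaultdict
from pathlib import Path

import imagehash          # pip install imagehash
from PIL import Image     # pip install Pillow


def track_reposts(image_dir: str, max_distance: int = 4) -> dict:
    """Group near-duplicate images by perceptual hash.

    Images whose pHash values differ by at most `max_distance`
    bits are treated as reposts of the same underlying image.
    """
    clusters: dict[imagehash.ImageHash, list[str]] = defaultdict(list)
    for path in Path(image_dir).glob("*.jpg"):
        h = imagehash.phash(Image.open(path))
        # Attach to an existing cluster if a near-duplicate hash exists;
        # otherwise this hash starts a new cluster.
        match = next((k for k in clusters if h - k <= max_distance), None)
        clusters[match if match is not None else h].append(str(path))
    return clusters


# Usage (hypothetical directory name):
# reposts = track_reposts("facebook_images/")
# most_reposted = max(reposts.values(), key=len)
```

Grouping by near-duplicate hashes rather than exact file matches matters because reposted images are often recompressed, resized or lightly cropped along the way.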
Overall, our findings are grim: 23% of image posts in our data contained misinformation. Consistent with previous work[13], we found that misinformation was unequally distributed along partisan lines. While only 5% of left-leaning posts contained misinformation, 39% of right-leaning posts did.
The misinformation we found on Facebook was highly repetitive and often simple. While there were plenty of images doctored in a misleading way, these were outnumbered by memes with misleading text, screenshots of fake posts from other platforms, and posts that took unaltered images and misrepresented them.
For example, a picture was repeatedly posted as “proof” that now-former Fox News anchor Chris Wallace was a close associate of sexual predator Jeffrey Epstein. In reality, the gray-haired man in the image is not Epstein but actor George Clooney.
There was one piece of good news. Some previous research[14] had found that misinformation posts generated more engagement than true posts. We did not find that. Controlling for page subscribers and group size, we found no relationship between engagement and the presence of misinformation. Misinformation didn’t guarantee virality – but it also didn’t diminish the chances that a post would go viral.
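To make the "controlling for page subscribers and group size" step concrete, here is a minimal sketch of one way such a check could be run, assuming post-level data and a simple OLS model fit with statsmodels. The column names and toy numbers are hypothetical, audience size is collapsed into a single column for brevity, and the article does not report the study's actual model specification.

```python
# Minimal sketch of testing whether misinformation predicts engagement
# while controlling for audience size. Column names and values are
# hypothetical; the study's real model specification is not given here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical post-level data: engagement counts, a misinformation
# flag, and the size of the page or group that posted the image
# (subscribers and group size collapsed into one column).
posts = pd.DataFrame({
    "engagement": [120, 45, 300, 80, 15, 230],
    "misinfo": [1, 0, 1, 0, 0, 1],
    "audience_size": [5000, 2000, 9000, 3500, 800, 7000],
})

# Regress log engagement on the misinformation flag, controlling for
# (log) audience size. A near-zero, non-significant coefficient on
# `misinfo` would mirror the article's null finding.
model = smf.ols(
    "np.log1p(engagement) ~ misinfo + np.log(audience_size)",
    data=posts,
).fit()
print(model.summary())
```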
But image posts on Facebook were toxic in ways that went beyond simple misinformation. We found countless images that were abusive, misogynistic or simply racist. Nancy Pelosi, Hillary Clinton, Maxine Waters, Kamala Harris and Michelle Obama were the most frequent targets of abuse. For example, one frequently reposted image labeled Kamala Harris a “‘high-end’ call girl.” In another, a photo of Michelle Obama was altered to make it appear that she had a penis.
Yawning gap in knowledge
Much more work remains to be done in understanding the role visual misinformation plays in the digital political landscape. While Facebook remains the most used social media platform, more than a billion images a day are posted on Facebook's sister platform Instagram, and billions more on rival Snapchat. Videos posted on YouTube, or on the more recent arrival TikTok, may also be an important vector of political misinformation about which researchers still know too little.
Perhaps the most disturbing finding of our study, then, is that it highlights the breadth of collective ignorance about misinformation on social media. Hundreds of studies have been published on the subject, but until now researchers have not understood the biggest source of misinformation on the largest social media platform. What else are we missing?
References
- ^ low (doi.org)
- ^ declined (doi.org)
- ^ over (doi.org)
- ^ time (doi.org)
- ^ communications researcher (scholar.google.com)
- ^ media and public affairs researcher (scholar.google.com)
- ^ founder of a digital intelligence company (towcenter.columbia.edu)
- ^ massive amounts of misinformation have been overlooked (doi.org)
- ^ at roughly 40% of all posts (academic.oup.com)
- ^ shift attitudes (doi.org)
- ^ more likely to be reshared (dx.doi.org)
- ^ state-sponsored disinformation campaigns (www.intelligence.senate.gov)
- ^ previous work (academic.oup.com)
- ^ research (dl.acm.org)