
Unlike in 2016, there was no spike in misinformation this election cycle

Written by Paul Resnick, Professor of Information, University of Michigan

A newsy photo of a public figure shows up on your social media feed, with a clickbait-y headline and a provocative comment, all linking to a site with juicy political content. Did you share it?

Somebody did.

It wasn’t a paid ad, or even recommended-for-you content – it was shared by someone you know. The link didn’t take you to InfoWars or Occupy Democrats – you would’ve noticed that. Maybe it went to Western Journal or another unfamiliar domain whose name sounds legit. Did you comment on it or retweet it?

A lot of somebodies did. Often without even reading it[1].

State-sponsored cyberwarriors[2] and deep-pocketed influence campaigns spread plausible misinformation – what I like to call “iffy” content – as a cost-effective way to advance their social or political cause. Others spread misinformation just to earn ad revenue[3].

Meanwhile, the big social media platforms struggle to implement fair editorial practices – disclosures and demotions, blocks and bans – to attenuate the spread of misinformation rather than amplify it.

How well have Facebook and Twitter done? Are they helping iffy content reach large audiences? At the University of Michigan Center for Social Media Responsibility, we have started keeping score, going back to early 2016.

We compute a daily “Iffy Quotient”: the fraction of each platform’s 5,000 most popular URLs that come from sites on a large list of frequent originators of misinformation and hoaxes, maintained by Media Bias/Fact Check[4]. The Iffy Quotient is a way for the public to track the platforms’ progress – or lack thereof.
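As a rough illustration, a minimal sketch of that daily computation might look like the following Python, assuming you already have a day’s most popular URLs and a set of domains drawn from the iffy-sites list. The function name and the sample data are hypothetical, not the Center’s actual code:

    from urllib.parse import urlparse

    def iffy_quotient(popular_urls, iffy_domains):
        """Fraction of popular URLs whose source domain is on the iffy-sites list."""
        def domain_of(url):
            # Normalize: lowercase the host and strip a leading "www."
            host = urlparse(url).netloc.lower()
            return host[4:] if host.startswith("www.") else host

        hits = sum(1 for url in popular_urls if domain_of(url) in iffy_domains)
        return hits / len(popular_urls) if popular_urls else 0.0

    # Made-up example: two of the four "popular" URLs come from listed domains.
    urls = [
        "https://www.example-news.com/story",
        "https://iffy-site.example/scoop",
        "https://another-iffy.example/clickbait",
        "https://www.legit-paper.example/report",
    ]
    iffy_list = {"iffy-site.example", "another-iffy.example"}
    print(iffy_quotient(urls, iffy_list))  # 0.5

In spirit, the score is just that matching fraction, recomputed each day for each platform.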

Measuring iffy content

We saw a major uptick in the run-up to the 2016 U.S. presidential election. Iffy content approximately doubled from January to November.

Engagement with iffy content fell off precipitously after the election. Questionable content peaked again in February 2017, tracking public dialogue over the presidential transition and early executive orders.

Twitter did a better job than Facebook of not amplifying iffy content going into 2017, then Facebook started to improve. By the middle of 2018, Facebook’s Iffy Quotient was lower than it had been in mid-2016, and most days it was lower than Twitter’s.

Why did things get bad back in 2016? One reason for the uptick is that users are more politically activated during an election cycle. That boosts interest in political news – especially in sensational political news. Supply rises to meet that demand – from legitimate sources, but also from propagandists and from opportunists seeking ad revenue.

Assuming that the publishers and disseminators of misinformation are as competent and motivated in 2018 as they were in 2016, we expected the Iffy Quotient to spike in September and October. But it didn’t.

What’s different? We can’t tell for sure. Perhaps the suppliers of such content lost interest. That seems unlikely. Perhaps the American public got more sophisticated and is less prone to click on or share links to iffy sites. Sadly, that also seems unlikely, though it is a nice long-term aspiration.

The most important difference is probably countermeasures taken by the platforms. Twitter executive Colin Crowell wrote on the company blog[5] in 2017, “We’re working hard to detect spammy behaviors at source, such as the mass distribution of Tweets or attempts to manipulate trending topics.” Fake accounts can be used to make content look more popular than it really is, leading the platforms to show the content to more people. Weeding out accounts that engage in such behavior reduces the opportunities for such manipulation.

Facebook has also actively tried to reduce manipulation opportunities by removing fake accounts[6] – 583 million of them in the first quarter of 2018. In addition, in December 2016, Facebook announced a partnership with third-party fact-checkers[7], sending them questionable stories and demoting in the feed those the fact-checkers labeled as false.

On Jan. 11, 2018, Facebook announced that it would reduce the reach of all public external content[8], in favor of native posts from friends and family. On its own, that wouldn’t affect the Iffy Quotient, which is based on whatever public content is most popular. However, that announcement and one the following week[9] also implied other changes that might have affected the Iffy Quotient. One was prioritizing content around which people interacted with friends; it could be that people interact less around content from iffy sites. Another was prioritizing news that the community rates as trustworthy, that people find informative and that is local.

Holding platforms accountable

Media companies already maintain internal suites of metrics, such as monthly page views, clickthrough rates, dwell times and ad revenue. These metrics strongly influence decisions about changes to products and policies. Typically, product managers are rewarded for improving some primary metric, subject to the constraint that there is at most a modest decline in other metrics.
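To make that decision rule concrete, here is a hypothetical guardrail check in Python. The metric names, the numbers and the 2 percent threshold are invented for the example, not drawn from any platform’s actual practice:

    def approve_launch(before, after, primary="clickthrough_rate",
                       max_guardrail_drop=0.02):
        """Approve a change only if the primary metric improves and no
        other metric falls by more than the allowed fraction (guardrail)."""
        if after[primary] <= before[primary]:
            return False
        for name, old in before.items():
            if name != primary and after[name] < old * (1 - max_guardrail_drop):
                return False
        return True

    before = {"clickthrough_rate": 0.041, "dwell_time_s": 62.0, "page_views": 1.80e9}
    after  = {"clickthrough_rate": 0.044, "dwell_time_s": 61.5, "page_views": 1.79e9}
    print(approve_launch(before, after))  # True: primary up, guardrails within 2%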

Externally maintained metrics, like our Iffy Quotient, offer two advantages over internal metrics maintained by the platforms. First, they can draw attention to issues that platforms may either not be tracking themselves or not prioritizing as much as the public would like. This form of public accountability focuses attention on the overall performance of platforms rather than on bad outcomes in individual cases; some bad outcomes may be inevitable given the scale on which the platforms operate.

Second, external metrics can create public legitimacy for claims that platforms make about how well they are meeting public responsibilities. Even if Facebook actually reduces the audience share for iffy content, the public may be skeptical if Facebook defines the metric, conducts the measurement without audit and chooses whether to report it.

In the 2016 election season, Twitter and especially Facebook performed poorly, amplifying a lot of misinformation. In the 2018 cycle, Facebook has performed somewhat better, but Twitter needs to up its game.

Facebook, I salute you. For now. But we’ll keep watching, and you can, too[10].

References

  1. without even reading it (www.facebook.com)
  2. State-sponsored cyberwarriors (www.nbcnews.com)
  3. earn ad revenue (www.wired.com)
  4. Media Bias/Fact Check (mediabiasfactcheck.com)
  5. company blog (blog.twitter.com)
  6. removing fake accounts (newsroom.fb.com)
  7. partnership with third-party fact-checkers (newsroom.fb.com)
  8. reduce the reach of all public external content (newsroom.fb.com)
  9. one the following week (newsroom.fb.com)
  10. you can, too (csmr.umich.edu)

Read more http://theconversation.com/unlike-in-2016-there-was-no-spike-in-misinformation-this-election-cycle-105946

Metropolitan republishes selected articles from The Conversation USA with permission
