Elon Musk is wrong: research shows content rules on Twitter help preserve free speech from bots and other manipulation
- Written by Filippo Menczer, Professor of Informatics and Computer Science, Indiana University
Elon Musk’s accepted bid[1] to purchase Twitter has triggered a lot of debate about what it means for the future of the social media platform, which plays an important role in determining the news and information many people – especially Americans[2] – are exposed to.
Musk has said he wants to make Twitter an arena for free speech[3]. It’s not clear what that will mean, and his statements have fueled speculation among both supporters and detractors. As a corporation, Twitter can regulate speech on its platform as it chooses. There are bills being considered in the U.S. Congress[4] and by the European Union[5] that address social media regulation, but these are about transparency, accountability, illegal harmful content and protecting users’ rights, rather than regulating speech.
Musk’s calls for free speech on Twitter focus on two allegations: political bias[6] and excessive moderation[7]. As researchers of online misinformation and manipulation[8], my colleagues and I at the Indiana University Observatory on Social Media[9] study the dynamics and impact of Twitter and its abuse. To make sense of Musk’s statements and the possible outcomes of his acquisition, let’s look at what the research shows.
Political bias
Many conservative politicians[10] and pundits[11] have alleged[12] for years[13] that major social media platforms, including Twitter, have a liberal political bias[14] amounting to censorship of conservative opinions[15]. These claims are based on anecdotal evidence. For example, many partisans whose tweets were labeled as misleading and downranked, or whose accounts were suspended for violating the platform’s terms of service, claim that Twitter targeted them because of their political views.
Unfortunately, Twitter and other platforms often inconsistently enforce their policies[16], so it is easy to find examples supporting one conspiracy theory or another. A review by the Center for Business and Human Rights at New York University has found no reliable evidence[17] in support of the claim of anti-conservative bias by social media companies, even labeling the claim itself a form of disinformation.
A more direct evaluation of political bias by Twitter is difficult because of the complex interactions between people and algorithms. People, of course, have political biases. For example, our experiments with political social bots[18] revealed that Republican users are more likely to mistake conservative bots for humans, whereas Democratic users are more likely to mistake conservative human users for bots.
To remove human bias from the equation in our experiments, we deployed a set of benign social bots on Twitter. Each bot started by following a single news source, with some bots following a liberal outlet and others a conservative one. After that initial friend, all bots were left alone to “drift” in the information ecosystem for a few months. They could gain followers, and they all acted according to an identical algorithmic behavior: following or following back random accounts, tweeting meaningless content, and retweeting or copying random posts from their feeds.
This behavior was politically neutral, with no understanding of the content the bots saw or posted. We then tracked the bots to probe any political biases emerging from how Twitter works or from how users interacted with them.
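The drifter bots' neutral behavior loop can be sketched as follows. This is a hypothetical illustration based on the article's description, not the study's actual code; the action names and probabilities are assumptions.

```python
import random

# Hypothetical sketch of a politically neutral "drifter" bot's activity
# cycle, as described in the article: each action is chosen at random,
# with no understanding of the content seen or posted.

ACTIONS = [
    "follow_random",    # follow a random account
    "follow_back",      # follow back a random new follower
    "tweet_noise",      # tweet meaningless content
    "retweet_random",   # retweet a random post from the feed
    "copy_random",      # copy a random post from the feed
]

def drifter_step(rng: random.Random) -> str:
    """Pick one content-blind action for this activity cycle."""
    return rng.choice(ACTIONS)

def simulate_drift(seed: int = 0, steps: int = 1000) -> dict:
    """Run a bot for `steps` cycles and tally how often each action fires."""
    rng = random.Random(seed)
    counts = {action: 0 for action in ACTIONS}
    for _ in range(steps):
        counts[drifter_step(rng)] += 1
    return counts

counts = simulate_drift()
assert sum(counts.values()) == 1000  # every cycle produced exactly one action
```

Because the loop carries no political signal of its own, any partisan drift the bots exhibit must come from the platform and the users they interact with, which is the point of the experimental design.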
Surprisingly, our research provided evidence that Twitter has a conservative, rather than a liberal, bias[20]. On average, accounts were drawn toward the conservative side. Liberal accounts were exposed to moderate content, which shifted their experience toward the political center, while the interactions of right-leaning accounts were skewed toward posting conservative content. Accounts that followed conservative news sources also received more politically aligned followers, becoming embedded in denser echo chambers and gaining influence within those partisan communities.
These differences in experiences and actions can be attributed to interactions with users and information mediated by the social media platform. But we could not directly examine the possible bias in Twitter’s news feed algorithm, because the actual ranking of posts in the “home timeline” is not available to outside researchers.
Researchers from Twitter, however, were able to audit the effects of their ranking algorithm on political content, unveiling that the political right enjoys higher amplification[21] compared to the political left. Their experiment showed that in six out of seven countries studied, conservative politicians enjoy higher algorithmic amplification than liberal ones. They also found that algorithmic amplification favors right-leaning news sources in the U.S.
Our research and Twitter’s own research both indicate that Musk’s apparent concern about bias[22] against conservatives on Twitter is unfounded.
Referees or censors?
The other allegation that Musk seems to be making is that excessive moderation stifles free speech on Twitter. The concept of a free marketplace of ideas is rooted in John Milton’s centuries-old reasoning that truth prevails in a free and open exchange of ideas. This view is often cited as the basis for arguments against moderation: accurate, relevant, timely information should emerge spontaneously from the interactions among users.
Unfortunately, several aspects of modern social media[23] hinder the free marketplace of ideas. Limited attention[24] and confirmation bias[25] increase vulnerability to misinformation. Engagement-based ranking[26] can amplify noise and manipulation, and the structure of information networks can distort perceptions[27] and be “gerrymandered” to favor one group[28].
As a result, social media users have in past years become victims of manipulation by “astroturf” causes[29], trolling[30] and misinformation[31]. Abuse is facilitated by social bots[32] and coordinated networks[33] that create the appearance of human crowds.
We and other researchers have observed these inauthentic accounts amplifying disinformation[34], influencing elections[35], committing financial fraud[36], infiltrating vulnerable communities[37] and disrupting communication[38]. Musk has tweeted that he wants to defeat spam bots and authenticate humans[39], but these are neither easy nor necessarily effective solutions.
Inauthentic accounts are used for malicious purposes beyond spam[40] and are hard to detect[41], especially when they are operated by people in conjunction with software algorithms. And removing anonymity may harm vulnerable groups[42]. In recent years, Twitter has enacted policies and systems to moderate abuses by aggressively suspending accounts and networks displaying inauthentic coordinated behaviors. A weakening of these moderation policies may make abuse rampant again.
Manipulating Twitter
Despite Twitter’s recent progress, integrity is still a challenge on the platform. Our lab is finding new types of sophisticated manipulation, which we will present at the International AAAI Conference on Web and Social Media[43] in June. Malicious users exploit so-called “follow trains[44]” – groups of people who follow each other on Twitter – to rapidly boost their followers and create large, dense hyperpartisan echo chambers[45] that amplify toxic content from low-credibility and conspiratorial sources.
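One signal that distinguishes a follow train from an organic community is how densely its members follow one another. The sketch below illustrates that density measure; it is a simplified assumption for exposition, not the detection method from our ICWSM paper.

```python
# Hypothetical density signal behind "follow trains": members of a train
# follow each other reciprocally, so the follow graph inside the group is
# far denser than an organic community's. Illustrative only.

def follow_density(members: set, follows: set) -> float:
    """Fraction of possible directed follow edges present within a group.

    members: set of account ids
    follows: set of (follower, followee) pairs
    """
    n = len(members)
    possible = n * (n - 1)  # every ordered pair of distinct members
    internal = sum(1 for a, b in follows if a in members and b in members)
    return internal / possible if possible else 0.0

train = {"a", "b", "c"}
# Everyone follows everyone else: density 1.0, a red flag for a train.
edges = {(x, y) for x in train for y in train if x != y}
assert follow_density(train, edges) == 1.0
```

An organic community of the same size would typically show a much lower internal density, since most users do not reciprocally follow every other member.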
Another effective malicious technique is to post and then strategically delete content that violates platform terms[46] after it has served its purpose. Even Twitter’s high limit of 2,400 tweets per day can be circumvented through deletions: We identified many accounts that flood the network with tens of thousands of tweets per day.
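A simple way to see why deletions defeat a per-day posting cap is to compare an account's total posting volume with what remains visible. The heuristic below is a hypothetical illustration, not Twitter's actual detection logic; the 90% churn threshold is an assumption chosen for the example.

```python
# Hypothetical heuristic: flag accounts whose total posting volume
# (including deleted tweets) far exceeds their visible output,
# suggesting post-and-delete flooding. Not Twitter's real system.

DAILY_TWEET_LIMIT = 2400  # Twitter's stated per-day cap, per the article

def flags_flooding(posted_per_day: int, visible_per_day: int,
                   limit: int = DAILY_TWEET_LIMIT) -> bool:
    """True if an account posts above the cap while deleting most of it."""
    if posted_per_day == 0:
        return False
    deleted = posted_per_day - visible_per_day
    return posted_per_day > limit and deleted / posted_per_day > 0.9

# An account posting 30,000 tweets a day but keeping only a few hundred
# visible is flagged; a heavy but honest poster is not.
assert flags_flooding(30000, 300)
assert not flags_flooding(1500, 1400)
```

The key observation is that moderation systems keyed only to visible timelines never see the deleted volume, which is exactly what makes the post-and-delete pattern effective.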
We also found coordinated networks that engage in repetitive likes and unlikes of content that is eventually deleted, which can manipulate ranking algorithms. These techniques enable malicious users to inflate content popularity while evading detection.
Musk’s plans for Twitter are unlikely to do anything about these manipulative behaviors.
Content moderation and free speech
Musk’s likely acquisition of Twitter raises concerns that the social media platform could decrease its content moderation. This body of research shows that stronger, not weaker, moderation of the information ecosystem is called for to combat harmful misinformation.
It also shows that weaker moderation policies would ironically hurt free speech: The voices of real users would be drowned out by malicious users who manipulate Twitter through inauthentic accounts, bots and echo chambers.
References
- ^ accepted bid (www.cnn.com)
- ^ especially Americans (www.pewresearch.org)
- ^ make Twitter an arena for free speech (twitter.com)
- ^ U.S. Congress (cdt.org)
- ^ European Union (www.washingtonpost.com)
- ^ political bias (twitter.com)
- ^ excessive moderation (twitter.com)
- ^ researchers of online misinformation and manipulation (scholar.google.com)
- ^ Indiana University Observatory on Social Media (osome.iu.edu)
- ^ conservative politicians (thehill.com)
- ^ pundits (twitter.com)
- ^ alleged (www.nytimes.com)
- ^ for years (www.cjr.org)
- ^ liberal political bias (www.pewresearch.org)
- ^ censorship of conservative opinions (www.washingtonpost.com)
- ^ inconsistently enforce their policies (www.theatlantic.com)
- ^ no reliable evidence (www.stern.nyu.edu)
- ^ our experiments with political social bots (doi.org)
- ^ CC BY-ND (creativecommons.org)
- ^ evidence that Twitter has a conservative, rather than a liberal bias (doi.org)
- ^ the political right enjoys higher amplification (doi.org)
- ^ apparent concern about bias (twitter.com)
- ^ several aspects of modern social media (www.scientificamerican.com)
- ^ Limited attention (dx.doi.org)
- ^ confirmation bias (doi.org)
- ^ Engagement-based ranking (doi.org)
- ^ distort perceptions (doi.org)
- ^ “gerrymandered” to favor one group (doi.org)
- ^ “astroturf” causes (ojs.aaai.org)
- ^ trolling (doi.org)
- ^ misinformation (doi.org)
- ^ social bots (doi.org)
- ^ coordinated networks (ojs.aaai.org)
- ^ amplifying disinformation (doi.org)
- ^ influencing elections (doi.org)
- ^ committing financial fraud (doi.org)
- ^ infiltrating vulnerable communities (doi.org)
- ^ disrupting communication (doi.org)
- ^ defeat spam bots and authenticate humans (twitter.com)
- ^ purposes beyond spam (www.snopes.com)
- ^ hard to detect (cacm.acm.org)
- ^ harm vulnerable groups (theconversation.com)
- ^ International AAAI Conference on Web and Social Media (www.icwsm.org)
- ^ follow trains (www.followchain.org)
- ^ create large, dense hyperpartisan echo chambers (arxiv.org)
- ^ strategically delete content that violates platform terms (arxiv.org)