In the APSA Public Scholarship Program, graduate students in political science produce summaries of new research in the American Political Science Review. This piece, written by Leah Costik, covers the new article by Denis Stukal, HSE University, Sergey Sanovich, Princeton University, Richard Bonneau and Joshua A. Tucker, New York University, “Why Botter: How Pro-Government Bots Fight Opposition in Russia.”
As the Russian invasion of Ukraine continues to unfold, social media platforms are clamping down on Russian state-owned media, a key lever the Kremlin uses to spread propaganda and disinformation. The survival of non-democratic regimes depends in part on their ability to manage the information environment in this way. Social media has become a key ingredient in autocrats’ toolkits for responding to online opposition, including through the use of trolls and automated bot accounts. But what are bots? What work do they do? And how might authoritarian regimes use them? In their new article, authors Stukal, Sanovich, Bonneau, and Tucker explore these pressing questions through an investigation of the use of pro-government Twitter bots within Russia during times of both offline and online political protest.
While current research explores the use of human trolls by authoritarian regimes, much less work examines bots, or algorithmically controlled social media accounts. Stukal et al. argue that bots offer a number of benefits over other “digital information manipulation tools.” Bots are inexpensive, difficult to trace, can be deployed in large numbers, do not require human intervention, and can run online for indefinite periods of time. The authors focus specifically on Twitter bots: algorithmically controlled accounts that can automatically perform many of the same actions as a normal (human) user, including posting, retweeting, responding, and liking posts.
Authoritarian regimes can use Twitter bots for a variety of reasons: (1) bots may be used to show support for controversial governmental programs or candidates hoping for reelection; (2) regional governors are encouraged by the Kremlin to use social media, but public employees “often lack the necessary skills for effective social media communication and rely on bots to artificially inflate relevant activity indicators”; and (3) non-government actors, such as businessmen, may also use bots to signal support for politicians in hopes of receiving perks or payoffs. In their article, Stukal et al. remain agnostic about the reasons people may use Twitter bots and assume only that government agencies and non-governmental actors alike use Twitter bots to maximize the benefits they offer.
The authors theorize that in a competitive authoritarian environment, Twitter bots could be used in an attempt to alter the cost-benefit analysis of participating in opposition movements, either online or offline. The authors use two theoretical frameworks. First, they theorize that Twitter bots could be used “to reduce participation in offline protests.” Second, they theorize that Twitter bots could be used to “control the online agenda… and will be mobilized in response to opposition online activity.” Twitter bots may use the same tactics to achieve these different goals. From these theoretical frameworks, the authors derive four strategies Twitter bots might use.
The first strategy available to Twitter bots involves de-emphasizing a protest-related agenda by increasing the frequency with which they post content (“volume amplification”). Similarly, the second strategy is to distract social media users by increasing their retweeting of diverse accounts (“retweet diversity”). The third strategy involves decreasing opposition supporters’ expected benefits by tweeting pro-government posts about Vladimir Putin. The logic behind this “cheerleading” is to make Putin appear more popular, which may lead potential protestors to believe their protest is less likely to succeed. A fourth and final strategy available to Twitter bots involves “increasing the expected costs of supporting opposition” through trolling and harassment. This strategy includes “negative campaigning,” measured by the number of tweets pro-government Twitter bots produce that mention Alexey Navalny, a charismatic and prominent Russian opposition leader.
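One plausible way to make the “retweet diversity” idea concrete is to score how evenly a bot’s retweets are spread across different accounts, for example with Shannon entropy. This is only an illustrative sketch; the paper’s exact operationalization of retweet diversity may differ.

```python
import math
from collections import Counter

def retweet_diversity(retweeted_accounts):
    """Shannon entropy (in bits) of the accounts a bot retweets.

    Higher values mean retweets are spread across many different
    accounts; repeatedly retweeting one account scores 0.
    (Illustrative measure; the paper's exact metric may differ.)
    """
    counts = Counter(retweeted_accounts)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A bot that retweets a single account scores 0 (no diversity) ...
focused = retweet_diversity(["@some_account"] * 10)
# ... while spreading retweets evenly over 10 accounts scores log2(10).
diverse = retweet_diversity([f"@acct{i}" for i in range(10)])
```

Under this sketch, a distraction-oriented bot would show rising entropy during opposition activity, since it retweets an increasingly varied set of accounts.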
The authors use machine learning to detect bots on Russian political Twitter and identify 1,516 pro-government Twitter bots responsible for over one million tweets. The authors then identify both offline protests and online opposition activity. Offline protests were identified in a three-step process: using the Integrated Crisis Early Warning System, a project that “automatically extracts information from news articles,” to generate a list of offline protests; manually searching for mentions of protests in both English- and Russian-language mass media; and cross-checking the data against three other protest datasets. Stukal et al. identify online opposition activity through spikes, or “a day with at least five times as many tweets from opposition accounts as they posted on a median day a month before and after that day,” within the tweets of 15 activists, independent journalists, and mass media outlets that report favorably and extensively on the Russian opposition.
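The quoted spike rule can be sketched as a simple check of one day’s tweet count against five times the median daily count from the surrounding window. The exact window construction here is an assumption for illustration, not the authors’ implementation.

```python
from statistics import median

def is_spike(day_count, surrounding_daily_counts, factor=5):
    """Flag a day as an online-activity spike if the day's tweet count
    is at least `factor` times the median daily count over the
    surrounding window (a sketch of the paper's 5x-median rule; the
    precise window handling is an assumption)."""
    baseline = median(surrounding_daily_counts)
    return day_count >= factor * baseline

# Example: an opposition account that normally posts about 4 tweets a day.
typical_days = [3, 4, 4, 5, 4, 3, 5]  # median = 4
is_spike(25, typical_days)  # 25 >= 5 * 4 -> True
is_spike(12, typical_days)  # 12 <  5 * 4 -> False
```

Applying such a rule day by day to each of the 15 monitored opposition accounts would yield the set of online-activity spikes against which bot behavior is compared.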
To measure the effect that spikes in online opposition activity and offline protests have on Twitter bot strategies, Stukal et al. use various statistical analyses. They find that while their hypotheses regarding the negative campaigning strategy were rejected and the results for “cheerleading” were mixed, their hypotheses regarding the volume and retweet diversity dimensions were confirmed. In other words, Twitter bots do increase their activity, and retweet a wide variety of accounts, in an attempt to de-emphasize a protest-related agenda. Intriguingly, the authors find that bots are used more often in response to online opposition activity than to offline protests.
Stukal, Sanovich, Bonneau, and Tucker’s research offers several valuable contributions in answering questions related to the use of social media in competitive authoritarian contexts. First, they bridge the gap between diverse bodies of scholarship, including computer science research on bot detection and political science research on authoritarian politics. Second, they develop testable hypotheses about the ways in which Twitter bots may be employed to “counter domestic opposition activity either online or offline.” Third, Stukal et al. demonstrate that some previous research on human trolls does not carry over for bots, especially Twitter bots. Most critically, the authors contribute to and advance research on the tools non-democratic regimes have at their disposal to undermine opposition.
- Leah Costik is a third-year Ph.D. student in the Department of Political Science with a minor in Global Public Health at the University of Minnesota, Twin Cities. Her research interests include rebel governance, public health, and civil war.
- Stukal, Denis, Sergey Sanovich, Richard Bonneau, and Joshua A. Tucker. 2022. “Why Botter: How Pro-Government Bots Fight Opposition in Russia.” American Political Science Review: 1–15.
- About the APSA Public Scholarship Program.