In the APSA Public Scholarship Program, graduate students in political science produce summaries of new research in the American Political Science Review. This piece, written by Dirck de Kleer, covers the new article by Franziska Pradel, Jan Zilinsky, Spyros Kosmidis, and Yannis Theocharis, “Toxic Speech and Limited Demand for Content Moderation on Social Media.”
In the 1927 case Whitney v. California, the US Supreme Court concluded that freedom of speech is not an absolute right. In his concurring opinion, however, Justice Brandeis argued that "the fitting remedy for evil counsels is good ones." Today, social media platforms face a similar tug-of-war between protection from harm and freedom of speech: when does speech on their platforms cross the line? And what is a fitting remedy when it does?
In a new APSR article, Franziska Pradel, Jan Zilinsky, Spyros Kosmidis, and Yannis Theocharis try to understand the limits of free speech from the perspective of social media users. While social media companies like Facebook and Twitter have established red lines regarding unacceptable content, we know far less about what social media users think. This perspective is crucial; not only are social media users subjected to online toxicity, but they also play a pivotal role in identifying and flagging objectionable content.
The authors, therefore, conducted several experiments to study how people respond to three forms of “toxic speech”: incivility, intolerance, and violent threats. Incivility can consist of a disrespectful tone, rudeness, or inconsiderate language. Intolerance aims to undermine or harm groups of people, for example, because of their race or religion. Violent threats can also be seen as intolerant rhetoric, but explicitly announce the intention of physical harm.
In these experiments, participants were randomly exposed to social media posts. Some of the participants were shown an uncivil post, an intolerant post, or a violent threat. Other participants saw a post without toxic language. All participants were then asked how they believed social media companies should handle the post they had just seen. This could range from doing nothing to suspending the person’s account.
There are two takeaways from these experiments. First, the authors find that incivility, intolerance, and violent threats elicit different user responses. This shows that these are in fact different forms of toxic speech. This matters because these and other conceptualizations of toxic speech are sometimes used interchangeably in the academic literature. Second, the authors show that support for content moderation of uncivil and intolerant content is rather low. Participants who were shown uncivil or intolerant posts were only somewhat more likely to support content moderation than the participants who saw a post without toxic language. Violent threats more clearly crossed a line. Participants who saw threatening posts were about 42 percentage points more likely to support some form of content moderation.
The targets of toxic speech matter too. For example, the authors find that support for content moderation of violent threats is much higher when LGBTQ people are targeted than when billionaires are targeted. They also examined whether support for content moderation differed across partisan lines. They found that respondents were largely consistent in their content moderation views, although Democrats were more likely than Republicans to demand content moderation.
The results of the study may raise concerns about the health of public conversations. Given that user support for content moderation is generally low, the study also raises questions about how social media companies should moderate content in the future and how to trade off protection from harm and freedom of speech.
- Dirck de Kleer is a PhD student in Social and Political Science at Bocconi University (Italy), where he studies political behavior and public opinion. His research focuses on understanding how citizens and politicians navigate the boundaries between moderate and extreme political attitudes and behaviors. In other work, he explores the implications of far-right parties in government. He holds an MA from Duke University, where he was a Fulbright Graduate Student (2018-2020).
- Article details: Pradel, Franziska, Jan Zilinsky, Spyros Kosmidis, and Yannis Theocharis. 2024. “Toxic Speech and Limited Demand for Content Moderation on Social Media.” American Political Science Review, 1–15.
- About the APSA Public Scholarship Program.