arXiv (Cornell University), 2022
Twitter is one of the most popular online micro-blogging and social networking platforms. This platform allows individuals to freely express opinions and interact with others regardless of geographic barriers. However, with the good that online platforms offer also comes the bad. Twitter and other social networking platforms have created new spaces for incivility. With the growing interest in the consequences of uncivil behavior online, understanding how a toxic comment impacts online interactions is imperative. We analyze a random sample of more than 85,300 Twitter conversations to examine differences between toxic and non-toxic conversations and the relationship between toxicity and user engagement. We find that toxic conversations, those with at least one toxic tweet, are longer but have fewer individual users contributing to the dialogue compared to non-toxic conversations. However, within toxic conversations, toxicity is positively associated with more individual Twitter users participating in conversations. This suggests that, overall, more visible conversations are more likely to include toxic replies. Additionally, we examine the sequencing of toxic tweets and its impact on conversations. Toxic tweets often occur as the main tweet or as the first reply, and lead to greater overall conversation toxicity. We also find a relationship between the toxicity of the first reply to a toxic tweet and the toxicity of the conversation, such that whether the first reply is toxic or non-toxic sets the stage for the overall toxicity of the conversation, following the idea that hate can beget hate. CCS Concepts: • Information systems → Social networking sites.
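To make the conversation-level measures above concrete (length, unique participants, and the position of the first toxic tweet), here is a minimal sketch in Python; the data layout and the per-tweet toxicity flags are our assumptions, not the authors' actual pipeline.

# Sketch: conversation-level toxicity and engagement metrics.
# Assumes each conversation is a chronologically ordered list of
# (user_id, is_toxic) pairs, with index 0 being the root tweet;
# this layout is an illustrative assumption.

def conversation_metrics(tweets):
    """tweets: list of (user_id, is_toxic) tuples in reply order."""
    length = len(tweets)
    unique_users = len({user for user, _ in tweets})
    is_toxic_conv = any(toxic for _, toxic in tweets)  # >= 1 toxic tweet
    # Position of the first toxic tweet (0 = root, 1 = first reply, ...).
    first_toxic_pos = next(
        (i for i, (_, toxic) in enumerate(tweets) if toxic), None
    )
    return {
        "length": length,
        "unique_users": unique_users,
        "toxic": is_toxic_conv,
        "first_toxic_position": first_toxic_pos,
    }

example = [("a", False), ("b", True), ("a", False), ("c", True)]
print(conversation_metrics(example))
# {'length': 4, 'unique_users': 3, 'toxic': True, 'first_toxic_position': 1}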
2021
Threat moderation on social media has been subject to much public debate and criticism, especially for its broadly permissive approach. In this paper, we focus on Twitter's Violent Threats policy, highlighting its shortcomings by comparing it to linguistic and legal threat assessment frameworks. Specifically, we foreground the importance of accounting for the lived experiences of harassment (how people perceive and react to a tweet), a measure largely disregarded by Twitter's Violent Threats policy but a core part of linguistic and legal threat assessment frameworks. To illustrate this, we examine three tweets by drawing upon these frameworks. These tweets showcase the racist, sexist, and abusive language used in threats towards those who have been marginalized. Through our analysis, we highlight how content moderation policies, despite their stated goal of promoting free speech, in effect work to inhibit it by fostering a toxic online environment that precipitates self-censorship in fear of violence and retaliation. In doing so, we make a case for technology designers and policy makers working in the sphere of content moderation to craft approaches that incorporate the various nuanced dimensions of threat assessment toward a more inclusive and open environment for online discourse. CONTENT WARNING: This paper contains strong and violent language. Please use discretion when reading, printing, or recommending this paper. CCS CONCEPTS • Human-centered computing → Collaborative and social computing; Collaborative and social computing theory, concepts and paradigms; Social media; • Social and professional topics → Computing/technology policy; Censorship; Hate Speech.
The American Political Science Review, 2024
When is speech on social media toxic enough to warrant content moderation? Platforms impose limits on what can be posted online, but also rely on users' reports of potentially harmful content. Yet we know little about what users consider inadmissible to public discourse and what measures they wish to see implemented. Building on past work, we conceptualize three variants of toxic speech: incivility, intolerance, and violent threats. We present results from two studies with pre-registered randomized experiments (Study 1, N = 5,130; Study 2, N = 3,734) to examine how these variants causally affect users' content moderation preferences. We find that while both the severity of toxicity and the target of the attack matter, the demand for content moderation of toxic speech is limited. We discuss implications for the study of toxicity and content moderation as an emerging area of research in political science with critical implications for platforms, policymakers, and democracy more broadly.
https://www.rte.ie/eile/brainstorm/2019/0308/1035074-the-toxic-world-of-online-comments-and-social-media-posts/, 2019
Opinion: new research shows the extent to which organised groups are turning online comments and social media into theatres of war. "Pro-Trump bots contributed at least five times more online messages than pro-Clinton ones during the last US general election."
2020
The convenience of social media has also enabled its misuse, potentially resulting in toxic behavior. Nearly 66% of internet users have observed online harassment, and 41% claim personal experience, with 18% facing severe forms of online harassment. This toxic communication has a significant impact on the well-being of young individuals, affecting mental health and, in some cases, resulting in suicide. These communications exhibit complex linguistic and contextual characteristics, making recognition of such narratives challenging. In this paper, we provide a multimodal dataset of toxic social media interactions between confirmed high school students, called ALONE (AdoLescents ON twittEr), along with descriptive explanations. Each instance of interaction includes tweets, images, emoji and related metadata. Our observations show that individual tweets do not provide sufficient evidence for toxic behavior, and meaningful use of context in interactions can enable highlighting or exonerating tweets with purported toxicity.
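As a rough illustration of the instance structure described above (tweets plus images, emoji, and metadata per interaction), a minimal sketch follows; all field names are hypothetical, not the dataset's released schema.

# Sketch of one ALONE-style interaction instance: tweets plus images,
# emoji, and metadata. Field names are hypothetical, not the real schema.
from dataclasses import dataclass, field

@dataclass
class Tweet:
    author_id: str
    text: str
    emoji: list[str] = field(default_factory=list)
    image_urls: list[str] = field(default_factory=list)
    metadata: dict = field(default_factory=dict)  # timestamps, reply ids, ...

@dataclass
class Interaction:
    """An exchange between two students, stored as a tweet sequence."""
    participants: tuple[str, str]
    tweets: list[Tweet]

    def has_context(self) -> bool:
        # The paper argues single tweets are insufficient evidence; toxicity
        # judgments should draw on the surrounding interaction.
        return len(self.tweets) > 1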
Social Media and Society, 2016
On the sixth anniversary of Twitter (March 2012), the late-night talk show Jimmy Kimmel Live aired the first installment of its popular segment "Mean Tweets." In this piece, an assortment of celebrities read some of the most egregious insults Twitter users have directed at them, often riddled with expletives, name-calling, and accusations. While this segment may be humorous to watch, it also illustrates an important point about how social media platforms such as Twitter and other online discussion environments can sometimes elicit hostile communication. Online social networks like Twitter essentially give the community at large the power to direct insults at whichever users they please, from a distance and with little fear of retaliation or punishment. Especially with the widespread adoption of mobile phones and the availability of data-enabled cellular networks, this form of online-mediated bullying can take place at any time or place at the whim of the connected user, who may be on either a genuine or an anonymized pseudo-account. Of course, since the time when people first started communicating online, there has been an ongoing debate over the capacity for digital political communication to become hostile and to polarize or silence participants.
arXiv, 2021
The proliferation of harmful and offensive content is a problem that many online platforms face today. One of the most common approaches for moderating offensive content online is its identification and removal after it has been posted, increasingly assisted by machine learning algorithms. More recently, platforms have begun employing moderation approaches which seek to intervene before offensive content is posted. In this paper, we conduct an online randomized controlled experiment on Twitter to evaluate a new intervention that aims to encourage participants to reconsider their offensive content and, ultimately, seeks to reduce the amount of offensive content on the platform. The intervention prompts users who are about to post harmful content with an opportunity to pause and reconsider their Tweet. We find that users in our treatment prompted with this intervention posted 6% fewer offensive Tweets than non-prompted users in our control. This decrease in the creation of offensive content…
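For readers who want the arithmetic behind a reported relative reduction like the 6% above, a minimal sketch follows; the counts are invented for illustration, and the two-proportion test is our choice, not necessarily the paper's actual analysis.

# Sketch: relative reduction in offensive tweets, treatment vs. control.
# Counts are invented for illustration; the paper's analysis may differ.
from statsmodels.stats.proportion import proportions_ztest

offensive = [470, 500]    # offensive tweets posted: [treatment, control]
total = [10_000, 10_000]  # all tweets posted in each arm

rate_t, rate_c = offensive[0] / total[0], offensive[1] / total[1]
relative_reduction = 1 - rate_t / rate_c
print(f"relative reduction: {relative_reduction:.1%}")  # 6.0%

# Two-proportion z-test for a difference in offensive-tweet rates.
stat, pvalue = proportions_ztest(offensive, total)
print(f"z = {stat:.2f}, p = {pvalue:.3f}")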
First Monday, 2022
Intolerant versus uncivil: Examining types, directions and deliberative attributes of incivility on Facebook versus Twitter
This study was an attempt to understand incivility, intolerance, and deliberative attributes on social media. Instead of solely focusing on incivility, this study distinguished incivility from intolerance and examined these two concepts in the context of public comments on two social media platforms. More specifically, in the study, we conducted content analyses to examine whether uncivil and intolerant comments vary based on platforms and topic sensitivity, as well as the relationship between uncivil and intolerant discourse and deliberative attributes. The results revealed that incivility occurred on both platforms but that a significant difference existed between Facebook and Twitter in terms of intolerant comments. The results also showed a positive relationship between topic sensitivity and intolerance. Finally, we found that Facebook discussions were 46 percent more likely than Twitter discussions to contain deliberative comments.
Neurocomputing, 2021
Online platforms have become an increasingly prominent means of communication. Despite the obvious benefits of expanded content distribution, the last decade has also seen a rise in disturbing toxic communication, such as cyberbullying and harassment. Nevertheless, detecting online toxicity is challenging due to its multi-dimensional, context-sensitive nature. As exposure to online toxicity can have serious social consequences, reliable models and algorithms are required for detecting and analyzing such communication across the vast and growing space of social media. In this paper, we draw on psychological and social theory to define toxicity. Then, we provide an approach that identifies multiple dimensions of toxicity and incorporates explicit knowledge into a statistical learning algorithm to resolve ambiguity across such dimensions.
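The abstract gives no implementation details, so the following is only a loose sketch of one way to combine explicit lexicon knowledge with a statistical learner for multi-dimensional toxicity; the dimensions, lexicons, and classifier are all our assumptions, not the paper's method.

# Sketch: multi-label toxicity classification mixing a statistical text
# model with explicit lexicon knowledge. Everything here is illustrative.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

DIMENSIONS = ["harassment", "hate_speech", "cyberbullying"]  # hypothetical
LEXICONS = {  # toy "explicit knowledge"; real work would use curated lists
    "harassment": {"idiot", "loser"},
    "hate_speech": {"vermin"},
    "cyberbullying": {"nobody likes you"},
}

def lexicon_features(texts):
    # One count per dimension: how many lexicon cues the text contains.
    rows = []
    for t in texts:
        low = t.lower()
        rows.append([sum(cue in low for cue in LEXICONS[d]) for d in DIMENSIONS])
    return np.array(rows, dtype=float)

texts = [
    "you are an idiot and a loser",
    "they are vermin",
    "nobody likes you",
    "have a nice day",
]
labels = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]])  # toy targets

tfidf = TfidfVectorizer()
X = np.hstack([tfidf.fit_transform(texts).toarray(), lexicon_features(texts)])
clf = OneVsRestClassifier(LogisticRegression()).fit(X, labels)

test = ["what an idiot"]
X_test = np.hstack([tfidf.transform(test).toarray(), lexicon_features(test)])
print(dict(zip(DIMENSIONS, clf.predict(X_test)[0])))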
2021
The current study aims to observe the triggers behind language aggression displayed on social media platforms. Aggressiveness is generally performed by individuals with a dominant attitude, high self-esteem and a positive mindset towards violence. It develops not only from personality traits but also from experienced trauma or domestic abuse, leading to antisocial tendencies, an absence of sympathy towards others, and an inability to comprehend another's feelings and experiences. Aggressiveness can lead to repeated and constant acts of harassment intended to cause persistent trauma and fear in one's life (Willard 2007a: 33). There is a significant difference between traditional and online bullying, as presence on the internet inevitably involves vulnerability and a high chance of becoming a victim of harassment. Language aggression is a complex concept that is frequently encountered on the Internet. Online content and comments are commonly and intentionally obnoxious and hateful. Furthermore, offensive language is regularly used with regard to one's race, sex, belief, political views, gender identity, and social status, thus serving as a vehicle of online bullying (Bernstein et al. 2011 qtd. in Zimmerman 2012: 1). Therefore, the practical part focuses on categorizing the hate comments retrieved from Zoe LaVerne's TikTok account according to the elaborated typologies: obscenity and indecency, swearing and cursing, irony and sarcasm, name-calling, blasphemy and profanity, and hate speech. The aim of my project is to show the different manifestations of language aggression. Keywords: language aggression, social media platforms, hate comments
arXiv, 2021
The ability to quantify incivility online, in news and in congressional debates, is of great interest to political scientists. Computational tools for detecting online incivility in English are now fairly accessible and could potentially be applied more broadly. We test the Jigsaw Perspective API for its ability to detect the degree of incivility on a corpus that we developed, consisting of manual annotations of civility in American news. We demonstrate that toxicity models, as exemplified by Perspective, are inadequate for the analysis of incivility in news. We carry out an error analysis that points to the need to develop methods for removing spurious correlations between incivility and words often mentioned in the news, especially identity descriptors. Without such improvements, applying Perspective or similar models to news is likely to lead to wrong conclusions that are not aligned with the human perception of incivility.
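For context, the Perspective API evaluated here is queried over HTTP; a minimal sketch follows, assuming a valid API key (the placeholder key and the example sentence are ours).

# Sketch: scoring a sentence with the Jigsaw Perspective API.
# Requires a real API key; the placeholder and example text are ours.
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder, issued via Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "Politicians who vote for this are traitors."},
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}},
}
resp = requests.post(URL, json=payload, timeout=10)
resp.raise_for_status()
score = resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"TOXICITY = {score:.2f}")  # 0..1; higher = more likely toxic

A score like this is what the paper compares against human judgments of incivility in news, where it finds the model's word-level correlations (for example with identity descriptors) lead it astray.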
arXiv (Cornell University), 2021
Proceedings of the Conference for Truth and Trust Online 2019
New Media & Society, 2017
PLoS ONE, 2020
Proceedings of the First Workshop on Abusive Language Online, 2017
Sage Open, 2020
arXiv (Cornell University), 2023
Are These Comments Triggering? Predicting Triggers of Toxicity in Online Discussions, 2020
Proceedings of the International AAAI Conference on Web and Social Media, 2018
Applied Network Science, 2021
New Media & Society, 2018
International Journal of Technoethics, 2017
Perspectives on Politics, 2021
International Journal of Computer Science and Engineering, 2024