TY - JOUR
T1 - The use of trigger warnings on social media
T2 - a text analysis study of X
AU - Vit, Abigail Paradise
AU - Puzis, Rami
N1 - Publisher Copyright: © 2025 Vit, Puzis. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
PY - 2025/4/1
Y1 - 2025/4/1
N2 - Trigger warnings are placed at the beginning of potentially distressing content to provide individuals with the opportunity to avoid the content before exposure. Social media platforms use artificial intelligence to add automatic trigger warnings to certain images and videos, but such warnings are less commonly applied to textual content. This leaves the responsibility of adding trigger warnings with the authors, and a failure to do so may expose vulnerable users to sensitive or upsetting content. Due to limited research attention, there is a lack of understanding of what content social media users do or do not consider triggering. To address this gap, we examine the use of trigger warnings in tweets on X, previously known as Twitter. We used a large language model (LLM) for zero-shot learning to identify the types of trigger warnings (e.g., violence, abuse) used in tweets and their prevalence. Additionally, we employed sentiment and emotion analysis to examine each trigger warning category, aiming to identify prevalent emotions and overall sentiment. Two datasets were collected: 48,168 tweets with explicit trigger warnings and 4,980,466 tweets with potentially triggering content. The analysis of the smaller dataset indicates that users have applied trigger warnings more frequently over the years and are applying them to a broader range of content categories than they did in the past. These findings may reflect users’ growing interest in creating a safe space and a supportive online community that is aware of diverse sensitivities among users. Despite these findings, our analysis of the larger dataset confirms a lack of trigger warnings in most potentially triggering content.
AB - Trigger warnings are placed at the beginning of potentially distressing content to provide individuals with the opportunity to avoid the content before exposure. Social media platforms use artificial intelligence to add automatic trigger warnings to certain images and videos, but such warnings are less commonly applied to textual content. This leaves the responsibility of adding trigger warnings with the authors, and a failure to do so may expose vulnerable users to sensitive or upsetting content. Due to limited research attention, there is a lack of understanding of what content social media users do or do not consider triggering. To address this gap, we examine the use of trigger warnings in tweets on X, previously known as Twitter. We used a large language model (LLM) for zero-shot learning to identify the types of trigger warnings (e.g., violence, abuse) used in tweets and their prevalence. Additionally, we employed sentiment and emotion analysis to examine each trigger warning category, aiming to identify prevalent emotions and overall sentiment. Two datasets were collected: 48,168 tweets with explicit trigger warnings and 4,980,466 tweets with potentially triggering content. The analysis of the smaller dataset indicates that users have applied trigger warnings more frequently over the years and are applying them to a broader range of content categories than they did in the past. These findings may reflect users’ growing interest in creating a safe space and a supportive online community that is aware of diverse sensitivities among users. Despite these findings, our analysis of the larger dataset confirms a lack of trigger warnings in most potentially triggering content.
UR - http://www.scopus.com/inward/record.url?scp=105004296001&partnerID=8YFLogxK
U2 - 10.1371/journal.pone.0322549
DO - 10.1371/journal.pone.0322549
M3 - Article
C2 - 40305437
SN - 1932-6203
VL - 20
JO - PLoS ONE
JF - PLoS ONE
IS - 4 April
M1 - e0322549
ER -