Quarterly Insights: Online Hate and Toxicity Trends (Q2 2025 Report)
This report is the third in a series providing a quarterly analysis of online harm trends. This edition examines patterns observed in April, May, and June 2025.
The data for April, May, and June cover 12.29 million messages in 26 languages, collected across 19 social media platforms, including Reddit, X, 4chan, Gab, YouTube, Facebook, Threads, and Instagram.
Toxicity over time
This timeline shows the average toxicity levels on social media during Q2 2025.
The average toxicity score remained stable at 0.20 throughout the quarter, with only minor fluctuations from day to day. A brief uptick was observed in early May, peaking at 0.22, before returning to steady levels for the remaining period.
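Daily averages like those above can be derived by grouping per-message toxicity scores by date. The sketch below illustrates one way to do this; the message scores and the scoring pipeline are invented for illustration and are not the report's actual methodology.

```python
# Hypothetical sketch: daily average toxicity from per-message scores.
# The scores below are illustrative, not data from the report.
from collections import defaultdict
from statistics import mean

messages = [
    {"date": "2025-05-02", "toxicity": 0.31},
    {"date": "2025-05-02", "toxicity": 0.13},
    {"date": "2025-05-03", "toxicity": 0.18},
]

# Group scores by calendar day, then average each day's bucket.
by_day = defaultdict(list)
for msg in messages:
    by_day[msg["date"]].append(msg["toxicity"])

daily_avg = {day: round(mean(scores), 2) for day, scores in by_day.items()}
print(daily_avg)  # {'2025-05-02': 0.22, '2025-05-03': 0.18}
```

Plotting these daily averages over the quarter yields a timeline of the kind described above.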
Regional breakdown
This section outlines toxicity trends across four European regions: Western Europe, Southern Europe, Northern Europe, and Eastern Europe.
During the period from April to June 2025, toxicity levels exhibited clear regional variations across Europe.
Western Europe consistently registered the highest average toxicity, fluctuating between 0.20 and 0.24 throughout the quarter.
Southern Europe followed closely, maintaining moderate toxicity levels between 0.19 and 0.22, with brief peaks in mid-April and early June.
Eastern Europe recorded lower overall toxicity, ranging from 0.11 to 0.15, with increases observed in mid-April and June.
Northern Europe remained the least toxic region, with values largely between 0.09 and 0.15.
VLOPs vs non-VLOPs
The following graph compares average toxicity levels on Very Large Online Platforms (VLOPs), including Facebook, X, YouTube, Instagram, and TikTok, with those on non-VLOPs such as 4chan, Gab, Reddit, Telegram, and Threads, between April and June 2025.
4chan remained the clear outlier, averaging around 0.40–0.45 and reinforcing its status as the most extreme site in the dataset. Note that data is missing between 15 and 25 April, corresponding to a nearly two-week outage during which 4chan was offline. This gap partly explains discontinuities in the trend and should be kept in mind when interpreting mid-April fluctuations.

Gab followed, with average toxicity between 0.25 and 0.30, suggesting the persistently hostile environment typical of fringe platforms; occasional peaks above 0.30 occurred in early May and mid-June. Reddit maintained a moderate but elevated profile around 0.26–0.27. Its stability contrasts with Telegram, which showed far greater variability, ranging from as low as 0.08 to spikes above 0.40. Threads remained comparatively moderate, averaging between 0.22 and 0.26, with short-lived peaks in mid-May.
Among VLOPs, X (formerly Twitter) recorded the highest average toxicity, holding steady between 0.21 and 0.22, with a sharp spike to 0.5 on 28 May. YouTube followed, maintaining a steady toxicity range of 0.17 to 0.19. Instagram showed moderate toxicity overall (about 0.15–0.17) but with noticeable variation between 10 and 27 May, when levels briefly rose and fell. Facebook recorded even lower toxicity, fluctuating between 0.13 and 0.15, marking it as one of the least toxic VLOPs in the dataset. Finally, TikTok consistently registered the lowest toxicity scores overall, between 0.09 and 0.12.
Overall, VLOPs exhibited notably lower and more stable toxicity baselines than non-VLOP platforms such as Gab, Reddit, or Telegram.
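The gap between the two groups can be checked by averaging the per-platform figures cited above. The values below are rough midpoints of the quarterly ranges reported in this section, used purely for illustration.

```python
# Illustrative comparison of VLOP vs non-VLOP average toxicity.
# Per-platform values approximate the quarterly ranges cited in the text.
from statistics import mean

VLOPS = {"X": 0.215, "YouTube": 0.18, "Instagram": 0.16,
         "Facebook": 0.14, "TikTok": 0.105}
NON_VLOPS = {"4chan": 0.425, "Gab": 0.275, "Reddit": 0.265,
             "Threads": 0.24}

vlop_avg = mean(VLOPS.values())
non_vlop_avg = mean(NON_VLOPS.values())
print(f"VLOP average: {vlop_avg:.2f}, non-VLOP average: {non_vlop_avg:.2f}")
# VLOP average: 0.16, non-VLOP average: 0.30
```

Even with these rough midpoints, the non-VLOP group averages roughly twice the toxicity of the VLOP group, consistent with the pattern described above.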
Hate speech by category
The data reveal distinct thematic overlaps across the six monitored baselines, highlighting how different hate narratives are shaped by intersecting discursive categories.
Antisemitic content shows the strongest association with racism (302.8%), religion (172.9%), and politics (152.7%), confirming its position as one of the most ideologically complex and virulent baselines. High levels of threats (74.5%) and ridicule (45.5%) further indicate that hostility is often expressed through both ideological and dehumanising language.
Anti-Muslim narratives are primarily defined by religious framing (303.8%) and racism (189.4%), closely mirroring antisemitic discourse but with a stronger religious component. Political (113.5%) and threat-related (74.3%) content also feature prominently, underscoring their highly intersectional and confrontational nature.
Anti-LGBTQ+ content is overwhelmingly driven by sexism (220.8%) and ridicule (45.4%), reflecting its strong gendered dimension and the prevalence of mocking or contemptuous expression. Obscenity (32.7%) and profanity (29.3%) are also markedly higher than in other baselines, illustrating the frequent use of vulgar or explicit language.
Sexism-related discourse is characterised by strong overlaps with sexism itself (194.7%), along with ridicule (55.5%), obscenity (31.0%), and threats (61.4%), revealing a combination of degrading and aggressive tones. Political (74.9%) and religious (42.1%) elements suggest that gendered hostility is also tied to broader ideological and moral debates.
Anti-refugee narratives are overwhelmingly political (173.3%) and racialised (143.8%), reinforcing their focus on migration, identity, and social cohesion. Elevated levels of threat (57.9%) and contempt (21.5%) highlight the aggressive and exclusionary framing often used in such discussions.
Anti-Roma discourse displays the strongest racial dimension (205.8%) after antisemitism, alongside notable shares of political (55.5%) and contemptuous (23.5%) content. While sexism (18.3%) and religion (36.0%) are less central, the data point to a persistent pattern of demeaning and stigmatising language.
Across all baselines, racism, religion, politics, and threats emerge as the dominant drivers of online toxicity. Antisemitic and anti-Muslim narratives show the highest ideological density, while anti-LGBTQ+ and sexist discourse are more strongly characterised by gendered and vulgar expression. In contrast, anti-refugee and anti-Roma content remain rooted in racialised and political hostility, reflecting enduring prejudices within migration-related debates.
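The report does not define how the overlap percentages above are computed, and several exceed 100%. One possible reading, assumed here purely for illustration, is an over-representation index: the rate at which a discursive category appears within a baseline, divided by its rate across the full corpus, expressed as a percentage. The function and the example rates below are hypothetical.

```python
# Hypothetical over-representation index (an assumption, not the report's
# documented metric): how much more often a category (e.g. racism) appears
# inside a baseline (e.g. antisemitic content) than in the corpus overall.
def overlap_index(rate_in_baseline: float, rate_in_corpus: float) -> float:
    """Category share within the baseline relative to the corpus, in percent."""
    return 100 * rate_in_baseline / rate_in_corpus

# Invented example: racism flagged in 18% of a baseline's messages,
# versus 6% of messages corpus-wide, gives an index of 300%.
print(round(overlap_index(0.18, 0.06), 1))  # 300.0
```

Under this reading, a value such as 302.8% would mean the category is about three times as prevalent within the baseline as in the corpus overall, which would explain figures above 100%.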