A Positive Sign That the DSA Is Working?

 

As we prepare for the second phase of the European Observatory of Online Hate, our team has continued gathering and analysing social media data.

In our latest dataset, covering September 2023 to May 2024, we collected 90 million messages through our monitoring tool. In our previous report, “Online Hate Speech 2023,” we documented a concerning rise in online toxicity throughout the first three quarters of 2023. However, recent trends offer a glimmer of hope.

 
 

Evolution of toxicity

For the first time since 2022, we observed a notable decrease in online toxicity in the first quarter of 2024, starting in February. This coincides with the full implementation of the Digital Services Act (DSA), the European legislation designed to create a safer, more transparent online environment. On February 17, 2024, the DSA became enforceable for all platforms operating in the EU, not only the largest ones. Among its various measures, the DSA empowers users to flag and report illegal content directly to platforms. This regulatory oversight appears to have had a tangible impact on reducing harmful online behaviour.
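As a rough illustration of the kind of aggregation behind this observation, the sketch below (Python with pandas; the column names and the synthetic data are assumptions chosen for illustration, not our actual schema or figures) groups per-message toxicity scores by month and tracks the month-over-month change in the mean:

```python
import pandas as pd

def monthly_toxicity_trend(messages: pd.DataFrame) -> pd.DataFrame:
    """Mean toxicity per calendar month, plus the month-over-month change.

    Assumes one row per message, with a datetime 'timestamp' column and a
    'toxicity' score in [0, 1] (e.g. from a Perspective-style classifier).
    """
    monthly = (
        messages
        .assign(month=messages["timestamp"].dt.to_period("M"))
        .groupby("month")["toxicity"]
        .mean()
        .rename("mean_toxicity")
        .to_frame()
    )
    # A sustained negative delta from February 2024 onwards would correspond
    # to the decrease described in the text.
    monthly["delta"] = monthly["mean_toxicity"].diff()
    return monthly

# Example with synthetic data (not our actual figures):
demo = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-15", "2024-02-10", "2024-03-05"]),
    "toxicity": [0.32, 0.25, 0.21],
})
print(monthly_toxicity_trend(demo))
```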

 
 

Mainstream vs. fringe

Under the new DSA regulations, the European Commission designated 17 platforms as Very Large Online Platforms (VLOPs), subjecting them to stricter content moderation obligations from August 25, 2023. Our analysis reveals a significant gap in toxicity levels between mainstream platforms (Facebook, X, Reddit, Instagram, TikTok) and fringe platforms (4chan, Gab, Minds). While fringe platforms continue to exhibit high average toxicity scores (between 0.4 and 0.5), mainstream platforms have seen a reduction, with scores approaching 0.1.
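This comparison can be made concrete with a minimal sketch, again assuming a simple per-message table; the platform grouping below mirrors the platforms named above and is not an official DSA classification:

```python
import pandas as pd

# Hypothetical grouping, mirroring the platforms named in the text;
# this is not an official DSA classification.
MAINSTREAM = {"Facebook", "X", "Reddit", "Instagram", "TikTok"}
FRINGE = {"4chan", "Gab", "Minds"}

def toxicity_by_group(messages: pd.DataFrame) -> pd.Series:
    """Mean toxicity per platform group.

    Assumes one row per message with a 'platform' label and a
    'toxicity' score in [0, 1].
    """
    group = messages["platform"].map(
        lambda p: "mainstream" if p in MAINSTREAM
        else "fringe" if p in FRINGE
        else "other"
    )
    return messages.groupby(group)["toxicity"].mean()

# Example with synthetic data (not our actual figures):
demo = pd.DataFrame({
    "platform": ["Facebook", "X", "4chan", "Gab"],
    "toxicity": [0.08, 0.12, 0.47, 0.44],
})
print(toxicity_by_group(demo))
```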

Why have mainstream platforms managed to reduce toxicity while fringe platforms have not? Could it be due to the greater public scrutiny and resources available to mainstream platforms? Or is it because fringe platforms attract users seeking fewer restrictions and minimal moderation? These questions highlight the complex dynamics at play in the digital landscape under the DSA.

 
 

Observations and implications

Using our monitoring tool, we have observed a decrease in average toxicity and a drop in the proportion of highly toxic messages on mainstream platforms from the enforcement of the DSA onwards. This raises important questions about the potential positive impact of the DSA’s enforcement on curbing illegal online hate speech on mainstream platforms.
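One way to frame this before/after comparison, purely as a sketch (the February 17, 2024 cut-off comes from the enforcement date above, while the 0.8 “high toxicity” threshold is an assumption chosen for illustration), is to compute both the mean score and the share of highly toxic messages on each side of the enforcement date:

```python
import pandas as pd

DSA_FULL_ENFORCEMENT = pd.Timestamp("2024-02-17")  # DSA applicable to all platforms
HIGH_TOXICITY = 0.8  # illustrative threshold, not an official definition

def before_after_dsa(messages: pd.DataFrame) -> pd.DataFrame:
    """Mean toxicity and share of highly toxic messages, before vs. after
    full DSA enforcement.

    Assumes one row per message with datetime 'timestamp' and 'toxicity'
    columns.
    """
    period = messages["timestamp"].lt(DSA_FULL_ENFORCEMENT).map(
        {True: "before", False: "after"}
    )
    return messages.groupby(period)["toxicity"].agg(
        mean_toxicity="mean",
        high_toxicity_share=lambda s: (s >= HIGH_TOXICITY).mean(),
    )

# Example with synthetic data (not our actual figures):
demo = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-20", "2024-01-25", "2024-03-01"]),
    "toxicity": [0.85, 0.30, 0.10],
})
print(before_after_dsa(demo))
```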

Moreover, the differentiation in toxicity levels between mainstream and fringe platforms suggests that the DSA’s impact is more pronounced where there is greater regulatory compliance and oversight. Mainstream platforms, equipped with more resources and advanced moderation tools, can implement the DSA’s measures more effectively. Fringe platforms, by contrast, place less emphasis on moderation and/or deliberately position themselves as spaces with minimal speech restrictions; they show limited change, indicating that the DSA’s enforcement strategies may need to be adapted for these environments.

However, further research is needed to verify and expand upon these initial observations. It is crucial to investigate whether these trends persist over a longer period and understand the specific mechanisms through which the DSA influences online behaviour. Additionally, studying the user experience and feedback on these platforms can provide insights into the efficacy and areas for improvement in the DSA’s implementation.

As we continue to monitor and analyse the evolving digital landscape, these findings underscore the importance of robust regulatory frameworks like the DSA in creating safer online environments. 

 