Belgian AI company leads EU research on online hate
Textgain leads the launch of a new European research center on social media hate speech
Antwerp, January 18, 2021 – Belgian technology company Textgain will lead a new European research center on online hate, disinformation, and the development of ethical Artificial Intelligence (AI) for all EU language regions. The initiative is supported by the European Commission's DG Justice under the Rights, Equality and Citizenship (REC) programme.
All languages, all stakeholders
‘We are not focusing on any one particular political ideology, religious belief or ethnic group,’ says Textgain’s project manager Gijs van Beek. ‘We are interested in the increasingly outspoken toxic language being used on social media that drives real-life conflict and harm, how people express discontent and why.’
The new European Observatory of Online Hate (EOOH) will bring together stakeholders – over 50 organizations and experts in total – and combine their expertise with transparent AI covering all European languages. The aim is to get a grip on evolving online trends, so that experts can use the collected insights for early or reconciliatory countermeasures.
From tweets to deeds
The recent riots at the US Capitol once again demonstrate how unchecked online hate and disinformation can have real-world consequences. Democracies around the world, including in the EU, are being challenged by a surge of polarizing worldviews, hate and conspiracy theories.
As early as 2016, the European Commission agreed on a Code of Conduct with Facebook, Microsoft, Twitter and YouTube to counter online hate. In 2017, Belgium's Coordination Unit for Threat Analysis (CUTA) noted that removing hateful content is only one part of the solution and advised closer collaboration between stakeholders. Since 2019, the EU has also called for the development of more transparent AI that respects privacy, human dignity and freedom of expression.
Online hate and disinformation on the rise
Even today, jihadist groups remain active online, using social media to attract new followers. In turn, Islamophobic propaganda from far-right organizations has proliferated. These are just two examples of co-radicalization dynamics. Most recently, the worldwide pandemic has spurred a surge in online discord and in anti-establishment, anti-science and anti-Semitic conspiracy theories.
Conspiracy theories about sinister pedophile networks, forced government vaccination, surveillance, 5G, and even alien reptiles disguised as politicians are far from harmless at a time when worried citizens need accurate information.
Surveillance bots?
Social media companies increasingly rely on automatic detection tools to curb online hate on their platforms, but the societal impact of these systems is not always clear: how do they make decisions, and who is accountable? There is a need for more transparent technology.
Textgain analyst Olivier Cauberghs, a former law enforcement expert on radicalization: ‘Exposure to extremist language on social media influences how people act in real life. Look, for example, at the US presidential election or the Christchurch shooting in 2019.’ He adds: ‘Here in Europe, the many languages make it an additional challenge to form an overarching picture. We want to address that with a combination of human expertise and new technology, developed here in Europe and tailored to our democratic values and privacy regulations.’
Transparent AI
Over the course of the initiative’s R&D cycle, new ‘Explainable AI’ will be developed to analyze online trends across all European language regions; human experts can then use these insights for information gathering and for advocacy campaigns with positive counter-narratives.
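To make the idea of explainability concrete, the sketch below shows, in Python, what a fully transparent approach could look like: a lexicon-based toxicity scorer whose every output can be traced back to the weighted terms that triggered it. The lexicon entries, weights and function names are invented for illustration only and do not reflect the Observatory's actual models or data.

```python
# Minimal illustrative sketch (hypothetical, not the EOOH system):
# a transparent, lexicon-based toxicity scorer. Unlike a black-box model,
# every score comes with the exact matched terms that explain it.

import re

# Hypothetical lexicon: terms mapped to toxicity weights between 0 and 1,
# as a human expert might curate per language.
LEXICON = {
    "vermin": 0.9,
    "traitor": 0.7,
    "invasion": 0.5,
}

def score(text):
    """Return (toxicity score, list of matched terms with weights)."""
    tokens = re.findall(r"[a-z]+", text.lower())
    matches = [(t, LEXICON[t]) for t in tokens if t in LEXICON]
    if not matches:
        return 0.0, []
    # The score is the highest matched weight; the match list is the 'explanation'.
    return max(weight for _, weight in matches), matches

toxicity, evidence = score("He is a traitor and vermin.")
print(toxicity)   # 0.9
print(evidence)   # [('traitor', 0.7), ('vermin', 0.9)]
```

In practice such expert-curated lexicons would be only one component alongside statistical models, but the principle illustrated here – that a human can inspect exactly why a message was flagged – is what distinguishes transparent AI from opaque moderation systems.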
Textgain and its partners Dare to be Grey (dtbg.nl), Hogeschool Utrecht (hu.nl) and PDCS (pdcs.sk) will reach out to and promote dialogue between more than 50 partner organizations, including European law enforcement and security agencies, policy makers, human rights organizations, and investigative journalism and citizen science initiatives.