The struggle to control online hate speech has been a constant headache for big tech companies – but a team of mathematicians and physicists might have come up with a solution.
In a report published in Nature, scientists have shown how previous attempts to tackle online hate groups have often strengthened overall ‘hate networks’, instead of weakening them as intended.
The researchers advocate a new, more sophisticated approach, arguing that Big Tech’s failings in the battle to regulate online hate speech are having disastrous real-world consequences.
The team, led by Essex-born physicist Neil F. Johnson at George Washington University, used a sophisticated mathematical model to show that ‘policing within a single platform (such as Facebook) can make matters worse, and will eventually generate global ‘dark pools’ in which online hate will flourish.’
Hate speech is not restricted to individual networks or countries, the report argues: it lives in clusters and paths, allowing ideology to travel down ‘hate highways’ and to shift location whenever just one of the clusters is targeted by a big tech platform.
When certain hate speech bad actors are removed, the overall network of hate speech ‘rewires’ itself and strengthens other connections. Removing toxic speech from one platform like Facebook just forces the network to migrate elsewhere, like a virus that can lie dormant in the body even after a person appears to have recovered.
The interconnectedness and global nature of social media means platforms cannot just be considered in isolation, the study’s authors argue.
Previous strategies to tackle online hate speech and misinformation have included wholesale internet bans and automated content removals on individual platforms.