There's more to content moderation than deplatforming.
Florida Gov. Ron DeSantis' signing of a bill that penalizes social media companies for deplatforming politicians was yet another salvo in an escalating struggle over the growth and spread of digital disinformation, malicious content and extremist ideology. While Big Tech, world leaders and policymakers — along with many of us in the research community — all recognize the importance of mitigating online and offline harm, agreement on how best to do that remains elusive.
Big Tech companies have approached the problem in different ways and with varying degrees of success. Facebook, for example, has had considerable success containing malicious content by blocking links to domains characterized by disinformation and hateful content, and by removing keywords from its search index that link to hate and supremacist movements. Additionally, Facebook and Twitter have both deplatformed producers and purveyors of malicious content and disinformation, including, famously, a former U.S. president.