Controlling the spread of COVID-19 misinformation and its weaponization against certain demographics (e.g. anti-Asian hate) -- in particular, by the online hate community of neo-Nazis and other extremists -- is now an urgent problem [1-6]. In addition to undermining public health policies, malicious COVID-19 narratives are already translating into offline violence. Making matters worse, each social media platform is effectively its own universe, i.e. a commercially independent entity subject to particular legal jurisdictions, and hence can, at best, control only the content within its own universe. Moreover, far less moderated platforms are now proliferating, thanks to open-source software that enables decentralized setups across multiple locations.
Winning the war against such malicious material will require an understanding of the entire online battlefield, together with new policing approaches that do not rely on global collaboration between social media platforms. Here we offer such a combined solution. Specifically, Figs. 1 and 2 show how malicious COVID-19 content exploits the existing online hate network to spread quickly between platforms and hence beyond the control of any single platform (Fig. 1A,B). The Methods and Supplementary Information (SI) give details and examples of this material. Links between distinct platforms (i.e. universes) act like wormholes, creating a huge, decentralized multiverse that connects hate communities (nodes with black circles, Fig. 1B) to the mainstream (nodes without black circles, Fig. 1B). Figure 1B comprises ~10,000,000 users across languages and continents who have organized themselves into ~6,000 interlinked public clusters, i.e. online communities such as a Facebook page, VKontakte group, or Telegram channel, each represented as a node in Figs. 1 and 2. These new insights inform the policy prescriptions offered in Fig. 3.