The researchers investigated the effects of Facebook’s and Instagram’s feed algorithms during the 2020 US election. They assigned a sample of consenting users to reverse-chronologically ordered feeds instead of the default algorithmic feeds. Moving users out of algorithmic feeds substantially decreased the time they spent on the platforms and their on-platform activity. The chronological feed also changed what users were exposed to: the amount of political and untrustworthy content they saw increased on both platforms, the amount of content classified as uncivil or containing slur words they saw decreased on Facebook, and the amount of content from moderate friends and from sources with ideologically mixed audiences they saw increased on Facebook. Despite these substantial changes in users’ on-platform experience, the chronological feed did not significantly alter levels of issue polarization, affective polarization, political knowledge, or other key attitudes during the 3-month study period.
What are the effects of the machine-learning algorithms used by social media companies on elections and politics? The notion that such algorithms create political “filter bubbles,” foster polarization, exacerbate existing social inequalities, and enable the spread of disinformation has become rooted in the public consciousness. Because the algorithms used by social media companies are largely opaque to users, numerous conceptions or “folk theories” of how they work have emerged, along with disagreements about their effects. Understanding how these systems influence key individual-level political outcomes is thus of paramount scientific and practical importance.