In this paper, the researchers show that standard machine learning can acquire stereotyped biases from textual data that reflect everyday human culture. The general idea that text corpora capture semantics, including cultural stereotypes and empirical associations, has long been known in corpus linguistics, but their findings add to this knowledge in three ways.
First, they used word embeddings, a powerful tool for extracting associations captured in text corpora; this method substantially amplifies the signal present in the raw co-occurrence statistics. Second, their replication of documented human biases may yield tools and insights for studying prejudicial attitudes and behavior in humans. Third, because they performed their experiments on off-the-shelf machine learning components (primarily the GloVe word embedding), they show that cultural stereotypes propagate into Artificial Intelligence (AI) technologies in widespread use.
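To make the embedding-based association measure concrete, the following is a minimal sketch of how such associations can be read out of pretrained GloVe vectors using cosine similarity: a word's differential association is its mean similarity to one attribute set minus its mean similarity to another. The file name `glove.6B.300d.txt`, the helper functions, and the word lists are illustrative assumptions, not the paper's exact test statistic or stimuli.

```python
import numpy as np

def load_glove(path):
    """Load GloVe vectors from a plain-text file (one word followed by its floats per line)."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, set_a, set_b, vec):
    """Differential association of `word` with attribute sets A and B:
    mean cosine similarity to A minus mean cosine similarity to B."""
    return (np.mean([cosine(vec[word], vec[a]) for a in set_a])
            - np.mean([cosine(vec[word], vec[b]) for b in set_b]))

# Hypothetical usage: compare a target word's proximity to pleasant vs. unpleasant terms.
vec = load_glove("glove.6B.300d.txt")  # assumed local copy of pretrained GloVe vectors
pleasant = ["joy", "love", "peace"]
unpleasant = ["agony", "terrible", "horrible"]
print(association("flower", pleasant, unpleasant, vec))  # expected: positive score
print(association("insect", pleasant, unpleasant, vec))  # expected: lower score
```

A positive score indicates the target word sits closer to the first attribute set in embedding space; comparing scores across target words is the basic mechanism by which corpus-derived stereotypes become measurable.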