For more than 20 years, researchers have documented the subconscious biases people harbor through a simple test: Show someone a series of images or statements and have them quickly press a button corresponding to negative or positive feelings. An implicit bias test for sexism, for example, might involve viewing dozens of images of people performing different tasks and pressing the “e” key for “pleasant” and the “i” key for “unpleasant.”
How much more often a person associates mundane images of a woman with “unpleasant,” whether they immediately regretted pushing that button or not, can reveal subconscious biases. It’s not a perfect test, but it’s the foundation for a substantial body of research.
Now, researchers have adapted the Implicit Association Test to develop an assessment technique designed to detect a deeper level of bias in computer vision models than had previously been documented. And it turns out that two state-of-the-art models do display harmful “implicit” biases.
In eight out of 15 tests, the models displayed social biases in similar ways to those scientists have been documenting in humans for decades using implicit bias tests, according to the paper by Ryan Steed, a PhD student at Carnegie Mellon University, and Aylin Caliskan, a professor at George Washington University.
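At a high level, tests of this kind compare how strongly a model's learned representations of two target groups associate with “pleasant” versus “unpleasant” attribute concepts, summarized as a Cohen's-d-style effect size. The sketch below, in the spirit of the embedding association tests the paper builds on, is illustrative only: the function names are invented here, and the random vectors stand in for image embeddings a real vision model would produce.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # s(w, A, B): mean similarity of embedding w to the "pleasant"
    # attribute embeddings A, minus its mean similarity to the
    # "unpleasant" attribute embeddings B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    # Differential association between two target groups X and Y
    # (e.g. embeddings of images of two social groups), normalized
    # by the pooled standard deviation -- a Cohen's-d-style score.
    sx = [association(x, A, B) for x in X]
    sy = [association(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# Toy example: random vectors standing in for model embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 16))  # target group 1
Y = rng.normal(size=(8, 16))  # target group 2
A = rng.normal(size=(8, 16))  # "pleasant" attribute exemplars
B = rng.normal(size=(8, 16))  # "unpleasant" attribute exemplars
print(effect_size(X, Y, A, B))
```

A score near zero indicates no differential association; a large positive or negative score indicates that one group's embeddings sit measurably closer to one attribute pole, which is what the researchers' tests look for.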