AI Systems Don’t Just Reflect Our Biases, They Amplify Them

Humans are naturally biased. Our brains love shortcuts, so we rely on patterns, past experiences, and even unconscious influences to make decisions. These shortcuts can be useful, but they can also lead to unfair, prejudiced, and irrational judgments.
When artificial intelligence is added to the mix, human biases can become even more complicated. From screening job applications to facial recognition technology, AI now shapes decisions that affect real people. At the same time, AI is only as unbiased as the data it learns from.
According to new research, AI systems don't just reflect our biases; they strengthen them, leaving the humans who use them gradually more biased over time.
Researchers from University College London and MIT found that small biases can become larger after regular human-AI interaction.
The effect was much stronger than when humans interact with other humans, suggesting that we process and internalize AI-generated information differently.
“People are inherently biased, so when we train AI systems on sets of data that have been produced by people, the AI algorithms learn the human biases that are embedded in the data,” said Tali Sharot, a co-lead author of the study.
“AI then tends to exploit and amplify these biases to improve its prediction accuracy.”
The research team investigated the phenomenon through a series of experiments involving a total of 1,401 participants.
In one test, participants viewed groups of 12 faces, each displayed for half a second, and then judged whether the group as a whole looked more happy or more sad.
Initially, the participants showed a slight bias, classifying the faces as sad about 53 percent of the time. An AI system, a convolutional neural network, was then trained on these human judgments, and it amplified the bias considerably, labeling faces as sad about 65 percent of the time.
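To see how that kind of amplification can arise, consider a deliberately simplified sketch (not the study's actual model or data): a toy simulation in which a basic logistic regression classifier, standing in for the CNN, is trained on noisy human labels that carry a small lean toward "sad." The made-up one-dimensional "face" feature, the 53 percent label rate, and all of the numbers below are assumptions chosen only to illustrate the mechanism.

# Illustrative sketch only, not the study's code: a logistic regression on a
# made-up one-dimensional "face" feature stands in for the CNN.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# True emotion of each face: 0 = happy, 1 = sad (balanced overall).
true_label = rng.integers(0, 2, size=n)

# A weak, noisy feature, so most faces look ambiguous to the model.
feature = (true_label + rng.normal(0.0, 3.0, size=n)).reshape(-1, 1)

# Simulated human raters: half the time they report the true emotion,
# otherwise they guess with a slight lean toward "sad", so that overall
# about 53 percent of their labels are "sad" (mirroring the bias above).
guess = (rng.random(n) < 0.56).astype(int)
human_label = np.where(rng.random(n) < 0.5, true_label, guess)

model = LogisticRegression().fit(feature, human_label)
pred = model.predict(feature)

# The model's hard "best guess" labels lean toward "sad" noticeably more
# often than the human labels it was trained on did.
print(f"human 'sad' rate: {human_label.mean():.1%}")
print(f"model 'sad' rate: {pred.mean():.1%}")

The point of the sketch is only the mechanism: when the inputs are ambiguous, a model that always commits to its single most likely answer will echo the majority tendency in its training labels more often than the labels themselves did.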
A new group of participants then interacted with the biased AI system and began to adopt its skewed perspective.
When the participants disagreed with the AI's judgment, they changed their minds 32.72 percent of the time.
By comparison, they changed their minds only 11.27 percent of the time when disagreeing with other humans. The effect was consistent across all of the tests.
Beyond the facial expression task, participants completed tests that involved judging the direction of dots moving across a screen and evaluating other people's performance on tasks.
The researchers discovered that participants were more likely to overestimate the performance of men after interacting with an AI system that had been intentionally programmed with gender bias.
“Not only do biased people contribute to biased AIs, but biased AI systems can alter people’s own beliefs so that people using AI tools can end up becoming more biased in domains ranging from social judgments to basic perception,” said Dr. Moshe Glickman, a co-lead author of the study.
The findings point to a feedback loop in which human and machine biases reinforce each other.
It is essential to understand this relationship between humans and AI as we continue to integrate AI systems into fields like healthcare and criminal justice.
The study was published in Nature Human Behaviour.