Artificial intelligence systems like ChatGPT and Gemini don’t just process information; they also make “judgements” about users, according to a new study.
AI models are quietly shaping our world, with organisations across diverse fields integrating them into workflows to make day-to-day decisions about who gets hired, who receives a bank loan and even who gets what medical advice. This makes it essential to better understand how these models arrive at critical decisions and how they differ from humans in this regard.
The new study shows that AI systems don’t just process information; they systematically “judge” people in ways that resemble human trust, albeit with critical caveats.
Researchers analysed 43,000 simulated decisions made by modern AI models alongside about 1,000 made by humans. They found that models like OpenAI’s ChatGPT and Google’s Gemini did not simply process information, but made judgements about people and appeared to form something like “trust” about them.
However, this “trust” differed in crucial ways from how humans trust each other.
In the study, AI models and human participants were given familiar decisions to make, like how much to lend a small business owner, whether to trust a babysitter, how to rate a boss or how much to donate to a non-profit founder.
Both AI and humans favoured people who seemed competent, honest and well-intentioned, suggesting the models grasp the basic ingredients of trust: competence, integrity and benevolence.
But while humans form a general impression of other people by blending multiple traits into a single, intuitive and holistic judgement, AI appears to do something very different. It follows a more rigid, “by-the-book” style of judgement that is consistent but less human, breaking people down into separate scores for competence, integrity and kindness, almost like columns in a spreadsheet. This style of judgement, researchers say, is less nuanced and makes biases harder to detect.
“People in our study are messy and holistic in how they judge others. AI is cleaner, more systematic and that can lead to very different outcomes,” explained Valeria Lerman, an author of the study published in the journal Proceedings of the Royal Society A.
The approach followed by AI models seemed to lead to a troubling pattern of amplified bias, scientists said.
For example, in financial scenarios the models’ decisions showed sizeable differences based solely on demographic traits, with older people frequently given more favourable outcomes.
“These divergences warrant careful attention when interpreting large language model trust-related outputs,” the study noted.
“Humans have biases, of course,” said Yaniv Dover, another author of the study. “But what surprised us is that AI’s biases can be more systematic, more predictable, and sometimes stronger.”
Moreover, researchers found there is no single “AI opinion”: different models can judge the same people differently.
“Two systems can look similar on the surface but behave very differently when making decisions about people,” Dr Lerman said.
Researchers warn the question is no longer whether we can trust AI systems, but whether we understand how they trust us.
“These systems are powerful. They can model aspects of human reasoning in a consistent way. But they are not human, and we shouldn’t assume they see people the way we do,” Dr Dover said.
The Independent has reached out to Google and OpenAI for comment about the study.