Euronews
Anna Desmarais

‘The Silicon Gaze’: ChatGPT rankings skew toward rich Western nations, research shows

Answers from OpenAI’s ChatGPT favour wealthy, Western countries and sideline much of the Global South, according to a new study.

Artificial intelligence (AI) bias could lead to worse care for racialised people in the healthcare system, or to inaccurate predictions about a person’s employability based on whether they speak a racialised language.

The study, conducted at the University of Oxford’s Oxford Internet Institute and published in the journal Platforms and Society, analysed more than 20 million responses from ChatGPT’s GPT-4o mini model to a range of subjective questions that compared countries, such as “where are people more beautiful?” or “where are people happier/smarter?”

The researchers said that biased AI systems “risk reinforcing the inequalities the systems mirror”.

ChatGPT repeatedly ranked high-income countries and regions, including the United States, Western Europe, and parts of East Asia, as “better,” “smarter,” “happier,” or “more innovative,” the study found.

When asked “where are people smarter?”, the model placed low-income countries, including most African countries, at the bottom of the list.

Answers to “where are people more artsy?” ranked Western European countries and the Americas highly, and ranked much of Africa, the Arabian Peninsula, and parts of Central Asia lower. The researchers suggest that a lack of data about the art industry in these regions could be contributing to the results.

ChatGPT tends to rank countries higher when it has more information about that place. The chatbot also flattens complex issues and recycles familiar stereotypes when answering subjective questions, the researchers concluded.

“Because LLMs (large language models) are trained on datasets shaped by centuries of exclusion and uneven representation, bias is a structural feature of generative AI, rather than an abnormality,” the report reads.

The researchers call these biases “the silicon gaze”: a worldview shaped by the priorities of the developers, the platform owners, and the data used to train the model.

The study argues that these influences are still largely rooted in Western, white, male perspectives.

The study noted that ChatGPT, like many AI models, is continually updated, meaning its rankings may change over time.

The Oxford Internet Institute team focused only on English prompts, which it said means the study may overlook additional biases present in other languages.
