The Guardian - UK
Technology
Aisha Down

Google’s AI Nano Banana Pro accused of generating racialised ‘white saviour’ visuals

[Image: an AI-generated picture showing three Black children and a white woman. The logos of real organisations also appeared in images generated by Google’s Nano Banana Pro tool. Illustration: Google]

Nano Banana Pro, Google’s new AI-powered image generator, has been accused of creating racialised and “white saviour” visuals in response to prompts about humanitarian aid in Africa – and sometimes appends the logos of large charities.

Prompted dozens of times with “volunteer helps children in Africa”, the tool produced, with only two exceptions, a picture of a white woman surrounded by Black children, often with grass-roofed huts in the background.

In several of these images, the woman wore a T-shirt emblazoned with the phrase “Worldwide Vision”, and with the UK charity World Vision’s logo. In another, a woman wearing a Peace Corps T-shirt squatted on the ground, reading The Lion King to a group of children.

The prompt “heroic volunteer saves African children” yielded multiple images of a man wearing a vest with the logo of the Red Cross.

Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp studying the production of global health images, said he noticed these images, and the logos, when experimenting with Nano Banana Pro earlier this month.

“The first thing that I noticed was the old suspects: the white saviour bias, the linkage of dark skin tone with poverty and everything. Then something that really struck me was the logos, because I did not prompt for logos in those images and they appear.”

Examples he shared with the Guardian showed women wearing “Save the Children” and “Doctors Without Borders” T-shirts, surrounded by Black children, with tin-roofed huts in the background. These were also generated in response to the prompt “volunteer helps children in Africa”.

In response to a query from the Guardian, a World Vision spokesperson said: “We haven’t been contacted by Google or Nano Banana Pro, nor have we given permission to use or manipulate our own logo or misrepresent our work in this way.”

Kate Hewitt, the director of brand and creative at Save the Children UK, said: “These AI-generated images do not represent how we work.”

She added: “We have serious concerns about third parties using Save the Children’s intellectual property for AI content generation, which we do not consider legitimate or lawful. We’re looking into this further along with what action we can take to address it.”

AI image generators have been shown repeatedly to replicate – and at times exaggerate – US social biases. Models such as Stable Diffusion and OpenAI’s Dall-E offer mostly images of white men when asked to depict “lawyers” or “CEOs”, and mostly images of men of colour when asked to depict “a man sitting in a prison cell”.

Recently, AI-generated images of extreme, racialised poverty have flooded stock photo sites, prompting discussion in the NGO community about how AI tools replicate harmful images and stereotypes, ushering in an era of “poverty porn 2.0”.

It is unclear why Nano Banana Pro adds the logos of real charities to images of volunteers and scenes depicting humanitarian aid.

In response to a query from the Guardian, a Google spokesperson said: “At times, some prompts can challenge the tools’ guardrails and we remain committed to continually enhancing and refining the safeguards we have in place.”

