A leading AI expert has warned Rishi Sunak to focus on the “real” risks of the tech being racist, sexist or homophobic instead of “far-fetched” threats it could make humans extinct.
Dr Mhairi Aitken, an AI ethics fellow at the Alan Turing Institute, urged the Prime Minister not to let big tech companies from Silicon Valley lead discussions on AI.
She said there was a worrying trend of distracting from the “very real” risks of AI by invoking “far-fetched hypothetical” dangers.
Her comments follow a “sensationalist” warning from firms including OpenAI, which developed ChatGPT, and Google DeepMind, which said that AI could lead to human extinction.
A host of big tech companies signed a letter last month arguing that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".
In an interview with the Mirror, Dr Aitken said: “There are reasons big tech is pushing this narrative and that is because it is in their interest to distract from the real risks of AI today.”
She explained there are “deliberate efforts to make AI sound more complex” to create the impression big tech companies are the only ones who can explain it.
“Those arguments are partly about trying to close down public debate,” she said.
Dr Aitken, who specialises in AI public policy, said we need to be focusing on the “tangible impacts” AI is having right now.
“AI models are full of biases. They’re trained on datasets that have a lot of biases in them and then they produce biased outputs, harmful outputs, outputs that contain stereotypes,” she said.
“We have seen things like ChatGPT being used to deliver advice to mental health patients who are experiencing eating disorders. This is really dangerous stuff and we need to look at the risks and the safety of its use in these contexts.
“We also need to look at how it is creating misinformation or fake news, or how AI can be used to make photorealistic images or voices.”
Other examples she cited of AI biases causing harm include health systems where models are biased towards people with white skin or towards men.
Likewise, she said AI used in police forces has been shown to reproduce existing racist policing biases.
Similarly, there are examples of image-generating platforms creating sexualised images of women but not men, because the AI is trained on existing datasets of images of women from the internet, she added.
Dr Aitken encouraged the Prime Minister, who is hosting the first global AI summit this autumn, to include evidence from impacted communities.
The press statement the government released on the summit included comments solely from big tech firms like Anthropic, Google DeepMind and Palantir.
Dr Aitken urged caution as she said big tech companies are driven by “commercial competitiveness”, recalling that ChatGPT was rushed out to the public without its risks being known.
“The summit needs to bring in people who have been working on AI ethics for many years - who were absent in the press release. It is troubling that big tech companies are the ones who are shaping these discussions because they have the big platforms to shout from,” she said.
“I really hope that the global AI summit, when it happens, centres the voices of civil society organisations, of impacted communities, of researchers who have been working in this area for a long time.
“If it does, that could be a great opportunity to really advance this field, but if it really focuses and prioritises the perspectives of big tech companies, that will be a really big concern.”