The Guardian - UK
Technology
Killian Fox

Human rights lawyer Susie Alegre: ‘If AI is so complex it can’t be explained, there are areas where it shouldn’t be used’

‘We don’t need to believe that it’s all inevitable and omnipotent’: Susie Alegre. Photograph: Tugce Nelson

Susie Alegre is an international human rights lawyer and author, originally from the Isle of Man, whose focus in recent years has been on technology and its impact on human rights. As a legal expert she has advised Amnesty International, the UN and other organisations on issues such as counter-terrorism and anti-corruption. Her first book, Freedom to Think, published in 2022 and shortlisted for the Christopher Bland prize, looked at the history of legal freedoms around thought. In her new book Human Rights, Robot Wrongs, she turns her attention to the ways in which AI threatens our rights in areas such as war, sex and creativity – and what we might do to fight back.

What prompted you to write this book?
There were two triggers. One was the sudden explosion of ChatGPT and the narrative about how everyone can be a novelist now and there’s going to be no need for human creators, because AI will be able to do it all for us. It felt utterly depressing. The second was the story about a Belgian man who took his own life after a six-week intensive relationship with an AI chatbot. His widow felt that, without this relationship, which distorted his worldview, he would have still been there for her and for his children. That triggered me to think – well, this is absolutely about the right to life; to family life, to freedom of thought and freedom from manipulation. And how are we thinking about AI and the really severe ways that it’s impacting our human rights?

You don’t give much credence to the threat of an AI apocalypse.
I think that’s a distraction. What we need to be worried about is putting limits on how AI can be developed, sold and used by people. And ultimately, there are people behind the technology, in the design phase and particularly in the marketing, and also in the choices that are being made about how it’s used.

Everything we’re hearing about AI suggests that it’s advancing at incredible speed, and that the models operate at levels of complexity that even their creators can’t grasp. How can regulators ever hope to keep up?
I think there’s an awful lot of smoke and mirrors. It’s like in The Wizard of Oz, when Toto pulls back the curtain and we see what’s going on behind. So we don’t need to believe that it’s all inevitable and omnipotent. We can still make choices and ask questions. Also, if something is so complex that it can’t be explained, then there are certain areas where it shouldn’t be used.

Do you think the existing legal systems and human rights charters are up to the task of dealing with AI, or do we need to create a new framework?
I don’t think we need a new framework, but what we really need is access to justice. There may well be certain legal avenues that need to be developed. But one of the really fundamental challenges is, how do you push back? How do you enforce regulation? And that’s what we’ve seen in relation to some big tech companies: their activities are found to be unlawful, they’re issued with huge fines, and they still carry on.

You have a very interesting chapter on sex robots and chatbots. What are the main concerns?
This was a whole area that I hadn’t really thought about before and I was quite horrified to realise how widespread the use of AI bots to replace human companionship is. The reason it worried me is because this is private-sector technology that’s being inserted into people’s lives to replace human relationships, and that is very dangerous in terms of social control. It’s not a question of morality but rather, what does this mean for human society and our ability to cooperate and connect?

Isn’t AI good news for people who can’t afford legal representation?
It depends. If you’re talking about very basic disputes where it’s just about knowing the rules, technology can improve access. But when you’re looking at more complex questions, the problem is that generative AI doesn’t actually know what the law is and it could well be giving you a load of old rubbish – and when something is delivered by a machine in an authoritative tone, it’s very hard for people to doubt it.

What happened when you asked ChatGPT: “Who is Susie Alegre?”
It said Susie Alegre didn’t exist, or at least didn’t appear on the internet. I felt a bit cheesed off, given that my first book had come out a year earlier. I asked it, who wrote Freedom to Think, and the first person it came up with was a male biologist. I asked it again and again and it came up with 20 different names, all of them men except for one. It was as if, for ChatGPT, the idea that a woman would have written a book about thought was absolutely unthinkable.

What’s your opinion on the “fair use” defence that chatbot companies are using to defend hoovering up words and images to feed their AI?
I’m not an American copyright lawyer so I don’t really have expertise on that, but I think it’s going to be very interesting to see how cases pan out in different jurisdictions. The US has a very different approach to almost anywhere in the world on questions of freedom of expression and how that’s been used to support developments in the tech industry. Regardless of the legality of the “fair use” defence, it raises enormous questions for the future of human creativity, journalism and the information space. And underlying that is the basic problem of the massive decrease in pay for creators – the general trajectory has been towards depriving creators of economic incentives.

If everything you propose in the book in terms of regulation came to pass, isn’t there a danger that it would stifle innovation?
That is a bit of a straw man, the idea that regulation stifles innovation. What regulation does is make innovation develop in a certain direction and shut off directions that would be extremely harmful. In fact, I think there is the opposite risk, that if you allow AI to dominate in ways that undermine our ability to think for ourselves, to claim back our attention, we will lose the capacity to innovate.

You point out that AI isn’t just some benign cloud floating above our heads. It’s based on material extraction and the exploitation of workers, mainly in the global south, and it’s incredibly polluting to run. But so much of this is hidden from view. How do we go about tackling these impacts?
It is a huge question. One way of dealing with it is by looking at the question of AI adoption from an ESG [environmental, social and governance] perspective. All of the equipment that we use, the phones that we’re talking on now, are built from minerals often taken from conflict regions, including with child labour. Being aware of that hopefully can help shift societal demands and consumer habits. You can use generative AI to make a hilarious meme, but how much water and energy are you expending? Couldn’t you just pick up a pencil, and might that actually be more satisfying?

Do you sometimes wish that AI could be put back on the shelf?
It’s not an all-or-nothing choice between banning AI and embracing it in every aspect of your life. It’s a question of choosing what we want to use AI for. Being critical and asking questions doesn’t mean that you’re against AI: it just means you’re against AI hype.

  • Human Rights, Robot Wrongs by Susie Alegre is published by Atlantic Books (£12.99). To support the Guardian and Observer order your copy at guardianbookshop.com. Delivery charges may apply
