“The news is … there is no news.” With those words, delivered outside St Mary’s Hospital in London while awaiting the birth of Prince George in July 2013, my reporting for the BBC went viral on the internet. My somewhat testy response to standing in the street with nothing to say had struck a chord with many. Not for what I was saying but the way I was saying it. The resigned look. The world-weary tone. The slight annoyance that four decades of reporting from around the globe had led to this moment. I couldn’t hide it. Viewers knew what I was thinking and feeling. Some were annoyed by it. Quite a few people appreciated it – because they could relate to it. Because they are human. And so am I.
“Fedha” is not human. Yes, the blond woman with light-coloured eyes, wearing a black jacket and a white T-shirt, looks human. She even sounds human. But this week she was introduced as the first presenter in Kuwait powered by artificial intelligence. “What kind of news do you prefer? Let’s hear your opinions,” she says in Arabic.
The 13-second video has generated a flood of reactions including, unsurprisingly, from television journalists. Self-preservation is not an instinct that Fedha would understand. She would not feel the threat posed by a presenter who could bring the news 24 hours a day, seven days a week, with no breaks, no holidays and no salary.
There’s a lot to commend the prospect to the bosses. AI newsreaders will be a lot less, well, trouble. They’ll skip the mispronunciations, the verbal cock-ups. The dramas over wardrobe malfunctions. The complaints over foundation colours in makeup, the stresses over too-weak hairspray. And that’s just the men. Then there’s the journalism.
To examine the drawbacks that may come with an AI newsreader, I thought I would turn the tables and ask Genie – the chatbot powered by ChatGPT. Genie told me that there are a few potential issues with AI newsreaders. “They may struggle with delivering the news in a way that is engaging and interesting to viewers,” he says.
I get that. It’s a problem that every television journalist faces every day. Formats can be changed, new graphics can be deployed and new studios built. If all else fails, a change of presenter can be introduced. For years I was told that audience research suggested that viewers weren’t as interested in who was presenting the news as people thought. I never believed the argument – particularly as it always seemed to be made during contract negotiations. As far as delivering the news is concerned, of course the face and voice behind it matter. I can think of quite a few newsreaders who, despite their humanity, still struggle to make the news engaging. But that’s a subjective, human view.
My ChatGPT friend also tells me that there are concerns about the potential for AI newsreaders to be used to spread false or misleading information, as they may not be able to determine the accuracy or even the plausibility of the story’s source.
This is, I believe, the biggest threat of all. Not just in the delivery of news, but in its content. AI is already involved in the spread of “fake news” – and that will only get worse. Organisations such as the BBC and Sky (both previous employers) are alive to this and will have to show viewers more of how stories are put together. To reveal more of what goes on “under the bonnet”. At a time when trust in news providers is diminishing, the next few years threaten to be very challenging if that trust is to be regained.
For further reaction, I turned to Twitter. I asked humans what they thought of AI newsreaders. Most seemed opposed to the idea, but a common belief is that newsreaders on any channel tend to reflect the views of the company that employs them; that they read what’s put in front of them. Just as an AI newsreader would.
The final point that AI makes about AI newsreaders is perhaps the most enlightening. “He” points out that one of the main concerns is that they lack the human touch and emotion that human newsreaders bring to the table. This, he goes on, can make it difficult for viewers to connect with the news on a personal level.
What he’s talking about is what I would refer to as “tone”. In a 40-year career in broadcasting I have delivered some of the biggest stories. Whether it be war in Iraq, a terrorist attack or the death of a major figure, it’s not just the words that matter. You need to look right. You need to sound right.
During breaking news stories the presenter is often looking at pictures and reading details in real time. If the details are shocking then the person delivering them can often appear shocked themselves. There’s nothing wrong with this; it’s a perfectly human reaction. And there we are. Back to the main point. An AI newsreader cannot convey a reaction to a breaking story. They may appear totally calm under pressure and be free of the ers and ums – but words delivered robotically will soon lose any sense of gravitas. As will the “person” delivering them.
The same is true of lighter stories. How would an AI newsreader convey their contempt for a story about surfing dogs? Would they raise an eyebrow? Would they curl their lip and scowl? It worked for me. I went viral with that as well. Perhaps AI newsreaders won’t care about viewers’ reactions. As one rather cynical responder told me on Twitter: “The propaganda won’t be any different. I’d miss the chuckles though.”
Simon McCoy is a journalist and broadcaster who presented BBC News for 18 years