News Corp Australia’s use of artificial intelligence to produce thousands of weekly information articles for “a number of years” has blindsided staff, prompting questions of management over the technology’s adoption across the company.
In a letter to management last Friday, the national house committee said staff were alarmed by comments made in June by News Corp Australasia executive chairman Michael Miller in an address to the World News Media Congress in Taipei, in which he said News Corp’s local publishers were using AI to produce more than 3000 hyperlocal articles a week.
The letter, written on behalf of editorial staff and seen by Crikey, called for answers about which other areas of the company are using AI and how, and which further parts of the business management plans to roll the technology out to in the “foreseeable future”. It also asked management to rule out job cuts as a result of the technology’s adoption.
“The use of AI in News Corp should be a discussion all journalists in this organisation are included in, not something revealed by News Corp at the World News Media Congress in Taipei,” the letter reads.
“New technologies have a place in supporting good, accessible journalism, but it is crucial that implementation processes are transparent, ethical and done in consultation with journalists and readers.”
Some editorial staff at News Corp Australia were unaware the technology was being used until Guardian Australia highlighted Miller’s remarks in Taipei last week.
At the conference, attended by a grab-bag of global media executives, Miller said News Corp’s Australian business was using AI to supplement its local journalism, which accounts for more than 55% of the local subscription base, by automating the production of stories related to weather, fuel prices and traffic conditions.
According to News Corp Australia, using AI to deliver that kind of information under the oversight of journalists is nothing new. Management has been given until August 11 to respond to the staff letter.
“For a number of years we have used a program to automatically generate our service information,” a News Corp Australia spokesman told Crikey.
“Every word published is overseen by working journalists using only trusted and publicly available sources. We are proud of their work delivering this important service journalism to their communities such as updating local fuel prices several times daily as well as court lists, traffic and weather, death and funeral notices.”
The company said ChatGPT “or other similar technologies” are not being used to produce the hyperlocal news items. Its parent company, meanwhile, is reportedly in talks with several other large publishers about forming a coalition to address the impact of AI on the industry.
The talks could see News Corp team up with The New York Times, Politico owner Axel Springer, Vox Media, and Condé Nast parent company Advance to fight a perceived existential threat posed by AI to the news business. Sources quoted by The Wall Street Journal, however, said it’s not a sure thing.
Among the leading threats eyed by executives is a potential collapse in search traffic as a result of AI-driven search engine redesigns, such as the coming integration of generative AI into Google Search. The redesign promises to surface snippets of information in direct response to user queries, in addition to the customary pages of links, which could carve a substantial hole in the referral traffic publishers rely on.
News leaders, including News Corp chief executive Robert Thomson, are also concerned that their companies aren’t being fairly compensated for the way their content is used to train artificial intelligence models, including ChatGPT. (OpenAI, the company responsible for ChatGPT, recently released instructions for site owners interested in preventing its AI engine from crawling their sites.)
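For publishers who want to follow those instructions, the mechanism is a standard robots.txt rule naming OpenAI’s published GPTBot crawler. A minimal sketch, assuming the file sits at the site root and that blocking the crawler entirely is the goal, looks like this:

User-agent: GPTBot
Disallow: /

Sites that want to block only part of their content can narrow the Disallow path rather than excluding the crawler from everything.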
News Corp Australia isn’t alone in openly experimenting with AI, but it appears to be the largest publisher leaning on the technology for a significant portion of its publishing output.
In May, Financial Times editor Roula Khalaf told readers the newspaper would “explore the use of AI-augmented visuals” — including infographics, diagrams and photos — but ruled out using it for reporting and writing journalism, which she said would continue to be carried out by humans.
The following month, The Guardian’s editor-in-chief Katharine Viner and chief executive Anna Bateson said the publisher would use generative AI editorially “only where it contributes to the creation and distribution of original journalism”, adding that readers would be alerted when a story includes AI-generated content.
The Media, Entertainment and Arts Alliance (MEAA), which represents many of the workers at News Corp Australia, was caught off-guard by Miller’s remarks in Taipei and called for “caution and consultation” in the adoption of AI across Australian newsrooms.
When it released its draft AI position paper in March, the MEAA said media professionals “must have a say” in publishers’ decisions to integrate the technology into newsroom workflows. Federal president Karen Percy said at the time that “a balance must be struck” between making the most of AI’s promise and acknowledging the “unique threats” it poses to the public’s trust in news.
“Responsibly designed, AI has the potential to usefully supplement, extend and enhance our work, but it also has far-reaching consequences that need careful consideration, consultation and regulation,” Percy said.
“AI can provide efficiencies and new opportunities for storytelling techniques, but it also has the potential for errors and ethical breaches, reduced editorial independence and control and reduced job opportunities for media workers.”
Does Australia need greater safeguards against the rise of AI? Will we see a fundamental shift in the media? Let us know your thoughts by writing to letters@crikey.com.au. Please include your full name to be considered for publication. We reserve the right to edit for length and clarity.