The hefty topic Laura Ellis addressed in AUT's 2025 Technology in Society series. Photo: Colin Peacock
Skinny Mobile has new ads mocked up like breaking TV news coverage, featuring animated character Jo who says she's such a satisfied customer she allowed the company to use an AI clone of herself to endorse it.
The cost-cutting use of AI fits the no-frills, low-cost telco's brand. But surveys show the public is sceptical about machine-made journalism - and many people don't trust it.
And it's happening anyway.
In the US a TV channel is currently using tech to bend the rules that exclude cameras from criminal proceedings. Law and Crime is airing AI recreations of the Sean 'Diddy' Combs trial on its YouTube channel.
It's based on court transcripts, and technically it's a true and accurate record of what happened in court. But it's also clearly synthetic.
Last year Mediawatch revealed the Weekend Herald had used AI to write some editorials. Publisher NZME conceded there had been insufficient human oversight, contrary to its policy.
The editorial policy at rival publisher Stuff was that any use of AI for news or images would be declared to readers. But in February, that requirement was quietly dropped from the policy.
But use of AI also gives news consumers options. Some news websites offer an AI-driven option to listen to the text of an article. Usually that means a halting, robot-like North American voice will parrot the text back to you out loud, often with quite a few errors.
But on the New Zealand Herald website, the 'listen' option on some content gives you a much better result, featuring the synthesised voice of journalist and presenter Susie Nordqvist.
And unlike the off-the-shelf AI audio tools usually used for this, it can handle te reo Māori terms and pronunciation.
AI gets OK from BBC boss
This week the top boss of the BBC said cutting-edge AI technology will be blended with BBC journalism for "a healthy core of fact-based news" in future.
In a lengthy speech to staff and media leaders he said the UK's democratic future is at risk from social media platforms and disinformation fuelling a "trust crisis".
Tim Davie said the BBC could help reverse this and be "a unifier in the online, AI age".
(Observers also saw it as the start of a political push to renew the BBC's public funding).
The BBC is already using AI for news to boost its reach, save money and redirect journalists' time and energy.
The BBC's head of technology forecasting speaking at AUT on 'Will AI save us?' Photo: Colin Peacock
Laura Ellis is the BBC's head of technology forecasting, who is responsible for identifying, understanding and taking advantage of emerging technology for broadcasting.
She was also a founder of Project Origin (https://www.bbc.com/rd/articles/2023-10-media-provenance-watermarks-fingerprints-deepfake), an effort to help audiences determine what they choose to trust.
But while her eye is now on technology for the future, her past was spent in BBC newsrooms when digital technology was first introduced.
She was in New Zealand this week for AUT's Technology in Society series, and spoke about AI making or breaking the media.
She told Mediawatch her experience of deadline-driven news journalism has been a big help.
"In a news environment, you've got to do something the next hour or even the next minute. Tech's moving so fast these days that having the ability to react quickly is a superpower.
"I'm familiar with doing something which might not be perfect but which is going to do the job in the time available. We've had to become much more responsive as a result of the speed of change that we're seeing at the moment."
But haste makes mistakes more likely?
"Part of the trick of all this is that you have to accept that you're not going to know stuff and you're going to have to learn on the job - and really fast. There are new capabilities but we need to make sure that we dovetail them properly with the old capabilities which served us really well and still do.
"People who work in our AI research team and in R&D are assessing how we might use AI tools. We have a 'Responsible AI' team looking at the ethics around them. And also we have obviously a team of lawyers and data specialists who are looking at the terms and conditions... before we get stuck in.
"There is a strict rule that we don't use technology like generative AI without talking to the audiences about it. And there'd have to be a really good reason for it."
That doesn't mean labelling every piece of AI-infused BBC content. But relevant websites and apps have declarations and explanations. The BBC's many specialist magazines also carry regular articles about how AI is used to create the broadcast and online content - and by whom.
And the BBC does use AI in ways you might not expect.
The same but different
"You grant people anonymity sometimes in a TV programme with a big blob or pixelate their face. A deep-fake face-swap is a better way to do it.
"One of the most recent examples was a programme about Alcoholics Anonymous. People wanted to be anonymous, but wanted to have their stories told. One of the interviewees said: 'My face doesn't matter, what I look like doesn't matter. But what matters is the story that I'm telling.'"
Isn't that like using an actor - and could alter the audience response?
"So we were very, very clear in the script as well as putting something on screen. I think we'll gradually get more used to it.
"We have a weather product where the weather is voiced for your area. There are 3000 areas, so it's done by a synthetic voice. It's really hard to declare a synthetic media in audio at the start of every single weather update. It would be really tedious.
"But there's what we call an indirect transparency - where you put something on the app, write blogs about things like synthetic voices and how we do it - to make sure that we give people every chance of understanding.
"If you're using it to generate some news headlines it can be very quick to have AI generate 10 headlines and you choose from them. It's saving a bit of time, but a human is very much in the loop.
"But I think when it's something like generating an entire blog series or podcast, then we'd need to demonstrate that transparency in quite an upfront way."
Ellis said the BBC uses AI to scan decades' worth of its natural history footage, and it can find relevant images human researchers have missed or failed to log.
"We also have cameras running on a lot of nests. If you've got 25 cameras, you've got some poor researcher having to scroll through hours and hours of this.
"We trained AI so it could identify specific animals, and could tell you that there was something going on that was important. And there was some animal behaviour that we were able to then come to some conclusions on, which we might not have done had we not had a machine eye across that."
Last year the 98-year-old David Attenborough said he was "profoundly disturbed" by AI cloning his voice "having spent a lifetime trying to speak what I believe to be the truth".
But the use of CGI and digital enhancement in BBC natural history programmes has been controversial in the past. Use of AI in documentaries is bound to be questioned.
"AI does give us another set of challenges. You could use AI to absolutely beautifully alter an image so that you could see mountains in the background and insects in the foreground - which you couldn't do with a normal camera. How we declare that is a really interesting question," Ellis said.
Bad timing for powerful tech
Ellis says not everything the BBC has done with AI has worked out well.
Apple suspended an AI service in January after the BBC complained it made inaccurate summaries of BBC news headlines. A news alert branded with the corporation's logo said Luigi Mangione, accused of killing the UnitedHealthcare chief executive, Brian Thompson, had shot himself.
"Lots of us have suffered detriment from that sort of thing. If it's put out there looking like it's from us and it's not accurate, it hurts us in so many ways."
Ellis is well aware AI is already amplifying misinformation, and she says it's bad timing that this technology is developing at a time of declining trust in media. But she says AI can also help to identify and counter misinformation - and protect the news people do trust.
While AI advocates say journalists will be freed up by the power of AI, one Korean news anchor took it to a new level recently. AI synthesis allowed her to keep presenting the TV news live even when she was on holiday.
"It's an option. It's an option for you," Ellis told Mediawatch.
RNZ is currently seeking an AI chief. That is something to consider.