Music is at the forefront of AI disruption, but NZ artists still have few protections

First published on The Conversation


Image of the band Velvet Sundown. Photo: Spotify

Analysis: Was the recent Velvet Sundown phenomenon a great music and media hoax, a sign of things to come, or just another example of what's already happening?

In case you missed it, the breakout act was streamed hundreds of thousands of times before claims emerged the band and their music were products of generative artificial intelligence (GenAI).

Despite the "band" insisting they were real, an "associate" later admitted it was indeed an "art hoax" marketing stunt. Much of the subsequent commentary was concerned with fairness - particularly that a "fake" band was succeeding at the expense of "real" artists.

But Velvet Sundown is only the most recent example in a long history of computer generated and assisted music creation - going back to the 1950s when a chemistry professor named Lejaren Hiller debuted a musical composition written by a computer.

By the 1980s, David Cope's Experiments in Musical Intelligence created music so close to the style of Chopin and Bach it fooled classically trained musicians.

Artist and composer Holly Herndon was highlighting the need for ethical use and licensing of voice models and deepfakes several years before Grimes invited others to use AI-generated versions of her voice to make new music, and "Deepfake Drake" alarmed the major record labels. Meanwhile, music companies including Warner, Capitol and rapper-producer Timbaland have since inked record contracts for AI-generated work.

GenAI-powered tools, such as those offered by Izotope, LANDR and Apple, have become commonplace in mixing and mastering since the late 2000s. Machine learning technology also underpins streaming recommendations.

Creativity and copyright

Despite this relatively long history of technology's impact on music, it still tends to be framed as a future challenge. The New Zealand government's Strategy for Artificial Intelligence, released this month, suggests we're at a "pivotal moment" as the AI-powered future approaches.

In June, a draft insight briefing from Manatū Taonga/Ministry for Culture and Heritage explored "how digital technologies may transform the ways New Zealanders create, share and protect stories in 2040 and beyond".

It joins other recent publications by the Australasian Performing Rights Association and New Zealand's Artificial Intelligence Researchers Association, which grapple with the future impacts of AI technologies.

One of the main issues is the use of copyright material to train AI systems. Last year, two AI startups, including the one used by Velvet Sundown, were sued by Sony, Universal and Warner for using unlicensed recordings as part of their training data.

It's possible the models have been trained on recordings by local musicians without their permission, too. But without any requirement for tech firms to disclose their training data, this can't be confirmed.

Even if we did know, the copyright implications for works created by AI in Aotearoa New Zealand aren't clear. And it's not possible for musicians to opt out in any meaningful way.

This goes against the data governance model designed by Te Mana Raraunga/Māori Data Sovereignty Network. Māori writer members of music rights administrator APRA AMCOS have also raised concerns about potential cultural appropriation and misuse due to GenAI.

Recent research suggesting GenAI work displaces human output in creative industries is particularly worrying for local musicians who already struggle for visibility. But it's not an isolated phenomenon.

In Australia, GenAI has reportedly been used to impersonate successful, emerging and dead artists. And French streaming service Deezer claims up to 20,000 GenAI-created tracks are uploaded to its service daily.

Regulation in the real world

There has been increased scrutiny of streaming fraud, including a world-first criminal case brought last year against a musician who used bots to generate millions of streams for tracks created with GenAI.

But on social media, musicians now compete for attention with a flood of "AI slop", with no real prospect of platforms doing anything about it.

More troublingly, New Zealand law has been described as "woefully inadequate" at combating deepfakes and non-consensual intimate imagery that can damage artists' brands and livelihoods.

The government's AI strategy prioritises adoption, innovation and a light-touch approach over these creative and cultural implications. But there is growing consensus internationally that regulatory intervention is warranted.

The European Union has enacted legislation requiring AI services to be transparent about what they have trained their models on, an important first step towards an AI licensing regime for recorded and musical works.

An Australian senate committee has recommended whole-of-economy AI guardrails, including transparency requirements in line with the EU. Denmark has gone even further, with plans to give every citizen copyright of their own facial features, voice and body, including specific protections for performing artists.

It's nearly ten years since the music business was described as the "canary in a coalmine" for other industries and a bellwether of broader cultural and economic shifts. How we address the current challenges presented by AI in music will have far-reaching implications.

Dave Carter is an Associate Professor at the School of Music and Screen Arts, Te Kunenga ki Pūrehuroa - Massey University

Jesse Austin-Stewart is a lecturer at the School of Music and Screen Arts, Te Kunenga ki Pūrehuroa - Massey University

Oli Wilson is a Professor and Associate Dean (Research) at the College of Creative Arts, Te Kunenga ki Pūrehuroa - Massey University

This story was originally published on The Conversation.
