Will AI Be Used to Raise Musicians From the Dead?

Earlier this month, 760 stations owned by iHeartMedia simultaneously threw their weight behind a new single: The Beatles’ “Now and Then.” This was surprising, because the group broke up in 1970 and two of the members are dead. “Now and Then” began decades ago as a home recording by John Lennon; more recently, AI-powered audio technology allowed for the separation of the demo’s audio components — isolating the voice and the piano — which in turn enabled the living Beatles to construct a whole track around them and roll it out to great fanfare.

“For three days, if you were a follower of popular culture, all you heard about was The Beatles,” says Arron Saxe, who represents several estates, including Otis Redding’s and Bill Withers’s. “And that’s great for the business of the estate of John Lennon and the estate of George Harrison and the current status of the two living legends.”

For many people, 2023 has been the year that artificial intelligence technology left the realm of science fiction and crashed rudely into daily life. And while AI-powered tools have the potential to impact wide swathes of the music industry, they are especially intriguing for those who manage estates or the catalogs of dead artists. 

That’s because there are inherent constraints involved with this work: No one is around to make new stuff. But as AI models get better, they have the capacity to knit old materials together into something that can credibly pass as new — a reproduction of a star’s voice, for example. “As AI develops, it may impact the value of an estate, depending on what assets are already in the estate and can be worked with,” says Natalia Nataskin, chief content officer for Primary Wave, who estimates that she and her team probably spend around 25% of their time per week mulling AI (time she says they used to spend contemplating possibilities for NFTs).

And a crucial part of an estate manager’s job, Saxe notes, is “looking for opportunities to earn revenue.” “Especially with my clients who aren’t here,” he adds, “you’re trying to figure out, how do you keep it going forward?”

The answer, according to half a dozen executives who work with estates or catalogs of dead artists or songwriters, is “very carefully.” “We say no to 99 percent of opportunities,” Saxe says. 

“You have this legacy that is very valuable, and once you start screwing with it, you open yourself up to causing some real damage,” adds Jeff Jampol, who handles the estates of The Doors, Janis Joplin and more. “Every time you’re going to do something, you have to be really protective. It’s hard to be on the bleeding edge.”

To work through these complicated issues, WME went so far as to establish an AI Task Force where agents from every division educate themselves on different platforms and tools to “get a sense for what is out there and where there are advantages to bring to our clients,” says Chris Jacquemin, the company’s head of digital strategy. The task force also works with WME’s legal department to gain “some clarity around the types of protections we need to be thinking about,” he continues, as well as with the agency’s legislative division in Washington, D.C.

At the moment, Jampol sees two potentially intriguing uses of AI in his work. “It would be very interesting to have, for instance, Jim Morrison narrate his own documentary,” he explains. He could also imagine using an AI voice model to read Morrison’s unrecorded poetry. (The Doors singer did record some poems during his lifetime, suggesting he was comfortable with this activity.) 

On Nov. 15, Warner Music Group announced a potentially similar initiative, partnering with the estate of the French great Edith Piaf to create a voice model — based on the singer’s old interviews — that will narrate the animated film Edith. The executors of Piaf’s estate, Catherine Glavas and Christie Laume, said in a statement that “it’s been a special and touching experience to be able to hear Edith’s voice once again — the technology has made it feel like we were back in the room with her.”

The use of AI tech to recreate a star’s speaking voice is “easier” than attempting to put together an AI model that will replicate a star singing, according to Nataskin. “We can train a model on only the assets that we own — on the speaking voice from film clips, for example,” she explains. 
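(For a sense of how low the technical barrier to this has become, here is a rough sketch of zero-shot speaking-voice cloning using the open-source Coqui TTS library and its XTTS v2 model — a stand-in for whatever proprietary tools an estate might actually license, not the systems described by the executives in this story. The reference clip and file names are hypothetical.)

```python
# A hedged sketch of speaking-voice cloning with the open-source Coqui TTS
# library (XTTS v2). This is illustrative only — not the estate tooling
# discussed in this article — and the audio files named here are hypothetical.
from TTS.api import TTS

# Load a multilingual text-to-speech model that supports voice cloning
# from a short reference recording.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Condition on an estate-owned interview clip and synthesize new narration
# in that speaking voice.
tts.tts_to_file(
    text="A line of documentary narration, read in the cloned speaking voice.",
    speaker_wav="owned_interview_clip.wav",
    language="en",
    file_path="narration.wav",
)
```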

In contrast, to train an AI model to sing like a star of old, the model needs to ingest a number of the artist’s recordings. That requires the consent of other rights holders — the owners of those recordings, which may or may not be the estate, as well as anyone involved in their composition. Many who spoke to Billboard for this story said they were leery of AI making new songs in the name of bygone legends. “To take a new creation and say that it came from someone who isn’t around to approve it, that seems to me like quite a stretch,” says Mary Megan Peer, CEO of the publisher peermusic. 

Outside the United States, however, the appetite for this kind of experimentation may differ. Roughly a year ago, the Chinese company Tencent Music Entertainment told analysts that it had used AI-powered technology to create new vocal tracks in the voices of dead singers, one of which went on to earn more than 100 million streams.

For now, at least, Nataskin characterized Primary Wave as focused on “enhancing” with AI tech, “rather than creating something from scratch.” And after Paul McCartney initially mentioned that artificial intelligence played a role in “Now and Then,” he quickly clarified on X that “nothing has been artificially or synthetically created,” suggesting there is still some stigma around the use of AI to generate new vocals from dead icons. The tech just “cleaned up some existing recordings,” McCartney noted.

This kind of AI use for “enhancing” and “cleaning up,” tweaking and adjusting has already been happening regularly for several years. “For all of the industry freakout about AI, there’s actually all these ways that it’s already operating every day on behalf of artists or labels that isn’t controversial,” says Jessica Powell, co-founder and CEO of Audioshake, a company that uses AI-powered technology for stem separation. “It can be pretty transformational to be able to open up back catalog for new uses.”

The publishing company peermusic used AI-powered stem separation to create instrumentals for two tracks in its catalog — Gaby Moreno’s “Fronteras” and Rafael Solano’s “Por Amor” — which could then be placed in ads for Oreo and Don Julio, respectively. Much as the Beatles did, Łukasz Wojciechowski, co-founder of Astigmatic Records, used stem separation to isolate, and then remove distortion from, the trumpet part on a previously unreleased recording he found of jazz musician Tomasz Stanko. After the cleanup, the music could be released for the first time. “I’m seeing a lot of instances with older music where the quality is really poor, and you can restore it,” Wojciechowski says.
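(Stem separation of this kind is widely accessible outside commercial services, too. Below is a minimal sketch using the open-source Demucs separator — not Audioshake’s own tooling, and with a hypothetical file name — that splits a recording into vocals, drums, bass and other, or into a vocal/instrumental pair like the ones peermusic licensed.)

```python
# A minimal sketch of AI stem separation with the open-source Demucs model,
# shown as an illustration only; the input file name is hypothetical.
import subprocess

# Split an archival recording into four stems (vocals, drums, bass, other).
# Demucs writes the results under ./separated/htdemucs/archival_take/.
subprocess.run(
    ["demucs", "-n", "htdemucs", "archival_take.wav"],
    check=True,
)

# Or keep only a vocals / no_vocals split — e.g. to produce an ad-ready
# instrumental from a finished master.
subprocess.run(
    ["demucs", "--two-stems=vocals", "archival_take.wav"],
    check=True,
)
```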

Powell acknowledges that these uses are “not a wild proposition like, ‘create a new voice for artist X!’” Those have been few and far between — at least the authorized ones. (Hip-hop fans have been using AI-powered technology to turn snippets of rap leaks from artists like Juice WRLD, who died in 2019, into “finished” songs.) For now, Saxe believes “there hasn’t been that thing where people can look at it and go, ‘They nailed that use of it.’ We haven’t had that breakout commercial popular culture moment.”

It’s still early, though. “Where we go with things like Peter Tosh or Waylon Jennings or Eartha Kitt, we haven’t decided yet,” says Phil Sandhaus, head of WME’s Legends division. “Do we want to use voice cloning technologies out there to create new works and have Eartha Kitt in her unique voice sing a brand new song she’s never sung before? Who knows? Every family, every estate is different.”

Additional reporting by Melinda Newman

Elias Leight