Attack of the Clones: AI Soundalike Tools Spin Complex Web of Legal Questions for Music (Guest Column)

As creators and rightsholders think about how to respond to the threats and opportunities presented by generative artificial intelligence, recordings made with the technology are already causing a stir. The biggest one yet: “Heart On My Sleeve,” a recording by the creator Ghostwriter featuring what appear to be AI-generated imitations of vocals by Drake and The Weeknd, which amassed over 600,000 Spotify streams before being yanked from the platform.

Since then, tracks that use AI-synthesized voices to imitate famous artists have flooded streaming services and social media, triggering a fierce debate among artists, industry executives and fans alike. Universal Music Group fired back, issuing a statement insisting that platforms have a responsibility to protect artists from such exploitation. While some legal opposition to these tracks concerns how AI tools “learn” from copyrighted works at the “training” stage, our focus is on the issues surrounding the “output” created using these AI tools and the potential protections for artists and labels against soundalike recordings under both US and UK law.

Copyright has long been the main way that artists and labels protect their creative and economic investment in music. But neither US nor UK copyright law protects a performer’s voice, tone or unique singing style. AI soundalike recordings are generally not created using samples or snippets of existing recordings but instead are generated independently with AI tools that have learned to reconstruct a particular voice. Thus, owning the copyright to the original sound recording is of little help since the newly recorded material is not a direct copy.   

For the same reason, UK performers’ rights, which protect the exploitation of artists’ performances, are unhelpful since the most relevant restricted act is making a copy of the recording of a performance, rather than the performance itself. Since AI systems usually do not copy parts of an input recording to generate the content they output, the performer’s right is not implicated. 

Since copyright falls short, can artists rely on rights of publicity, which safeguard a celebrity’s name, image, likeness and sometimes voice, and other unique personal attributes from unauthorized commercial use? 

In the UK, there is no codified law of publicity rights. Instead, a patchwork of statutes, common law, intellectual property rights and privacy protections must be considered to protect a person’s “image.”

Most relevant to the unauthorized emulation of an artist’s voice using AI technology is the tort of passing off. Unlike privacy rights, passing off is principally concerned with protecting the commercial value of an individual’s reputation, and it is probably the closest thing UK law has to a traditional right of publicity.

Passing off is notoriously difficult to prove. An artist would need to establish the requisite goodwill and demonstrate both that the unauthorized synthesized performance amounts to a misrepresentation and that the misrepresentation will cause the artist damage. Arguably, this would be hard to prove if the creator of a new recording made clear that it was not the work of a given artist but rather an AI performance, since then there would be no misrepresentation. Could this be the start of a string of “Not [artist]” releases? And while passing off may theoretically protect popular artists with particularly distinctive voices, lesser-known artists would find it much more difficult to demonstrate that a sufficient degree of notoriety or “goodwill” attaches to their voice.

In the United States, there is no federal law governing rights of publicity. Instead, a patchwork of state legislation and common law makes for a blurry legal landscape, with many states having underdeveloped laws on the issue. Until 1988, it seemed clear that a mere vocal imitation did not infringe a celebrity’s right of publicity. That year, however, in a landmark case, the Court of Appeals for the Ninth Circuit held that Ford Motor Company misappropriated singer Bette Midler’s distinctive voice when it hired one of her former backup singers to imitate her performance of a song for use in a TV commercial. The court rejected Midler’s claim under California’s right of publicity statute, California Civil Code §3344, holding that the statute protects only against the misappropriation of one’s actual voice (as opposed to an imitation), but it allowed Midler to maintain a claim under common law. Four years later, in Waits v. Frito-Lay, Inc., the Ninth Circuit confirmed that “when voice is a sufficient indicia of a celebrity’s identity, the right of publicity protects against its imitation for commercial purposes without the celebrity’s consent,” and clarified the common law rule that for a voice to be misappropriated, it must be (1) distinctive, (2) widely known and (3) deliberately imitated for commercial use.

While these decisions are potentially helpful for providing a framework to fight AI-powered soundalikes, major questions remain. Will artists be able to sue under California Civil Code §3344 for synthetic voice usage in cases where the AI was trained on their recordings, allowing for the recovery of attorneys’ fees, or will they be limited to common law claims without fee recovery? Likewise, does the “widely known” requirement enable producers and DJs to clone unknown singers’ voices and use them in tracks without fear of liability? 

Notably, the First Circuit and New York courts had at one point rejected extending New York’s statutory right of publicity law to cover soundalikes. But “voice” has since been added to New York’s private cause of action for a violation of the right of publicity, although it was not added to the criminal arm of the statute.

Adding to the complexity, the scope, duration and availability of post-mortem rights of publicity differ significantly from those afforded to living individuals. Depending on an artist’s domicile at the time of death, the artist may not possess any post-mortem rights of publicity at all, in which case the estate would lack the authority to prevent AI-generated imitations of the artist’s voice from being used in a commercial context.

The Lanham Act, a federal law most commonly applied in connection with trademarks, may also prove useful in protecting against soundalike artists using a famous artist’s voice in their work. One of the principal aims of the act is to protect against unfair competition among commercial parties, and Section 43(a) prohibits the use of any symbol or device that is likely to deceive consumers about the association, sponsorship or approval of goods or services by another person. Applying the act to a voice misappropriation case for the first time, the Waits court adopted an expansive view of the type of protectable symbols or devices that can underpin a false endorsement claim, ruling that the unauthorized imitation of an entertainer’s unique voice is actionable even if no trademark was infringed.

The Lanham Act’s applicability to AI soundalikes hinges on whether an imitation is likely to mislead consumers about the original artist’s association, sponsorship or approval of the new work. If the AI-generated voice causes confusion, the act could be used to protect artists’ rights. However, liability might be avoided if AI soundalike artists state explicitly in their recordings, titles or marketing materials that the tracks are not by the artist whose voice they borrow.

Should a false endorsement claim prove successful under the Lanham Act, remedies could include injunctions, actual damages, the defendant’s profits attributable to the violation, costs of the action and, in exceptional cases, recovery of attorneys’ fees.

While most voice misappropriation cases involve soundalikes in a purely commercial context — i.e. to sell products — it’s unclear whether courts will extend rights of publicity and Lanham Act claims to the use of soundalikes in original music, which, as a form of creative expression, would receive stronger First Amendment protections than pure commercial speech. 

Many scholars have also suggested that the Copyright Act should preempt rights of publicity and Lanham Act claims altogether when the allegedly infringing material is expressly authorized under the act; Section 114 explicitly permits “soundalike” recordings. The Ninth Circuit, however, has so far declined to adopt this position.

Perhaps most frustratingly, labels may not have standing to maintain a cause of action based on the misappropriation of their artists’ voices, since the exclusive grants of rights that artists make to labels typically do not include exclusivity over their rights of publicity (only over their recorded performances). Artists will likely need to initiate these types of suits themselves.

This divergence in rights between artist and label may also leave labels exposed. Can artists circumvent their exclusive recording agreements or re-record restrictions by releasing or licensing new versions of their existing recordings created by third parties using AI soundalike tools? Expect to see new clauses in recording agreements prohibiting artists from authorizing soundalikes, though such clauses would not apply to catalog recordings.

The rise of AI-generated soundalikes brings with it a range of legal, ethical and practical challenges, reminiscent of the issues around synch rights and short-form user-generated content platforms. With the genie now out of the bottle, will the music industry embark on another endless game of Whac-A-Mole, filing takedown requests, pressuring digital service providers for new restrictions in licensing agreements and targeting providers of AI systems, or will it find a way to embrace the new technology? If the latter, we may see the emergence of voice recognition technology as a new form of content ID, allowing artists to monetize infringing performances.

One thing we know for sure: we are only beginning to see the potential for this technology to disrupt the music industry.

Nick Breen and Josh Love are both partners in Reed Smith’s Entertainment & Media Industry Group, focusing on the intersections of music, digital media, and emerging technologies, including generative AI. Breen leverages his expertise in regulatory, commercial, and copyright matters pertaining to music licensing, digital assets, NFTs, and video games, while Love has extensive experience representing artists and songwriters, handling high-value music catalog transactions, and navigating the intricacies of copyright law and the complex music rights and royalty landscape.
