The European Union Passed Its Artificial Intelligence Bill. Will It Enforce It?
Why is the European Union like Warren G?
They both have to regulate!
These days, that can be tough going.
In March 2024, the European Parliament passed the Artificial Intelligence Act, the first serious law to regulate AI. Procedurally, this represented an enormous accomplishment — it’s not easy to pass legislation at the speed of innovation. Practically, like most legislation, the act represented a compromise, applying different levels of regulatory scrutiny to applications with different levels of risk. Broadly, it fell far short of perfect, but it was much better than the proposed 10-year moratorium on enforcing state laws affecting AI in the American “One Big Beautiful Bill Act.” (That act passed; the moratorium provision was eliminated. I am still unable to say the name of this law without giggling.)
Now all creators and rightsholders have to do is lobby the EU to enforce it — to “Stay True to the [AI] Act,” to quote the campaign that the recording business trade association IFPI and other rightsholder groups unveiled on July 15, complete with the inevitable but slightly ungainly #StayTrueToTheAct hashtag. At a time when technology companies already seem more powerful than governments, this is not a good sign.
The asks involved are simple and familiar: transparency, consent and remuneration. The latter two are at the heart of copyright — creators and rightsholders want to negotiate (consent) so they can get paid fairly (remuneration). But both could be difficult, if not impossible, without transparency. And now we get to the heart of the matter.
In Brussels, it sometimes seems like the U.S. technology business is the Wild West and the European Union is trying to bring a modicum of order to a lawless frontier. There’s more than a little truth to this. “Regulators, mount up!” as the Warren G song puts it, sampling dialogue from the 1988 Western Young Guns — the startups are out of control again. Indeed, U.S. technology companies like to “move fast and break things” — to beg forgiveness rather than ask permission. In the case of AI, it is widely assumed that technology companies have already ingested massive amounts of copyrighted work to make their algorithms function. In some cases, this is pretty obvious, although it has never been officially confirmed.
Before the AI Act passed, European copyright law in some cases allowed text-and-data mining — ingesting copyrighted works for very limited purposes, unless rightsholders opted out. It’s not entirely clear which AI uses this exception covers, though, and most professional creators have a label, publisher or collecting society that has opted out on their behalf. Based on what we know about generative AI music products, it appears that at least some technology companies ingested works controlled by rightsholders that had opted out. (It is unclear when and where the relevant copying was done.) In other words, technology companies have just gone ahead and done what they wanted, never mind the law — business as usual in the U.S. but far less common in Europe. The guiding assumption is that this will eventually be sorted out by litigation or private deals that make generative AI music a legitimate business.
Maybe.
Because using a lawsuit or legal settlement to legitimize the generative AI music business requires compensating rightsholders whose work was copied to train algorithms — which in turn requires knowing what was copied and how it was used. As the generative AI business grows, the hope is that technology companies will be able to determine which works were used the most to create a given output and credit and compensate creators accordingly. That’s why the AI Act requires technology companies to track this, with provisions for transparency. Whatever future one imagines for AI, from making new #sleepwave music to going full fascist, it has to involve enough transparency to know which humans to credit — and, in some cases, blame.
The trouble is that the transparency requirements in the AI Act aren’t very specific. The idea was that companies would follow the spirit of the law, as well as its letter. There are fears this isn’t happening, though, partly because the European Commission, the executive branch of the EU, favors a more lax approach to AI. Perhaps, like other governments, it has concerns about “falling behind” in a technological field that presents serious national security issues. Perhaps there is U.S. pressure to lay off Silicon Valley. Perhaps both.
This isn’t the way to move forward, though. AI technology has the potential to change all sorts of industries, but ingesting albums by Dr. John and Dr. Dre isn’t going to improve medical technology. (And please don’t let medical AI listen to anything by Dr. Octagon!) The EU shouldn’t let the U.S. push it around, especially at a time when technology companies have so much influence in Washington, D.C. More important, the promise of AI is that we can build a digital media business that’s fairer to creators by crediting and compensating them — and loosening the record-keeping requirements would make that much harder.
Yes, the idea of lobbying the European Union to enforce its own law is a bit odd. But it’s also alarming. However many people actually understand the details of the AI Act, it was passed by democratically elected lawmakers. We shouldn’t let it be undermined by technology companies — at least until AI takes over everything, anyway.
Dan Rys
Billboard