Anthropic Claims Fair Use in Push to End Music Publishers Lawsuit




Does training generative AI models on protected lyrics without permission constitute fair use? After years of debate – and a couple of court rulings – the question is taking center stage in Anthropic’s new push for summary judgment in a copyright lawsuit filed by Concord and others.

On the heels of a $25 billion investment, the Claude developer recently made its own summary judgment motion official. Technically, the plaintiff publishers, among them ABKCO, Universal Music, and Concord, are likewise seeking summary judgment on certain counts.

Keeping the focus on Anthropic’s new filing, the motion clocks in at a healthy 41 pages but, unfortunately for us, contains significant redactions in its public version. Nevertheless, the above-mentioned question – which is, of course, factoring into all manner of AI copyright suits – is front and center.

“Anthropic’s use of lyrics to Publishers’ 499 songs (along with billions of other copyrighted works) to train Claude is ‘transformative,’ fair use,” the document begins, proceeding to invoke the court’s findings in a copyright suit levied against Anthropic by authors.

That same suit resulted in a huge settlement for the plaintiffs – albeit because of Anthropic’s alleged book piracy as opposed to its training Claude on protected materials sans permission. As we reported last year, the presiding judge expressed the belief that this process fell squarely under the fair use umbrella.

(Another case, filed against Meta by authors and ultimately dismissed, is also heavily cited in Anthropic’s motion. However, the judge drove home that “this ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful.” An amended suit is underway.)

Now, Anthropic is looking to build on the finding by framing the purported fair use training as both authorized and encouraged under the Copyright Act.

“Claude, in short, is the kind of ‘new idea[]’ that the Copyright Act not only allows, but ‘encourages’ as a ‘necessary counterbalance’ to prevent ‘stifl[ing] the very creativity which that law is designed to foster,’” the text reads.

“Allowing copyright holders to veto such a transformative technology would ignore the Supreme Court’s mandate that the primary objective of copyright is not to reward the author, but to serve the public,” according to Anthropic.

Unsurprisingly, then, Anthropic is also presenting its “transformative technology” – referring to Claude’s role in accelerating clinical trials, being used “to further conservation,” and more – as an all-or-nothing breakthrough that couldn’t exist without free-for-all training.

“A holding that LLM training on copyrighted material is not fair use would threaten these applications and a galaxy of others,” the filing spells out.

As highlighted, this case will directly impact the dozens of different complaints submitted by rightsholders against AI giants – especially if Anthropic emerges victorious.

But a few counter-arguments are unique to the music publishers’ claims; the defendant is emphasizing the availability of song lyrics on the internet, the publishers’ alleged failure to demonstrate revenue-related harm, and Claude’s guardrails against reproducing lyrics verbatim.

“In the sample of 5 million prompts and outputs produced during discovery from a six-month period pre- and post-suit, over 83% of the prompts that resulted in reproduction of lyrics to the works in suit were generated by Publishers themselves or their agents and often in an attempt to circumvent Claude’s guardrails or simply trick them,” another section states.

Furthermore, “[i]t makes no sense for a user to pay Anthropic for a Claude account and then work to jailbreak its guardrails to obtain song lyrics that are available for free on the internet.”

(That Claude can and does pump out “original” song lyrics based on its training data – meaning human-penned lyrics – doesn’t receive quite as much attention in the motion. However, Anthropic believes that “competition from people using Claude to create original song lyrics is legally irrelevant.”)

Running with that idea, Anthropic contends that holding it “directly liable for a third party’s efforts to circumvent copyright law would create perverse incentives for content owners to manufacture infringement (as Publishers’ agents repeatedly did here).”

Speaking of service provider liability for users’ alleged infringement, it probably won’t come as a shock that the publishers have put their contributory and vicarious claims to rest in the wake of Cox v. Sony.

(“On the evening before the filing deadline for this brief,” reads Anthropic’s account of the partial dismissal, “Publishers informed Anthropic that after two and a half years of litigation, they ‘do not intend to proceed’ on their claims for contributory and vicarious infringement.”)

Finally, regarding Anthropic’s alleged removal of copyright management information (CMI) in violation of the DMCA, the fair use defense is once again the central focus.

“Publishers do not even try to show that Anthropic removed CMI, or distributed works that had been stripped of CMI, for any purpose other than fair use,” according to Anthropic. “Under Publishers’ own theory, Anthropic removed CMI from their lyrics to train Claude. But Anthropic’s copying of Publishers’ purported lyrics to train Claude is fair use. That forecloses DMCA liability here.”

With that, all eyes are on the case’s scheduled July 15th summary judgment hearing – and, in the bigger picture, exactly how much mileage Anthropic gets out of its fair use defense.

A closing question: If training AI models on protected materials is actually fair use, what does that mean for developers that ink licensing deals?

Said question has become a big part of the majors’ (less Warner Music) copyright action against Suno. Following Warner Music’s settlement and licensing pact with the AI music generator, Universal Music and Sony Music are pushing to obtain a copy of the agreement via discovery.

“For nearly two years,” they vented in a recent filing opposing the magistrate judge’s rejection of the discovery request, “Suno has supported its fair use defense in this litigation by arguing that no viable market exists for licensing sound recordings as training data for generative AI models.

“Yet, in November 2025, Suno entered into the very type of agreement it claims is not feasible—a licensing deal with Warner Music Group to use copyrighted sound recordings as training data for new generative AI models,” Sony Music and Universal Music continued.
