
What the case is

The lawsuit is commonly known as Bartz v. Anthropic PBC, filed in 2024 by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson as a putative class action on behalf of similarly situated copyright holders. The core allegation is that Anthropic copied large numbers of books, including material obtained from pirate sources, to train its Claude AI models without permission or compensation.

What was alleged

The plaintiffs alleged two main forms of infringement: first, that Anthropic downloaded and used pirated digital copies of books to build and train its models; and second, that it retained a large internal library of those works for broader commercial use. Reporting on the case says the allegedly copied material included millions of books, with some accounts describing roughly 7 million copied book files. The plaintiffs also argued that this conduct harmed authors’ ability to earn money from their works and should be treated as unlawful copying rather than acceptable AI training.

How Anthropic defended itself

Anthropic’s main defense was fair use: that training an AI model on copyrighted books is transformative and legally permissible. In a major ruling in June 2025, Judge William Alsup of the Northern District of California reportedly accepted that the training use itself could qualify as fair use, but not all of Anthropic’s conduct was treated the same way. The court distinguished between model training and the way Anthropic allegedly acquired and stored the books, which kept part of the case alive.

What the court found important

The reporting indicates the judge allowed the authors to continue pursuing a class action based on the allegedly pirated copies, even though the training theory was weakened by the fair-use ruling. In other words, the legal fight shifted from “can AI train on books at all?” to “did Anthropic unlawfully obtain and keep pirated books in the first place?” That distinction mattered because it exposed Anthropic to potentially enormous statutory-damages claims if the infringement allegations were proved.

Settlement outcome

In late August 2025, Anthropic reached a preliminary settlement with the authors rather than taking the remaining claims to trial. Reporting in early September described the settlement as about $1.5 billion, making it one of the largest AI copyright settlements discussed publicly. Public settlement materials also describe the case as a class action involving “claimholders” whose books were allegedly used without authorization.

Why it matters

This case became a test of whether training AI on copyrighted books is fair use, and of whether a company’s sourcing practices can independently create massive copyright liability. It also became a major reference point for future AI-copyright disputes because it separated the fair-use analysis of model training from alleged piracy in data acquisition.

In plain English

The short version is: three authors sued Anthropic in a class action over alleged mass copying of books for AI training; the court largely sided with Anthropic on the fairness of training itself, but the piracy allegations remained serious enough that the case ultimately settled for a very large amount.
