A federal judge has granted preliminary approval to a settlement requiring Anthropic to pay $1.5 billion over claims that it trained its large language model on books downloaded from piracy sites.
U.S. District Court Judge William Alsup in the Northern District of California has not yet issued a written order spelling out his reasons for
allowing the deal to move forward.
The proposed settlement is expected to result in payments of $3,000 per book to authors whose work was allegedly unlawfully downloaded.
If finalized, the settlement will bring an end to a lawsuit dating to last August, when the authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson alleged in a class-action
complaint that Anthropic built its business on "large-scale copyright theft."
The artificial intelligence company allegedly digitized books it had purchased, and also downloaded
digital copies of books from piracy sites, in order to train its large language model.
Anthropic previously argued to Alsup that copying books for training purposes is
protected by fair use principles -- regardless of whether the books were purchased legally or downloaded from piracy sites.
Alsup only partially agreed.
He said in a landmark ruling issued in June that Anthropic did not infringe copyright by
digitizing books it had purchased, and then using them to train the chatbot Claude.
"The use of the books at issue to train Claude and its precursors was exceedingly
transformative and was a fair use," Alsup wrote.
But he also said Anthropic was not entitled to claim fair use regarding allegations that it downloaded millions of books from two
online libraries of pirated material -- LibGen and PiLiMi.
Initially, Anthropic sought to appeal that ruling, noting that a different judge in the same federal district --
Vince Chhabria -- largely sided with Meta in a
similar lawsuit. Chhabria said in that matter that Meta did not infringe copyright by downloading pirated books and using the content to train the large language model Llama. (That matter is still
pending before Chhabria.)
Last month, Anthropic halted its attempt to appeal and said it had agreed to settle the case.