Make an AI that is trained on the books.
Tell it to tell you a story for one of the books.
Read the story without paying for it.
The law says this is ok now, right?
No.
The judge accepted the fact that Anthropic prevents users from obtaining the underlying copyrighted text through interaction with its LLM, and that there are safeguards in the software that prevent a user from getting an entire copyrighted work out of that LLM. The ruling discusses the Google Books arrangement, where the books are scanned in their entirety, but where a user searching Google Books can't actually retrieve more than a few snippets from any given book.
Anthropic gets to keep the copy of the entire book. It doesn't get to transmit the contents of that book to someone else, even through the LLM service.
The judge also explicitly stated that if the authors can put together evidence that it is possible for a user to retrieve their entire copyrighted work out of the LLM, they’d have a different case and could sue over it at that time.
Sort of.
If you violated laws in obtaining the book (e.g., stole it or downloaded it without permission), you've already broken the law, no matter what you do after that.
If you obtained the book legally, the first sale doctrine lets you do what you like with that particular copy. If you want to redistribute the book, you need the proper license. You don't need any license to create a derivative work, but that work has to be "sufficiently transformative" to qualify.
The LLM is not repeating the same book. The owner of the LLM has exactly the same rights over what their LLM has read as you have over whatever YOU have read.
As long as it is not a verbatim recitation, it is completely okay.
According to storytelling theory, there are only roughly 15 different story types anyway.
As long as they don't use exactly the same words as in the book, yeah, as I understand it.
How would they not use the same words as in the book? That's not how an LLM works. It uses exactly the same words if the probabilities align. It's proven by this study: https://arxiv.org/abs/2505.12546
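To make the "probabilities align" point concrete, here's a toy sketch (a tiny bigram model over one sentence, not the paper's actual setup): once a model has effectively memorized a passage, every next-token distribution is sharply peaked on the memorized continuation, so greedy decoding replays the text verbatim.

```python
from collections import defaultdict

# Toy bigram "model" trained on a single passage. A large LM that has
# memorized a passage behaves similarly on it: each next-token
# distribution puts almost all its mass on the memorized continuation,
# so picking the most probable token at every step reproduces the
# original text word for word.
passage = ("it was a bright cold day in april "
           "and the clocks were striking thirteen").split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(passage, passage[1:]):
    counts[prev][nxt] += 1

def greedy_continue(prompt_word, steps):
    """Greedily decode `steps` tokens starting from `prompt_word`."""
    out = [prompt_word]
    for _ in range(steps):
        dist = counts[out[-1]]
        if not dist:
            break
        out.append(max(dist, key=dist.get))  # most probable next token
    return " ".join(out)

print(greedy_continue("it", 13))  # replays the memorized passage verbatim
```

The same mechanism is why memorized training text can surface: nothing in the sampling procedure distinguishes "original" continuations from memorized ones; it's probabilities all the way down.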
I’d say there are two issues with it.
First, it's a very new article with only 3 citations. The authors seem like serious researchers, but the paper itself is still in the "hot off the presses" stage and wouldn't qualify as "proven" yet.
Second, it doesn't exactly say that the books are copies. It says that in some models, it's possible to extract some portions of some texts. They cite "1984" and "Harry Potter" as two books that can be extracted almost entirely, under some circumstances. They also find that, in general, extraction rates are below 1%.
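For a sense of what an "extraction rate" measures, here's a hedged sketch of a measurement harness (the `generate` callable and the window/stride parameters are hypothetical, not the paper's exact protocol): slide a window over the book, prompt the model with each prefix, and count what fraction of suffixes come back verbatim.

```python
def extraction_rate(book_tokens, generate,
                    prefix_len=50, suffix_len=50, stride=100):
    """Fraction of sampled windows whose suffix the model reproduces
    verbatim when prompted with the preceding prefix.

    `generate` is a hypothetical callable:
    (prompt_tokens, num_tokens) -> list of generated tokens.
    """
    hits = total = 0
    for start in range(0, len(book_tokens) - prefix_len - suffix_len, stride):
        prefix = book_tokens[start:start + prefix_len]
        suffix = book_tokens[start + prefix_len:start + prefix_len + suffix_len]
        total += 1
        if generate(prefix, suffix_len) == suffix:  # verbatim match only
            hits += 1
    return hits / total if total else 0.0
```

A sub-1% rate under a harness like this would mean almost all prompted windows fail to come back verbatim, which is why the finding cuts both ways in the debate above.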
Yeah, but it's just a start toward reversing the process and proving there's no "AI" in it. We've only just started with generating text; I bet people will figure out how to reverse the process using some sort of Rosetta Stone. It's just probabilities, after all.
That’s possible but it’s not what the authors found.
They spend a fair amount of the conclusion emphasizing how exploratory and ambiguous their findings are. The researchers themselves are very careful to point out that this is not a smoking gun.
Yeah, the authors rely on the recent DeepMind paper https://aclanthology.org/2025.naacl-long.469.pdf (they even cite it) that describes (n, p)-discoverable extraction. These studies are recent because right now there are no boundaries: basically, people made something, and now they're studying their own creation. We're probably years away from something like a GDPR for LLMs.
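For reference, (n, p)-discoverable extraction roughly says: a sequence counts as extractable if prompting the model with its first n tokens yields the remaining tokens with probability at least p. A minimal sketch of that check, assuming a hypothetical `suffix_probability` function (real evaluations derive this from the model's token-level log-probs):

```python
def suffix_probability(prompt_tokens, suffix_tokens):
    # Hypothetical placeholder: in practice you'd sum the model's
    # per-token log-probabilities of `suffix_tokens` conditioned on
    # `prompt_tokens` and exponentiate.
    raise NotImplementedError

def is_np_extractable(tokens, n, p, prob_fn=None):
    """A sequence is (n, p)-discoverably extractable if prompting with
    its first n tokens yields the rest with probability >= p."""
    prob_fn = prob_fn or suffix_probability
    prompt, suffix = tokens[:n], tokens[n:]
    return prob_fn(prompt, suffix) >= p
```

The point of parameterizing by n and p is exactly the "no boundaries" problem: whether a text counts as memorized depends on how much prompt you allow and how lucky a sampler you assume, which is why these definitions had to be invented before anyone could measure anything.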
The “if” is working overtime in your statement