In 2025, two U.S. court decisions, Kadrey v. Meta and Bartz v. Anthropic, provided the first real judicial answers to a pressing question: can using copyrighted works to train large language models (LLMs) amount to fair use?
Judges Vince Chhabria and William Alsup, presiding over the Meta and Anthropic cases respectively, both found that using copyrighted works to train LLMs constitutes a transformative use. In legal terms, that means the works are not being copied for their original expressive purpose but are instead converted into raw data that allows an AI system to learn patterns of language.
Key takeaway from the U.S. cases
These cases indicate that the U.S. courts are willing to view LLM training as fair use, provided two conditions are met:
- the use is genuinely transformative, and
- the training does not displace the market for the original works.
In Meta, the absence of measurable harm to book sales or licensing opportunities proved decisive. Judge Chhabria left the door open for future plaintiffs with better evidence of market impact, but the principle that transformative training can fall under fair use was clear.
Anthropic reached a similar conclusion but highlighted risks around how training data is collected and stored. While training itself was seen as transformative, retaining copies of works for other purposes created potential liability. Together, the two rulings suggest that U.S. developers have room to train AI models on copyrighted works, but only if market harm is minimal and data practices are carefully managed.
Why this matters for Australia
Australia takes a very different approach. Unlike the U.S., it does not recognise a broad fair use exception. Instead, the Copyright Act 1968 (Cth) provides a limited set of purpose-specific fair dealing exceptions, which apply only in defined contexts such as research or study, criticism or review, parody or satire, news reporting, and a few other narrowly prescribed uses.
Training an AI model on copyrighted books without permission would almost certainly fall outside these exceptions. Even if the training were transformative in the U.S. sense, an Australian court would likely treat it as infringement. That makes the risk profile for local developers far higher.
Policy debate in Australia
The Productivity Commission has recognised this challenge and, in its 2025 Interim Report: Harnessing Data & Digital Technology, proposed introducing a text-and-data mining (TDM) exception. Such a reform would align Australia more closely with the U.S. model by permitting the use of copyrighted material for AI training without requiring a licence.
The proposal has already sparked strong cultural and industry backlash. Authors, music bodies, and creative organisations, including the Australian Society of Authors, the Australian Recording Industry Association, and the Australasian Performing Right Association and Australasian Mechanical Copyright Owners Society (APRA AMCOS), have stressed that an open-ended TDM exception would legitimise uncompensated use of Australian creative works.
While the Commission’s proposal is consultative in nature, it signals that Australia is actively reconsidering whether the existing fair-dealing regime is fit for an AI economy.
Lessons for developers
- Do not assume Australian courts will follow the U.S. rulings: our copyright framework is significantly narrower and offers far less latitude for unlicensed use.
- Market impact remains a critical factor. Even in the U.S., fair use determinations will likely turn on evidence of market harm, and this is where future litigation is expected to focus.
- In Australia, the prudent course of action is to obtain licences, document data sources, and stay engaged in the policy debate that will shape future reforms.
Conclusion
The Meta and Anthropic rulings show that U.S. judges are open to treating LLM training as transformative and, in the right circumstances, protected by fair use. However, Australia’s stricter framework and strong cultural resistance mean that unlicensed training here remains high risk. The recent Productivity Commission report suggests Australia may be starting to question whether its current fair dealing rules are suitable for the age of AI. Unless and until the law changes, developers in Australia should assume that licences are required and that U.S.-style fair use arguments are unlikely to succeed in this jurisdiction.