Something shifted in 2025 that cannot be walked back. OpenAI released Sora 2, and within five days, it had been downloaded over a million times. Users immediately started generating videos featuring Pikachu, Mario, Marvel characters, and likenesses of real actors – none of it authorised. The three major Hollywood talent agencies pulled their client rosters from the platform within weeks.
Disney, rather than suing, invested a billion dollars in OpenAI and licensed its characters for use inside Sora. Universal Music Group settled its copyright lawsuit against AI music generator Udio and announced a joint subscription service launching in 2026. Warner Music did the same with both Udio and Suno. Anthropic paid USD 1.5 billion to settle the largest AI copyright class action in history.
The Copyright Collision
The music industry moved fastest because it had the most to lose. When Suno and Udio launched in 2024, they created songs that closely resembled copyrighted recordings – close enough to get the labels' lawyers on the phone. By mid-2024, UMG, Sony, and Warner had filed suit. By late 2025, two of the three labels had settled and signed licensing deals, according to a year-in-review from the Copyright Alliance. The terms of those deals tell you where this is heading:
- UMG and Udio will launch a licensed subscription service in 2026 where users can create, stream, and share music generated from authorised catalogues. Artists opt in – not out – meaning the default position is that your work is not available to the AI unless you say otherwise.
- Warner settled with both Udio and Suno. Suno announced it would phase out its current model entirely and launch a new one in 2026, trained exclusively on licensed material.
- Sony has not settled. Its case against Udio continues, which means the courts will still get to weigh in on the fair-use question even as the commercial side moves toward licensing.
The shift from litigation to partnership happened because both sides did the maths. Labels realised that suing every AI startup into oblivion would not stop the technology – it would just push it offshore or underground. AI companies realised that training on pirated material exposed them to damages that could bankrupt them, as Anthropic’s USD 1.5 billion payout demonstrated.

Film and Video: Sora, Disney, and the Opt-In Pivot
Video moved more slowly but hit harder. Sora 2’s launch in September 2025 produced footage realistic enough to make the question of AI-generated film unavoidable, according to an analysis published by Harvard Journal of Sports and Entertainment Law. Users created AI versions of copyrighted characters within hours. The Japanese trade group CODA, representing Studio Ghibli among others, demanded OpenAI stop using its members’ content. WME opted out of its entire talent roster.
OpenAI’s response was a strategic pivot. On October 4, CEO Sam Altman announced that Sora would shift from an opt-out model – where copyright holders had to file a form to remove their work – to an opt-in model with revenue sharing. The structure borrows from YouTube’s Content ID system: the platform identifies IP ownership and distributes revenue to rights holders proportionally.
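The proportional-distribution idea can be made concrete with a short sketch. This is a hypothetical simplification, not OpenAI's or YouTube's actual payout logic: real Content ID-style systems weight by watch time, territory, and negotiated rates, and none of those terms have been disclosed for Sora.

```python
def distribute_revenue(total_revenue: float, matches: dict[str, int]) -> dict[str, float]:
    """Split a revenue pool across rights holders in proportion to how
    often their identified IP appeared in generated content.

    `matches` maps rights-holder name -> count of matched uses.
    Hypothetical model: each holder's share is their fraction of all matches.
    """
    total_matches = sum(matches.values())
    if total_matches == 0:
        return {}
    return {
        holder: total_revenue * count / total_matches
        for holder, count in matches.items()
    }

# Example: a $10,000 pool split across three (made-up) rights holders.
payouts = distribute_revenue(10_000.0, {"StudioA": 600, "StudioB": 300, "IndieC": 100})
# StudioA receives 6000.0, StudioB 3000.0, IndieC 1000.0
```

The interesting design question is not the arithmetic but the matching step that produces the counts – identifying whose IP a generated clip actually draws on is the hard, and still unsolved, part.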
The AI-and-entertainment intersection sits inside a broader pattern playing out across digital industries – the transition from scrappy, unregulated growth to structured monetisation with established players. Platforms oriented toward real-money digital entertainment now operate under frameworks that would have been unthinkable in the early days, and sites like Win Casino online casino function within licensing structures that govern everything from payout percentages to data protection. The browser-based casino model matured specifically because regulation forced it to, not because operators volunteered.
What the Courts Have Actually Said
The legal picture is muddier than the commercial deals suggest. Two federal judges in San Francisco issued rulings in 2025 that pointed in different directions. Judge William Alsup described AI training as “quintessentially transformative” and sided with the defendant on a key fair-use factor.
Two days later, Judge Vince Chhabria ruled in Meta's favour in a similar case but warned that generative AI could "flood the market" with content and undermine the incentive structure that copyright law exists to protect, as reported by The420.in.
| Case | Parties | Status (early 2026) | What it could decide |
|---|---|---|---|
| NYT v. OpenAI / Microsoft | New York Times vs ChatGPT makers | Motion to dismiss largely denied (April 2025); proceeding | Whether scraping news articles for training constitutes fair use |
| Record labels v. Udio | UMG, Sony, Warner vs AI music generator | UMG and Warner settled; Sony's case is ongoing | Whether AI music generation infringes reproduction and distribution rights |
| Record labels v. Suno | UMG, Sony, Warner vs AI music generator | Warner settled; others pending | Same core question as Udio, different defendant |
| Disney v. Midjourney | Disney vs AI image generator | Discovery phase; trial timeline extends into late 2026 | Whether AI-generated images of copyrighted characters constitute infringement |
| Bartz v. Anthropic | Class of authors vs Anthropic | Settled for USD 1.5 billion (December 2025) | Established that mass downloading of pirated works for training carries massive liability |
| Cameo v. OpenAI | Cameo vs Sora's "Cameo" feature | Filed October 2025; early stages | Trademark dilution and consumer confusion in AI-generated celebrity content |
Legal experts cited by Copyright Alliance describe 2026 as potentially decisive. The cases still in litigation will produce rulings that either confirm fair use as a viable defence for AI training or narrow it significantly. Either outcome reshapes the economics of every AI company that touches creative content. The arguments on both sides deserve honest framing:
- The case for AI companies: training is transformative because the output does not reproduce any single work; it creates something new. Restricting training would stifle innovation and hand control of the technology to the few companies rich enough to license everything.
- The case for creators: the output may not reproduce a single work, but the model could not produce it without having ingested millions of works. The economic harm is real – if an AI can generate a song that sounds like Drake in thirty seconds, the market for human-made Drake-adjacent music shrinks.
- The case that neither side wants to make explicitly: the settlements suggest that both sides believe the legal outcome is uncertain enough that a negotiated deal is safer than a verdict.
None of these positions is wrong on its own terms, and the courts will likely produce rulings that validate parts of each argument without fully resolving the underlying tension. The commercial settlements already running in parallel suggest that the industry is not waiting for judges to decide – it is building the post-litigation structure while the cases are still open.

Where This Leaves Creators in 2026
The honest answer is: in a better position than they were in 2024, but not in a position anyone would call comfortable. The licensing deals allow artists to earn money when their work trains AI models, and the opt-in structure lets artists decline to participate – at least on platforms that honour the agreements. What remains unresolved:
- Independent artists without label backing have no seat at the negotiation table. The UMG and Warner deals protect catalogue artists; a bedroom producer whose tracks were scraped from SoundCloud has neither a licensing deal nor a realistic route to a class-action payout.
- The opt-in model only works on platforms that adopt it. Open-source AI models trained on unlicensed data will remain available, and a model with no terms-of-service page offers creators no opt-out form to file.
- Revenue-sharing percentages have not been disclosed. How much artists earn will decide if these deals are true partnerships or just empty promises in press releases.
- The definition of “AI-generated” is blurry and getting blurrier. If a human writes a melody and an AI arranges it, who owns the result? If an AI makes a video and a human edits it frame by frame, is the final product considered human-authored for copyright? No court has ruled on these questions yet.
The tools available to creators who want to protect their work are still catching up to the speed of the technology. Options that exist today but remain imperfect:
- Content ID and fingerprinting systems – YouTube's model works well for video and audio already on a platform, but no comparable system exists for AI training datasets. A song can be identified after it appears on a platform; it cannot be identified inside a model's weights.
- Opt-in licensing agreements exist only for artists signed to UMG, Warner, and other participating labels. Independent creators have no comparable mechanism.
- Legal action works at class scale, as Anthropic's USD 1.5 billion settlement showed, but it is prohibitively expensive for individual artists without institutional backing.
- Watermarking and provenance tools embed invisible signatures in creative works so that their use in AI systems can be tracked. They are not yet widely adopted, but they have backing from the Coalition for Content Provenance and Authenticity (C2PA).
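The provenance idea can be sketched in a few lines. This is a toy illustration, not the actual C2PA manifest format – the real specification defines signed, binary-embedded manifests with a much richer assertion vocabulary. The field names and the `ai_training_permitted` flag here are assumptions for illustration only.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(file_bytes: bytes, creator: str, allow_ai_training: bool) -> dict:
    """Build a minimal provenance record for a creative work.

    Loosely inspired by C2PA-style content credentials; this schema is
    hypothetical and unsigned, so it shows the concept, not the standard.
    """
    return {
        # Hash ties the record to the exact bytes of the work.
        "content_hash": hashlib.sha256(file_bytes).hexdigest(),
        "creator": creator,
        "created_at": datetime.now(timezone.utc).isoformat(),
        # Machine-readable statement of the creator's licensing intent.
        "assertions": {"ai_training_permitted": allow_ai_training},
    }

record = make_provenance_record(b"fake-audio-bytes", "bedroom-producer", False)
print(json.dumps(record, indent=2))
```

The obvious limitation, and the reason adoption matters more than the format: a record like this only protects a work if dataset builders actually check for it before training.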
The creative industries don't have to choose between accepting AI and rejecting it. They are in a detailed negotiation over who owns what, who gets paid, and how much control the original creator keeps after the algorithm learns from their work. That negotiation started in courtrooms and moved to boardrooms. Where it ends depends on who has the power to set the terms: if creators have leverage, they can make the terms stick; if not, the ones writing the cheques will decide.
