The Final Nail in the IP Coffin
The U.S. Supreme Court’s decision not to hear a case challenging the U.S. Copyright Office’s stance on AI-generated art has effectively cemented a legal reality: if a work is created solely by artificial intelligence, it cannot be copyrighted. The denial of review closes a years-long debate that began when Stephen Thaler, a computer scientist and vocal advocate for machine creativity, sought copyright protection for an image generated by his AI system, the Creativity Machine. The Copyright Office repeatedly denied the application, asserting that copyright law protects only works of human authorship. Now, with the high court declining to weigh in, that position stands as the governing rule for the foreseeable future.
This isn’t just a procedural footnote. It’s a watershed moment for the creative economy, one that redefines ownership in the age of generative models. Artists, developers, and corporations have poured billions into AI tools capable of producing photorealistic images, music, and text in seconds. Yet under current law, those outputs exist in a legal gray zone—free for anyone to use, modify, or sell. The absence of copyright doesn’t just strip creators of legal recourse; it dismantles the economic incentive to refine and deploy high-fidelity AI systems in commercial contexts where exclusivity matters.
Human Touch, Legal Shield
The Copyright Office has long maintained that creativity requires a human author. That principle isn’t new—it’s rooted in centuries of Anglo-American jurisprudence. But the rise of diffusion models like Stable Diffusion and DALL·E has forced a collision between tradition and technological reality. Courts have consistently sided with the human-centric view. In 2023, a federal judge in Washington, D.C., upheld the Copyright Office’s rejection of Thaler’s claim, writing that “human authorship is a bedrock requirement of copyright.” A denial of certiorari carries no precedential weight of its own, but the Supreme Court’s silence leaves that lower-court precedent undisturbed—and signals to the rest of the federal judiciary that it is unlikely to be revisited.
What’s often overlooked is how this standard applies to hybrid works—pieces where humans prompt, edit, or curate AI output. The Copyright Office has signaled that minimal human input, like typing a basic text prompt, isn’t enough to qualify for protection. But substantial editing, compositional choices, or iterative refinement might suffice. This creates a murky threshold: where does inspiration end and authorship begin? A photographer who uses AI to enhance lighting or remove artifacts may still hold copyright. But a designer who generates a logo with Midjourney and makes only minor tweaks likely does not. The line is arbitrary, inconsistently applied, and increasingly difficult to enforce.
The Market Consequences
The practical impact is already rippling through industries. Stock image platforms like Shutterstock and Getty Images have scrambled to license AI-generated content under restrictive terms, often prohibiting commercial use or requiring attribution to the platform itself—not the AI or the user. Meanwhile, open-source models flood the internet with high-quality visuals that anyone can replicate and redistribute. This undermines the value proposition of premium creative tools and destabilizes markets built on exclusivity.
Consider the implications for startups. A company building an AI-powered design suite can’t guarantee customers exclusive rights to the outputs. That makes it harder to attract enterprise clients who demand legal certainty. Venture capital firms are now scrutinizing AI art ventures with extra caution, wary of business models that rely on monetizing unprotected content. The result is a chilling effect on investment and innovation—not because the technology isn’t advancing, but because the legal framework hasn’t kept pace.
There’s also a deeper cultural cost. If AI-generated works belong to no one, they effectively belong to everyone. That sounds democratic in theory, but in practice, it erodes the notion of creative labor. Artists who spend years honing their craft now compete with algorithms trained on their work, often without consent or compensation. The lack of copyright protection for AI outputs doesn’t just affect machines—it reshapes the ecosystem in which human creators operate, devaluing originality in favor of algorithmic efficiency.
A System Stuck in the Analog Age
The current copyright regime was designed for a world where creation was slow, deliberate, and undeniably human. It assumes a clear author, a tangible medium, and a linear process from idea to artifact. None of those assumptions hold in the age of generative AI. Models like ChatGPT and Sora don’t ‘create’ in the traditional sense—they predict, interpolate, and recombine. Yet their outputs can be indistinguishable from human-made works. The law hasn’t adapted to this shift because it’s easier to default to precedent than to redefine foundational concepts like authorship and originality.
Some legal scholars argue for a new category of protection—something akin to a ‘neighboring right’ that grants limited exclusivity to AI operators or users, even if full copyright remains out of reach. The European Union has explored similar ideas, though no major jurisdiction has enacted them. Without such reforms, the gap between technological capability and legal recognition will only widen. The Supreme Court’s inaction may have settled the immediate question, but it leaves a larger one unresolved: how do we value creativity when the creator isn’t human?
For now, the message is clear. If you want legal protection, you need a human hand in the process. But as AI becomes more integrated into creative workflows, that requirement grows increasingly artificial—not in the technological sense, but in the philosophical one. The law is drawing a line in the sand, but the tide of innovation is already washing over it.