The Supreme Court’s refusal to take up Thaler v. Perlmutter leaves a clear rule in place: under current US law, works created entirely by artificial intelligence do not qualify for copyright protection without meaningful human authorship. The D.C. Circuit had already upheld the Copyright Office’s refusal to register an image generated solely by AI, and the high court’s decision not to intervene on 2 March 2026 leaves that ruling intact.

That matters well beyond the courtroom. The Copyright Office has been examining AI and copyright since early 2023, gathering more than 10,000 public comments after launching its inquiry and then publishing a multi-part report series, including a January 2025 part focused on the copyrightability of generative AI outputs. Its position, reinforced by the courts, is that copyright still turns on human creativity, not on the machine that assembled the final work.

The practical distinction is between AI as a tool and AI as the effective creator. If a person uses generative systems to support a work but then applies substantial editorial judgment, rewrites the material, or combines outputs into a distinctly human-curated expression, copyright may still attach to the finished product. But a simple prompt followed by direct publication is far less likely to meet the standard, because the law continues to require authorship by a human being.

For security leaders, the issue is no longer just legal theory. Companies are increasingly using AI to draft text, create images and produce other assets that they may later want to license, protect or enforce. If those materials are generated with too little human involvement, they may be harder to defend in a dispute, and a rival or infringer could potentially challenge ownership by pointing to the AI-heavy creation process. That makes AI use a matter of intellectual property governance as much as innovation.

The result is an expanded role for chief information security officers. Rather than standing outside the creative process, security teams may need visibility into how content is produced, whether prompts, edits and approvals are being documented, and whether so-called shadow AI is exposing the company to legal and operational risk. In that sense, the Supreme Court’s refusal to hear the case strengthens the argument that AI oversight belongs not only in legal and product teams, but in the broader security and risk function as well.


Source: Noah Wire Services