AI and IP: What Lawyers Need to Know Before the Next Client Asks

Introduction

The legal profession's relationship with artificial intelligence has reached an inflection point. No one’s waiting for the law to catch up. Not the clients, not the platforms, and definitely not the engineers feeding trade secrets into off-the-shelf tools just to meet a deadline. If you're still treating AI as a thought experiment, you're already behind. The questions lawyers are asking about AI and IP aren't academic anymore. They’re urgent, and they’re showing up in court.

From authorship and inventorship to infringement by training and output, the legal system is already struggling to keep pace. But that doesn’t mean your clients have to. The rules may still be grey, but your guidance shouldn’t be.

Here’s what lawyers are asking. And what they need to know.

Who Owns the Code When the Machine Wrote It?

Ask ten lawyers what they think about AI and IP, and you’ll get ten different versions of “it depends.” But quietly, over the last year, some of those edge-case hypotheticals have stopped being hypothetical. And that’s when lawyers start paying attention.

You’ve seen the shift. Clients want generative tools to speed up content creation, draft patents, or run prior art searches. Firms want to plug it into ops. Courts, meanwhile, are already being asked to decide who owns what, who’s liable, and what happens when a machine crosses a legal line no one anticipated. In the background, companies like OpenAI and Stability AI are already facing multi-million-dollar cases (Getty Images v. Stability AI among them) that will likely shape how the next decade of IP law unfolds.

This isn't a theory anymore. It's product liability, authorship, contract risk, and data governance rolled into one complex challenge. The questions lawyers are asking right now will ultimately define competitive advantage for their clients.

Let’s unpack the critical ones. Quickly, but properly.

Is AI-generated work copyrightable, and if so, who owns it?

The U.S. Copyright Office has drawn a clear line in the sand: no human author, no copyright (Copyright Registration Guidance – Works Containing Material Generated by AI). This position crystallized in Zarya of the Dawn, where the Office refused protection for portions of a comic book generated by Midjourney. The claimant argued she selected, arranged, and edited the outputs, infusing the “creative spark.” The Copyright Office held firm. Generative tools, it determined, don’t leave sufficient room for original intellectual conception by a human author.

The UK’s position is murkier. There, computer-generated works without human authors can still be protected, albeit for a shorter period: 50 years from creation, versus the standard term of the author’s life plus 70 years. India follows a similar model in principle, though enforcement clarity remains limited.

More tellingly, industry practice is already racing ahead of legal frameworks. If a film studio uses AI to revise a script, is the original screenwriter still the author? Studios like Eros International are reportedly using AI to assist with script revisions, raising unresolved questions about creative attribution and contractual rights. The risk extends far beyond ownership uncertainty. It’s dilution of credit, editorial control, and enforcement power. If your client’s product relies on AI-generated user-facing content, who owns the IP? Who can license it? Who’s on the hook if it infringes? Without copyright protection, that content is effectively in the public domain from the moment it’s created, freely available for competitors to appropriate.

The strategic answer isn’t in the black letter. It’s in the contract. That’s where the real battles are going to be fought. If your client is commissioning AI-generated content, the contract must explicitly address authorship, licensing rights, usage restrictions, and liability boundaries. Without it, you’re left arguing over default positions that don’t yet exist. Courts won’t retroactively fill these gaps. Those who write the rulebook first will have the advantage.

What about AI-generated inventions? Can they be patented?

This one has gone to court more than once. In Thaler v. Vidal (US) and Thaler v. Comptroller-General (UK), the courts were asked whether an AI system called DABUS could be named as an inventor on patent applications. Both courts said no. The inventor must be a natural person. The European Patent Office reached the same conclusion in its own DABUS decision.

Why does this matter to you? Because while clients love the idea of machine-driven innovation, the patent system doesn’t. Companies increasingly rely on AI to assist in the inventive process, to draft claims or generate inventions. If that assistance crosses the line into autonomous invention, the company may be sitting on unprotectable IP. The moment that invention enters the public domain, it can be copied, adapted, or built around by competitors.

Even in jurisdictions that allow broad interpretation of inventorship, the practical takeaway is the same. Maintain clear, traceable human involvement behind every inventive claim. AI can accelerate and improve the inventive process, but it probably can’t lead it. The patent application should tell the story of human inventorship, not of a sophisticated AI tool, at least if your client wants strong legal protection.

What if the AI was trained on infringing data?

This one is a minefield. Creators are already suing over it.

OpenAI and Meta are facing class actions alleging their models were trained on copyrighted books scraped from the internet. Getty Images sued Stability AI for allegedly training Stable Diffusion on millions of its images without a license. The claim is simple. If the model learned from protected content and your client uses that model commercially, they might be indirectly infringing. If the training wasn’t authorized, fair use probably won’t save them when the use is commercial and affects market demand for the original works.

We saw this play out in The New York Times v. OpenAI, too. The Times alleges that OpenAI’s models reproduced near-verbatim excerpts of its articles. Meanwhile, Silverman v. OpenAI and similar class actions challenge the use of copyrighted books in training datasets. They’re not suing for breach of license. They’re suing for copyright infringement. That should make every in-house team pause. Especially when their developers are casually pulling from GitHub, Stack Overflow, or using APIs they barely understand.

Lawyers are asking the right question here. Can you audit the dataset? And the real answer is this. Not unless you built the model yourself.

What happens if AI outputs infringe? Who gets sued?

The user? The vendor? The AI company?

If your client is using ChatGPT or Copilot or some off-the-shelf AI to generate contracts, content, or technical docs, and one of those outputs copies someone else’s IP, who’s responsible?

You’ve probably already added disclaimers and indemnity clauses to your contracts. Smart. But the smarter move is understanding where the risk really sits. If you’re relying on third-party AI tools and you can’t verify their training data, the indemnity you got might be worth less than you think. If your client is the one offering the tool, they need to spell out use cases, disclaim ownership, and actively monitor output.

This is no longer hypothetical. In March 2024, a California design firm was sued for allegedly using AI-generated renderings that copied an architectural design from a rival. The firm's defense: they had no knowledge the AI was copying existing designs. The court didn’t care. Neither did the insurer.

Where does trade secret law come in?

Right here.

The moment an engineer or in-house counsel uploads confidential data to an external AI tool, the protection may be gone. If you can’t prove reasonable efforts to maintain secrecy, you’ve lost it.

A leaked prompt. A code snippet pasted into a chatbot. A misconfigured integration.

You already know how little it takes for a trade secret to evaporate. AI tools that retain user inputs or use them for model training present exactly this risk. What many teams don’t realize is that even internal AI models, especially those connected to cloud services, need strict technical and contractual safeguards. Governance matters. Especially in industries like pharma, finance, and telecom, where trade secrets form the bulk of competitive advantage.

What now?

Most of your clients don’t need to overhaul their IP strategy overnight. But they probably need to reframe AI as more than a productivity tool. It’s a new actor in their legal ecosystem. One capable of generating value, risk, and liability in the same stroke.

You don’t need to fear it, but you do need to govern it.

Here’s where that starts:

  • Get clarity on authorship and inventorship. Don’t let the machine drive unless a human’s name is on the line.

  • Treat training data as a material risk. Assume nothing unless the model’s provenance is verified.

  • Contract defensively. Whether your client is the provider or the user, allocate risk and disclaim liability properly.

  • Monitor outputs. AI doesn’t know what it’s copying. But you’re expected to.

  • Protect secrets like it’s 1995. Because AI will leak them faster than a disgruntled employee.

The law isn’t settled. But the stakes are. If your client creates, licenses, or relies on intellectual property, this is already your problem.

Not next year. Now.