"Agentic" is the legal tech "blockchain-enabled"
Sara Pandey, Head of Marketing

The word "agentic" is doing more work in legal tech right now than almost any other word in our marketing vocabulary. It's on landing pages, in product launches, in partner pitch decks. Gartner thinks most of it is bluffing: of the thousands of vendors describing their products as agentic, the firm's analysts estimate around 130 actually are. Andrew Ng, who coined the term in this technical sense, has said publicly that he didn't anticipate "a bunch of marketers would get hold of this term and use it as a sticker and put this on almost everything in sight."
The simplest way to explain agentic AI is the version a child could hold onto: an agent is a helper that figures out the next step on its own, instead of waiting for you to tell it. Everything else, the architecture, the benchmarks, the regulatory weight, follows from that one shift.
The distinction the industry glosses over
The cleanest technical version comes from Anthropic, the lab whose model powers a lot of the legal AI you're being sold, including Lucio. In a December 2024 engineering post that has become the de facto reference, Erik Schluntz and Barry Zhang draw a distinction the rest of the industry mostly glosses over. They call the broader category "agentic systems," and inside that category they separate two architectures:
Workflows are systems where language models and tools are orchestrated through predefined code paths.
Agents are systems where the language model dynamically directs its own processes and tool usage, deciding what to do next based on what it just learned.
Both are useful. They are not the same thing. And almost everything currently marketed as "agentic" in legal is the first kind.
Workflows vs Agents
A short version of the difference, in the cases that matter most for legal work:
| Capability | Workflow (what most "agentic" legal AI actually is) | Agent (the genuine version) |
|---|---|---|
| Who decides the next step | A human engineer, hardcoded in advance | The model itself, based on what just happened |
| Tool selection | Predetermined sequence | Model picks the tool from a set |
| Course-correction mid-task | Limited to predefined branches | Adapts when something fails or surprises it |
| Memory across steps | Often stateless between calls | Holds context across many turns |
| Time horizon | Seconds to a few minutes | Minutes to hours, sometimes longer |
| Predictability | High; same input, similar path | Lower; the model takes initiative |
| What you're trusting | The engineer's design | The model's judgment |
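The contrast in the table reads cleanly in code. Here is a minimal sketch, with a stubbed-out function standing in for the language model; every name in it (`run_workflow`, `run_agent`, `stub_model`, the tools) is illustrative, not any vendor's actual API:

```python
def search_case_law(query):
    return f"cases matching {query!r}"

def draft_memo(findings):
    return f"memo based on {findings}"

TOOLS = {"search_case_law": search_case_law, "draft_memo": draft_memo}

def run_workflow(matter):
    # Workflow: an engineer hardcoded this sequence in advance.
    # Same input, same path, every time.
    findings = search_case_law(matter)
    return draft_memo(findings)

def stub_model(goal, history):
    # Stand-in for an LLM deciding the next step from what it just learned.
    if not history:
        return ("search_case_law", goal)
    if history[-1].startswith("cases matching"):
        return ("draft_memo", history[-1])
    return ("stop", None)

def run_agent(goal, max_steps=5):
    # Agent: the model picks the next tool based on prior results,
    # inside a loop with a hard step budget.
    history = []
    for _ in range(max_steps):
        tool, arg = stub_model(goal, history)
        if tool == "stop":
            break
        history.append(TOOLS[tool](arg))
    return history[-1]
```

The two functions produce the same memo here, but only `run_agent` could have taken a different route: the path lives in the model's decisions, not in the code.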
Why legal wants less autonomy, not more
This matters because of what happens when you put genuine agentic systems into legal work. The 2024 Stanford "Hallucination-Free?" study, later published in the Journal of Empirical Legal Studies, found leading legal research tools hallucinating at rates between 17% and 33%, even with legal-grade retrieval. And Anthropic itself has flagged that errors compound across an agent's steps. A 5% error rate per step compounded across ten steps comes out at roughly a 40% chance of at least one error in the chain. A single hallucinated citation in a research memo is recoverable. A drafting agent that quietly changes the redline in step seven because of a misread in step three is a different problem.
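The compounding claim is simple arithmetic to verify, assuming errors are independent across steps:

```python
# Probability of at least one error in a ten-step chain,
# with a 5% error rate per step.
per_step_error = 0.05
steps = 10
p_at_least_one_error = 1 - (1 - per_step_error) ** steps
print(round(p_at_least_one_error, 3))  # prints 0.401
```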
Which is why Joe Patrice at Above the Law made perhaps the most useful observation about agentic AI in our industry. Reviewing Thomson Reuters' agentic AI launch last summer, he wrote that "the good news is that the agents that Thomson Reuters showed off appeared a lot less autonomous than the sales copy might suggest." Most industries want more autonomy from their AI. Legal, for now, wants less. Bloomberg Law's State of Practice 2025 survey of 750 lawyers found that more than half didn't know what AI agents were, and only about 5% had used one professionally. Vikas Srinath, a partner at Prospera Law, told Bloomberg that an agent that "could come in with a predefined set of rules and guardrails and effectively mimic 80% to 90% of our process would inherently kind of undercut the advice that we provide."
Where the regulators stand
The regulators are reading the same signal. The American Bar Association's Formal Opinion 512, issued in July 2024, puts the supervisory weight on the lawyer, not the system. California's proposed amendments to the Rules of Professional Conduct go further: a lawyer "must independently review, verify, and exercise professional judgment regarding any output generated by the technology," and the court has specifically directed the bar to consider guidance on agentic tools. In the UK, the Solicitors Regulation Authority approved Garfield.Law as the first AI-driven law firm in May 2025, but only because Garfield "is not autonomous and will only take a step where the client has approved it."
A maturity model for legal AI
A useful way to look at where your legal AI actually sits, borrowed loosely from Salesforce's maturity model and adapted for our context:
| Tier | What it does | Legal example |
|---|---|---|
| Generative assistant | Single-shot drafting or summarising on command | "Summarise this NDA" |
| Workflow | Predefined multi-step sequence using language models | Most "agentic" legal products today |
| Single agent | Model dynamically picks tools and steps in a bounded domain | Lucio Forge running diligence end-to-end on a data room |
| Multi-agent system | Specialised agents collaborate, hand off, debate | Mostly demo-ware in legal as of 2026 |
Is it an agent?
So how do you tell, when you're being demoed something, where it sits? A short checklist, the kind you can run during a pitch:
Does it plan? Can it break a goal into sub-steps without you specifying them, or are you the one writing the sequence?
Does it choose its tools? When it needs to search case law, open a document, or query the firm's playbook, is the model picking, or is the path hardcoded?
Does it course-correct? When step two fails or surprises it, does step three adapt, or does the whole chain break?
Does it hold memory across steps? Does step seven know what happened in step one, or is each call effectively starting from scratch?
Does it operate over a horizon long enough that you could walk away? Seconds is a workflow. Minutes-to-hours, with dozens of tool calls, is an agent.
Does it know when to stop and ask? Real agents pause at the moments they're uncertain. Agent-washed tools hand back a confidently wrong answer at the deadline.
Can you observe what it did? Genuine agents log their decisions in a way you can audit. If the trace is opaque, that's a tell.
If a vendor's product fails most of those, what they have is a workflow with marketing.
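For readers who like to keep score during the demo, the checklist reduces to a tally. This is a hypothetical scorecard, not a real evaluation framework; the names and the pass threshold are illustrative:

```python
# Seven questions from the checklist above.
CHECKLIST = [
    "plans its own sub-steps",
    "chooses its tools",
    "course-corrects mid-task",
    "holds memory across steps",
    "operates over a long horizon",
    "stops and asks when uncertain",
    "logs an auditable trace",
]

def classify(demo_answers):
    # demo_answers: dict mapping each checklist item to True/False,
    # filled in by the buyer during the pitch.
    passed = sum(bool(demo_answers.get(item)) for item in CHECKLIST)
    if passed > len(CHECKLIST) / 2:
        return "agent"
    return "workflow with marketing"
```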
Where agents actually earn their keep
Here is the part to be careful about, because the skeptics overcorrect. Workflows are not lesser. The Vals Legal AI Research benchmark found that AI tools now beat the human lawyer baseline on data extraction, document Q&A, summarisation, and transcript analysis. Most of that performance comes from well-designed workflows, not autonomous agents. A workflow that ingests a 47-document data room, runs your firm's M&A diligence playbook, flags the items that diverge from your standard risk thresholds, and surfaces a memo with citations, which is what Lucio Studio does today, is enormously valuable. It also happens to be exactly what your malpractice carrier wants you to buy, because the path is auditable and the lawyer is genuinely in the loop. The selective use of agents at the edges, for tasks like deep research across a firm's full case archive, is where the genuine agent layer adds something a workflow cannot.
The damage the buzzword does
The risk Joe Patrice flagged, the sharper point, is that the buzzword does damage in two directions. It scares conservative lawyers into dismissing tools that are actually quite good and quite safe. And it lures less conservative lawyers into trusting tools with more autonomy than the underlying technology has earned. The more than 1,300 court incidents involving AI hallucinations tracked in the database maintained by Damien Charlotin at HEC Paris are mostly the second category.
What 'agentic' should actually mean
The actual definition of "agentic AI" for legal work, the one to write onto the back of every vendor demo notebook, has three properties. It plans. It can course-correct. It knows when to ask. Anything that doesn't do all three is a workflow with better branding, and that's often exactly what your firm should buy. Just buy it for what it is. That precision, the willingness to call a workflow a workflow and an agent an agent, is what we built Lucio around: traceable, grounded, and honest about where the model is making decisions and where it isn't.
The thing worth watching is whether the people selling these tools are willing to describe them with that precision. The ones who can are the ones to start with.


