How is this AI thing going to affect us lawyers?
This is one of the topics I have spent the most time reading and talking about over the past year.
Two weeks ago, I chatted with Sergio Maldonado on this in his podcast.
Sergio’s opinion carries real weight. His perspective is privileged: he has been in the digital space from the beginning, he is in the US -right at the heart of it all- and he is an early definer (more than an early adopter).
You need look no further than the legal services platform he launched: todolaw.com
The topic has a lot to offer and I believe it can affect you, not just the lawyer over there on your team. So I will leave here the ideas that strike me as most relevant, and where I got them from.
Fair warning: some of the loudest ones are my own...
You are reading ZERO PARTY DATA, the newsletter on current affairs, tech policy and law by Jorge García Herrero and Darío López Rincón.
In the free time this newsletter leaves us, we specialize in solving complicated problems in personal data protection. If you have one of those, get in touch. Or email us at jgh(at)jorgegarciaherrero.com
Thanks for reading Zero Party Data! Sign up!
1.- How it started: “MUAHAHA, I am irreplaceable”
I know you have never seen the little scene I am going to describe, but I have seen it frequently.
It’s the scene of a lawyer (age doesn’t matter, eh?) who one day tried the free ChatGPT, asking it a complicated question -usually one he himself had sweated to solve- on a subject he’d had to study the week before.
The prompt was garbage, and so was the AI’s response: a generic answer with two hallucinations that wouldn’t have slipped past even an intern.
The lawyer smiled at the reassuring disaster, beat his chest with both fists and roared “IN YOUR FACE, SAM ALTMAN!” “WE SHALL OVERCOME!!” “NEITHER AI NOR AY!!!”, “I AM IRREPLACEABLE, MUAHAHA!!!”.
Invariably, that ‘definitive scientific test’ took place before November 2025, because after that date, as the Spanish saying goes, another rooster would have crowed.
Well, I have good and bad news for that lawyer. Bad ones, mostly.
...How it’s going
Last November, AI made a qualitative leap. Since then, the gross errors and erratic documents we had grown used to have all but disappeared.
But I’m not here to talk about what works well now, but about the expected changes around the new technology.
AI will not be limited to automating legal tasks: it will drastically change value capture in the legal profession, the structure of law firms, the client relationship, and the very meaning of what we have come to call ‘the practice of law’.
Such is the convergent conclusion of “Reshuffle”, by Sangeet Paul Choudary, and the paper “Some Simple Economics of AGI”, by Christian Catalini, Xiang Hui and Jane Wu (February 2026): the two main sources of this brick of a post, which you are prepared to, at best, skim diagonally.
I know you.
2. Democratization of knowledge…
Both Reshuffle and Economics of AGI reach the same conclusion by different paths: AI turns the deep knowledge component of the professional into a commodity. And that will have an impact on... the price.
Reshuffle identifies three constraints around which systems are organized: scarcity, risk and coordination.
When AI liquidates the scarcity of deep knowledge (because legal research, case synthesis and regulatory analysis become much faster -and today practically free; tomorrow, we’ll see-), the economic value goes to those who manage the other two constraints.
... versus “skin in the game”
When access to knowledge is no longer scarce, the lawyer’s value resides in applying judgment and strategy and in assuming and managing risk.
With scarcity gone, fees must be justified elsewhere and measured differently: not by the hour.
A lawyer who searches, digests and regurgitates legal knowledge is performing an easily automatable task and is therefore fungible.
A lawyer who helps a decision-maker do their job better, to take one direction or another with significant consequences -assuming responsibility for that advice- operates in a context and offers a service that will remain out of AI’s reach for a long time.
I believe.
In a world (the one you inhabit today) where Claude can retrieve and synthesize regulations and case law remarkably well in seconds, the lawyer’s value lies in the ability to look their client in the eye and say ‘do this’ or ‘stop doing that’, assuming the consequences of the advice.
Nothing new under the sun: what has and always had added value is the application of judgment in conditions of uncertainty and under professional liability regime.
Any AI will produce a decent draft note in minutes on the sanctioning consequences of doing this or that.
But what has always been valued -and will be more every day- is the professional capable of pointing in the most practicable or better defensible direction, with or without that note in between.
The difference is that before, ‘knowledge’ and ‘judgment’ were billed together. Now billing will have to be justified on that latter quality alone, sometimes so easy and sometimes so difficult to explain, which is why, once you sniff out a really good lawyer, you won’t let go of him (or her) even with forceps.
3. The crisis of the billable hour…
A concrete instance of the above: an AI compresses the production of a research report from 40 hours to 60 minutes of supervised ‘Deep Research’ generation.
Not a free AI, of course, but a system fed with the right precedents, its weaknesses and strengths studied and attended to, built on a skill refined through five iterations, prompted properly, and with its output supervised properly.
In this scenario, the criterion of fees based on ‘billable hour’ becomes absurd. For two reasons:
Economics of AGI frames the issue from the demand side, predicting that revenue models for ‘knowledge work’ will shift from ‘Software-as-a-Service’ to ‘Software-as-a-Work’: monetizing results instead of access or time.
Reshuffle points to supply: we will witness a fundamental restructuring of law firm economics. Firms that cling to the billable hour won’t just be outdated: they will be undervaluing their own comparative advantage. If AI drafts in minutes, charging for those minutes will be a catastrophic ‘race to the bottom’.
... and the boom of judgment, experience and strategy
Charging for the judgment, strategy, expert verification and responsibility that make that draft truly actionable and useful?
That is where premium billing will live.
Nothing new under the sun: it was always there: in the key intervention at the critical meeting, in the five-bullet executive summary, never in the boring forty-page report that nobody relevant is ever going to read.
There will certainly be uncomfortable client meetings: against the natural (and well-justified) pull toward lower fees, given the evident time savings that AI brings, we will have to put a price on the cognitive load and the responsibility that come with the new way of working:
On one hand, orchestrating and refining AI agents’ work and supervising their outputs is exhausting, as anyone who does it can testify.
On the other hand, if a single prompt to Claude or ChatGPT already gives you, you think, everything you need to make an important decision... why ask me to validate it? And why me, specifically?
As long as you can answer that last question well, you will survive the AI tsunami.
4. Professional liability: unexpected burden or competitive moat
Reshuffle contains a totemic phrase that develops throughout most of its pages: ‘Tools amplify performance, but solutions absorb risk’.
‘Specialized’ AI tools (in our case: legal ones) like Harvey or Legora have several obvious problems and one deep one.
On the obvious side, they are little more than wrappers with an advanced RAG shell. Put plainly, they are a layer of customization painted over the antepenultimate frontier model from OpenAI or Anthropic: you are paying a fortune per month for a model that is months old -which is decades in today’s crazy AI context-.
These frontier-model wrappers are specifically designed to capitalize on lawyers’ risk aversion (they are not usually very techie, but they certainly are very afraid of professional liability over hallucinations).
And that’s good. But listen well lawyer: AI flattens and homogenizes knowledge. And that coin has two sides:
The mediocre professional, whose work is below average, will see their level boosted until their mediocrity reaches the industry average.
The excellent one must be able to identify the content in which their difference lies, articulate it, and train the AI with it so that it multiplies them instead of levelling them downwards.
The problem, needless to say, is that by training ChatGPT with your best content, you are helping a guy so trustworthy and with such good intentions as Sam Altman, or any of his bro friends: people who have been preaching for years that their invention will take your job away.
But the key is in responsibility.
Harvey can turn out an impressive legal document. But remember that you cannot sue Harvey for inventing a ruling. It cannot be sanctioned by a bar association. Harvey cannot sign an opinion. It carries no professional liability insurance.
It is the lawyer -whose signature, whose body, stands between the AI’s output and the consequences for the client- who will represent, in a few years, the scarcest economic function: the bottleneck of the system.
Hardly anyone can ignore that a rough ride is coming for every job profile that has lived off producing routinely identical papers.
However, professionals -and lawyers- specialized in high-risk situations —complex corporate operations, novel regulatory environments, complex litigation— will retain their pricing capacity much longer.
5. From “collectors” to “verifiers”
Economics of AGI predicts a collision between two cost curves: the cost of automation, which descends exponentially, and the cost of verification, which will remain roughly constant, because it is bounded by our very modest human capacities.
What does this mean?
As you well know if you are already hands-on, AI can generate a 50-page contract, a comparison of the regulations of five countries, a summary of a dense paper, and a slide deck summarizing it, all in half an hour.
As we said, today the price users pay for frontier models is close to zero. Tomorrow, we’ll see.
But someone has to verify that all those outputs are correct, complete and -fundamentally- relevant for the situation and aligned with the client’s real intention.
That verification will remain expensive, slow, and bounded by human limitations.
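The collision of those two curves is easy to see with numbers. Below is a toy sketch in Python, with figures that are entirely my own illustration (not from the paper): an automation cost that halves every year against a verification cost pinned to human hours. The verification share of total cost climbs toward 100%.

```python
# Toy model of the two cost curves from "Economics of AGI": automation cost
# falls exponentially, verification cost stays flat. All numbers are invented.

def automation_cost(year: float, initial: float = 100.0, halving_years: float = 1.0) -> float:
    """Cost of producing a draft, halving every `halving_years`."""
    return initial * 0.5 ** (year / halving_years)

def verification_cost(human_hours: float = 4.0, hourly_rate: float = 150.0) -> float:
    """Cost of human verification: flat, bounded by human reading speed."""
    return human_hours * hourly_rate

def verification_share(year: float) -> float:
    """Fraction of total cost spent on verification in a given year."""
    a, v = automation_cost(year), verification_cost()
    return v / (a + v)

if __name__ == "__main__":
    for year in range(6):
        # Verification's share of total cost grows as generation gets cheaper.
        print(year, round(automation_cost(year), 2), round(verification_share(year), 3))
```

Under these (made-up) numbers, by year 5 verification is over 99% of the cost of a deliverable: the human check becomes the whole bill.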
The trap of the “looking good report”
Legal text generated by AI tends to be formally impeccable and structurally solid, yet materially incoherent in ways that only an experienced lawyer will detect.
Nothing new under the sun: this is what has always happened with juniors’ drafts, even the smartest juniors’: immaculate memos, without a single spelling mistake -let’s assume without hallucinations- but irrelevant, off-track, or far from what the client was clearly asking for, or manifestly needed or expected (even without asking explicitly).
The catch: critically, “Economics” holds that using AI to verify AI is very dangerous: it ‘generates false confidence’, because verifier and generator share the same blind spots.
The author of Reshuffle proposes the sandwich theory: the professional defines what is needed, the AI generates a draft, and the professional returns to verify and ensure a correct and relevant result.
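For the programmatically inclined, the sandwich reduces to a three-step loop. The sketch below is purely hypothetical (the `ai_draft` stub stands in for any real model call; nothing here comes from Choudary’s book beyond the structure): human spec, machine draft, human sign-off.

```python
# Minimal sketch of the "sandwich": human defines -> AI drafts -> human verifies.
# `ai_draft` is a placeholder stub, not a real model API.

from dataclasses import dataclass, field

@dataclass
class Result:
    text: str
    approved: bool
    notes: list = field(default_factory=list)

def ai_draft(spec: str) -> str:
    # Stand-in for a model call; returns a canned draft for illustration.
    return f"DRAFT responding to: {spec}"

def human_verify(draft: str, spec: str) -> Result:
    # The human layer: check relevance to the brief and take responsibility.
    notes = []
    if spec not in draft:
        notes.append("draft drifted from the brief")
    return Result(text=draft, approved=not notes, notes=notes)

def sandwich(spec: str) -> Result:
    draft = ai_draft(spec)            # middle layer: machine generation
    return human_verify(draft, spec)  # top slice: human judgment signs off

if __name__ == "__main__":
    r = sandwich("termination clause, Spanish law")
    print(r.approved)
```

The design point is that approval lives only in `human_verify`: the machine layer can never mark its own homework, which is exactly the false-confidence trap the paper warns about.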
Verification competence will matter much more than production competence.
This includes not only detecting hallucinations, but above all spotting where the AI ‘didn’t get it’: the gap between the extracted information and the client’s context, the important question they didn’t ask you...
That is, distilling flat information into immediately actionable advice.
Combine that with verification capacity and a generous measure of applied psychology -to know whether your client needs you to back them up, to talk them out of it, or simply to get out of their way- and you have the lawyer profile that has always succeeded, with or without AI. Again: nothing new under the sun.
Of course, the combo of (i) a flashy report, (ii) unreviewed or -what amounts to the same thing- (iii) reviewed by an overworked executive, is already giving us days of glory.
This will go further. Much further.
6. “Above” and “Below” the algorithm
Reshuffle divides workers into two categories: those who design, build and exploit AI systems are above the algorithm.
Those whose work is managed, standardized and evaluated by automated systems are below.
Above the algorithm the professional contemplates ecstatically what they can do with their new superpowers.
Lawyers who use AI as ‘engine’ to redesign legal service provision —creating new workflow architectures, developing proprietary processes of AI-augmented advice or building verification infrastructure— will be above the algorithm.
Lawyers who remain clinging to tasks that AI has made routine and standardized (document review, basic contract drafting, regulatory checklists) will find themselves increasingly below it, competing on speed and cost with the AI and with competitors as commoditized as themselves.
Yes: a cruel “uberization” of the profession is coming for “the ones below”.
Whoever stays below loses agency and pricing capacity.
The thinning of headcounts at large structures seems inevitable.
And so does the precariousness of the survivors whose only function, once workflows are automated, is to review AI-generated outputs.
And mind you, by ‘large structures’ I mean not only established firms, but also and especially flexible platforms that hire lawyers by the project: capable of sheltering technical professionals with no commercial skills for weeks, absorbing their expertise in the form of deliverables or ‘skills’, reducing their work to quantifiable metrics, optimizing for and paying only what clients can actually measure (and, gulp, rate), and dispensing with the rest, including ethereal concepts like what we call empathy, good vibes, spark, talent, reading the room, or genius.
Watch out, buddy: some of these players could even try to charge you for access to your own know-how, conveniently aggregated, regurgitated and packaged for the occasion... inside or outside a proprietary AI.
‘It is happening’.
... but watch out for putting all your, erm, eggs in the same basket.
One of the elephants in the room is the AI bubble.
The big players are subsidizing massive technology adoption, literally losing money on every one of our prompts.
An article that went viral after the shutdown of Sora points to token price increases of between 10x and 100x by the end of 2027.
It may not be that much (100x), and it may not be that soon, but it will happen.
This is another movie we have seen many times before. It’s called “enshittification“.
Today we can already tell that there won’t be enough competitors for real price wars. And that, when it happens, the adjustment will reconfigure -again- the whole professional landscape.
Beware of becoming completely dependent on providers who can discontinue your favorite model, or impose 100x inflation in euros (or 1/100 deflation in tokens) from one month to the next.
Have you fired too many people trusting low token prices?
Amodei will come with the bill.
Have you already forgotten how to do this and that because your wonderful swarm of agents did everything for you?
Say goodbye to your sweet profits.
“If you cannot build a viable business with AI costs five times higher than today’s, you don’t have a viable business: you have a ‘subsidized’ project.”
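That rule of thumb is a one-line stress test. Here is a back-of-the-envelope sketch (all figures invented for illustration): scale today’s AI bill by 5x and check whether the margin survives.

```python
# Back-of-the-envelope stress test of the quoted rule of thumb.
# All figures below are hypothetical, chosen only to illustrate the check.

def monthly_margin(revenue: float, ai_cost: float, other_costs: float,
                   ai_multiplier: float = 1.0) -> float:
    """Profit after scaling today's AI bill by `ai_multiplier`."""
    return revenue - ai_cost * ai_multiplier - other_costs

def survives(revenue: float, ai_cost: float, other_costs: float,
             multiplier: float = 5.0) -> bool:
    """The article's test: still profitable with AI costs `multiplier`x higher?"""
    return monthly_margin(revenue, ai_cost, other_costs, multiplier) > 0

if __name__ == "__main__":
    # Hypothetical boutique: 20,000/month revenue, 12,000 in other costs.
    print(survives(20_000, 500, 12_000))    # 500 in tokens: margin holds at 5x
    print(survives(20_000, 2_000, 12_000))  # 2,000 in tokens: margin collapses
```

The same two inputs, a fourfold difference in token spend, flip the verdict: that is the whole point of pricing your workflow at tomorrow’s token prices, not today’s subsidized ones.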
7. Not everyone can be a great artist, but a great artist can come from anywhere
Reshuffle: AI drastically lowers the minimum scale needed to provide professional services.
If a legal lone wolf armed with Claude and a set of well-designed workflows can produce the same volume and quality of work as a team of five associates and two paralegals, the economic justification for large firms wobbles in a lot of practice areas.
This is a threat and an opportunity (but not for the same ones):
Consolidated firms face the classic “innovator’s dilemma”: their size, overhead and legacy processes become liabilities for these purposes (instead of assets).
But for individual lawyers, and especially the youngest ones, the entry barrier has never been so low.
Read Jordan Furlong’s vision of the “niche lawyer”: I won’t explain it as well as he does.
A single person serving a specific clientele with AI-augmented precision and minimal overhead is perfectly viable today.
... on the other hand, it has never been easier than today to lose your head working all day alone at home, testing your judgment -or simply talking- only against AI agents.
... or to step into deep trouble by venturing into unexplored territory on AI’s shoulders, with no one beside you to sanity-check your judgment even minimally.
8. “Let’s calm down”
The (today still huge) gap between AI hype and legal reality imposes a bit of skepticism towards extremes:
The legal world is firmly anchored to the most conservative structures of Western society. Everything that can change slowly in this context will do so with minimal upheaval: it will happen at glacial speed.
Just look at all those procurators alive and kicking in the era of Lexnet. Or the amazing scene of a pharmacist cutting out a barcode with a cutter and taping it with cellophane onto a piece of paper. Amazing.
“AI” is an extremely generic term that covers very different realities. The changes will wipe out entire industries before others even begin to worry.
Each generation has had a technological crisis: maybe ours is more pronounced but none have been apocalyptic.
The correct takeaway from all this is not, I think, complacency, but remaining skeptical both of ‘nothing will change for me’ and of ‘everything is coming down’.
Lawyers should decode both ‘agendas’: that of those who hold that ‘AI is just a tool, nothing will change’, and the apocalyptic narrative of ‘lawyers are already cold cuts’.
The sensible middle ground lies in evaluating AI’s capabilities against Ethan Mollick’s Best Available Human (BAH) standard, rather than against the ideal lawyer, who, by the way, does not exist.
It is true that AI today surpasses the average lawyer at some things, but Sam Altman will have to row a great deal before automating little things like the administration of justice.
Do you remember that funny feeling the first time you entered your credit card details on the internet? That will be nothing compared to filing a fully automated lawsuit in an important matter. Those will be butterflies in the stomach the size of actual eagles.
9. It’s the system, stupid!
Reshuffle insists, again and again, that we are asking the wrong question: ‘What can AI automate?’ instead of ‘What new systems will emerge, coordinated by AI?’.
The service restructuring that gives title to the book captures a more ambitious reality than task-by-task analysis.
AI will not only change what we lawyers do; it will change how we organize ourselves, and how we provide, regulate and price our services. And it will do so at the same pace as the ecosystems we serve.
“Economics” agrees, with its framework of ‘cascading coordination’: each solved coordination problem unlocks the next layer of complexity.
The first-order effect of AI on law is document generation on steroids.
The second-order effect will be (already is) workflow restructuring.
The third-order effect will be new firm structures, pricing, client relationships.
Later will come new regulatory frameworks, new competitive dynamics between tools and solutions, and native legal-AI disruptors that we cannot (I cannot) yet imagine.
Lawyers who limit themselves to asking ‘How do I use ChatGPT to draft contracts faster?’ are optimizing a system whose days are numbered.
The question is: where do I want to be in a post-AI-generalization world?
10.- We leave for another day…
‘The juniors, the juniors, is anyone thinking about the juniors?‘ and
‘Teaching skills to Claude or not teaching skills to Claude, that’s the question: the prisoner lawyer dilemma‘
Jorge García Herrero
Lawyer and DPO