When humans act like bots, the wrong kind of intelligence is on trial
How copy-paste lawyering makes the case for agentic AI — and what the EU AI Act says about the replacement
By Erwin Sotiri | Jurisconsul | March 2026

The illusion of productivity
Something peculiar is happening inside law offices across Europe and North America. A generation of junior and mid-level lawyers, equipped with large language model subscriptions, has adopted a workflow that looks efficient on the surface but is, on closer inspection, profoundly mechanical: prompt, receive output, paste into document, bill the client. The work is faster, certainly. Whether it is better is another question entirely.
The pattern is by now familiar to anyone supervising legal work product. A research memorandum arrives on time but reads with a peculiar uniformity. The sentence structures are smooth, the tone is confident, yet the analysis lacks the tension of genuine legal reasoning. Jurisdictional nuances are flattened. Qualifications that a careful lawyer would instinctively insert are absent. The hallmarks of LLM-generated prose are visible to the trained eye: a slightly anaemic quality, an absence of hedging, and an unsettling eagerness to please.[1]
This is not, in itself, a crisis. Tools are supposed to accelerate work. The crisis emerges when lawyers stop thinking alongside the tool and begin to think through it, when the act of lawyering collapses into the act of prompting. At that point, the human operator becomes, functionally, a relay station: taking input from a client, feeding it to a model, and returning the model’s output with little more than cosmetic adjustment. The irony should not be lost on anyone. When a £150,000-a-year associate performs work that could be automated end to end, they are not demonstrating the indispensability of human judgement. They are demonstrating its absence.
The acceleration nobody can ignore
The adoption figures speak for themselves. Corporate legal AI uptake more than doubled in a single year—from 23% in 2024 to 52% in 2025.[2]
Thomson Reuters reported that active generative AI usage among legal organisations rose from 14% to 26% year-on-year, with 78% of law firms agreeing that AI will become central to their workflows within five years. Perhaps the most consequential statistic: 64% of in-house legal teams now expect to depend less on outside counsel because of AI capabilities they are building internally.
The investment landscape mirrors this acceleration. The legal AI market doubled from $1.5 billion in 2024 to over $3 billion in 2025.[3] Harvey AI, the sector’s most prominent platform, reached an $8 billion valuation in December 2025, having raised over $760 million in a single year.[4] Legal tech funding as a whole hit $4.3 billion through November 2025, marking a 54% increase over the prior year.
And yet, alongside these figures, a more troubling data point has emerged. A recent survey found that 72% of respondents worried that junior lawyers could struggle to develop legal reasoning and argumentation skills, while 69% identified potential gaps in verification and source-checking abilities. Only 2% believed AI actually strengthens learning. One in-house lawyer observed, with commendable candour, that junior lawyers are missing the ability to access primary sources and verify AI output—skills that were historically acquired through the very tasks that AI now performs in their stead.
Copy-paste lawyering as self-inflicted obsolescence
Consider the workflow of a junior lawyer using a general-purpose LLM for contract review. The lawyer uploads a share purchase agreement, prompts the model with something vaguely instructive (“identify the key risk provisions”), receives a structured output, copies the result into a memorandum template, and submits it for review. The elapsed time: perhaps forty minutes. The cognitive investment: negligible.
Now consider what that same workflow looks like from the perspective of a firm’s management. The value that the junior lawyer has added is, at best, marginal. They have served as a conduit between a client’s document and a language model’s pattern-matching capability. They have not assessed whether the model’s output is jurisdictionally accurate. They have not stress-tested the analysis against the specific commercial context of the transaction. They have not even, in many cases, read the underlying agreement in full.[5]
The uncomfortable truth is this: every time a lawyer performs work in a manner that is indistinguishable from what an automated agent could do, they are making the economic case for their own replacement. They are proving, with each billable hour, that the task does not require human cognition. They are, inadvertently, training the organisation to see their function as a cost centre rather than a value centre.
Stanford’s research on legal AI hallucination rates (which found that even specialised legal tools produce false or misleading information between 17% and 33% of the time) is often cited as evidence that human oversight remains essential.[6] But the force of that argument weakens considerably when the “human oversight” consists of a cursory scan before pressing send. There is a real risk that AI adoption encourages laziness in research and analysis and erodes the critical thinking skills on which the legal profession depends.
The agentic leap: from assistants to autonomous operators
If generative AI gave lawyers a drafting assistant, agentic AI offers something categorically different: a virtual colleague capable of planning multi-step tasks, executing them across platforms, and refining its own output without continuous human prompting.[7]
What agentic systems can already do
The distinction between generative and agentic AI is not merely semantic. Where a generative model responds to a single prompt with a single output, an agentic system can decompose a complex instruction into sub-tasks, determine the sequence of execution, invoke different tools at each stage, and iterate on intermediate results before delivering a final product. In practical terms, an agentic legal AI system could receive an instruction to “review this vendor contract for GDPR compliance risks and prepare a redline with recommended amendments” and proceed, without further human input, to parse the agreement, cross-reference the relevant provisions of the GDPR and applicable national implementation, identify deficiencies, generate tracked-change amendments with explanatory annotations, and present the output for human sign-off.
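The loop is easier to see as code than prose. The sketch below is a minimal illustration of that decompose-execute-iterate pattern, with each decision point logged for later human review; every function in it is a hypothetical placeholder written for this article, not any vendor’s actual API.

```python
# A minimal sketch of the agentic pattern described above: decompose an
# instruction into sub-tasks, execute them in sequence, self-check, and keep
# an audit trail of decision points. Every function below is a hypothetical
# placeholder, not any vendor's actual API.
from dataclasses import dataclass, field


@dataclass
class AuditTrail:
    """Logged decision points, retained for human review and sign-off."""
    steps: list = field(default_factory=list)

    def record(self, step: str, outcome: str) -> None:
        self.steps.append((step, outcome))


# Placeholder sub-tasks (toy logic, purely illustrative).
def parse_agreement(text: str) -> list:
    return [c for c in text.split("\n\n") if c.strip()]

def check_gdpr_compliance(clauses: list) -> list:
    return [c for c in clauses if "personal data" in c.lower()]

def draft_amendments(issues: list) -> str:
    return "\n".join(f"[PROPOSED AMENDMENT] {c}" for c in issues)

def self_review(draft: str) -> str:
    return draft  # a real agent would iterate here before handing off


def review_vendor_contract(contract_text: str):
    """Carry out the multi-step instruction without further prompting,
    then present the result for human sign-off."""
    trail = AuditTrail()
    clauses = parse_agreement(contract_text)
    trail.record("parse", f"{len(clauses)} clauses extracted")
    issues = check_gdpr_compliance(clauses)
    trail.record("compliance_check", f"{len(issues)} clauses flagged")
    redline = self_review(draft_amendments(issues))
    trail.record("redline", "amendments drafted and self-checked")
    return redline, trail  # output awaits human sign-off; it is not sent
```

The point is structural: the sequencing, cross-checking, and logging live in the system itself, which is exactly the work the copy-paste operator skips.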
Harvey AI, the sector’s flagship platform, already employs agentic workflows that use multiple AI models to catch and correct hallucinations in real time, with agents performing self-review, deeper research, and escalation to human experts when needed. Its “Words to Workflow” feature converts natural language instructions into automated multi-step legal processes without technical expertise. The platform is now deployed across 42% of AmLaw 100 firms.
Crucially, these systems are not merely faster. They are structurally different from the copy-paste workflow described above. An agentic system does not simply generate text; it reasons through a sequence of operations, applies domain-specific constraints, cross-checks its own output, and maintains an audit trail of its decision points. It is, in a meaningful sense, performing the cognitive work that the copy-paste lawyer has abdicated.
The economics: what agentic AI actually costs
Cost is the variable that transforms an interesting technology story into a structural shift. The current pricing landscape for legal AI is opaque by design. Most providers, Harvey chief among them, do not publish rates, but the available data permits a rough calculation.[10]
Harvey AI is estimated to cost in the region of $1,000 to $1,200 per lawyer per month at the enterprise tier, with 20-seat minimums implying an annual entry point of approximately $288,000. Previous reporting suggested a lower tier around $500 per lawyer per year, though this appears to predate the LexisNexis integration announced in June 2025, which is expected to push seat costs upwards by roughly a third. CoCounsel, Thomson Reuters’ competing offering, sits in the range of $110 to $400 per month per user. LexisNexis’ AI products and Legora occupy various price points beneath the Harvey ceiling.
For mid-market firms, the aggregate subscription burden is non-trivial. One estimate puts average firm-wide AI spending at $3,600 per month across fragmented subscriptions—$43,200 per year—without full workflow integration. Consolidation is underway, with the Harvey-LexisNexis alliance pointing towards bundled subscription models that collapse multiple licence lines into a single platform fee with an AI uplift.
The economic logic, however, is stark. A newly qualified solicitor in the City of London costs a firm in the order of £150,000 to £180,000 in salary alone, before employer’s national insurance, pension contributions, office costs, training, supervision time, and the inevitable cost of correcting errors. An agentic AI system capable of performing a material portion of that solicitor’s current workflow costs a fraction of that figure.
When the junior’s workflow is primarily copy-paste, and they are not exercising judgement that an agent cannot replicate, the arithmetic becomes unforgiving.
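For the avoidance of doubt, here is the rough arithmetic implied by the figures quoted above, in a short Python sketch. The seat price and minimum are press-reported estimates, not published rates, and the currencies are deliberately left unconverted.

```python
# Back-of-envelope comparison using only the estimates quoted above.
harvey_seat_monthly_usd = 1_200      # estimated enterprise-tier seat price
seats = 20                           # reported minimum commitment
ai_annual_usd = harvey_seat_monthly_usd * seats * 12
print(f"20-seat platform cost: ${ai_annual_usd:,}/year")        # $288,000/year

nq_solicitor_gbp = 150_000           # City NQ salary, lower bound, salary alone
per_seat_usd = ai_annual_usd / seats
print(f"Per seat: ${per_seat_usd:,.0f}/year vs £{nq_solicitor_gbp:,}+ "
      f"per junior lawyer before overheads")                    # $14,400 vs £150,000+
```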
Regulating the virtual worker: what the EU AI Act demands
If agentic AI is to function as a virtual legal worker, it must reckon with the most comprehensive AI regulatory framework in the world: Regulation (EU) 2024/1689, commonly known as the EU AI Act.[11]
Risk classification: where legal AI falls
The AI Act employs a tiered risk-based framework, imposing obligations that scale with the severity of potential harm. For legal AI, the critical provision is Annex III, point 8(a), which classifies as high-risk any “AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.”[12]
A narrow reading of this provision might suggest that only AI systems deployed within courts or tribunals trigger the high-risk classification. However, the category heading (namely, “administration of justice and democratic processes”) and the broader policy rationale of the Act suggest that AI systems used in private legal practice for tasks that materially influence legal outcomes could, depending on their specific function, fall within or at the boundary of this classification. An agentic system that autonomously researches law, interprets contractual provisions, and produces legal advice for client consumption is performing precisely the functions described in Annex III, point 8(a); the only distinction is that the “authority” it assists is a private practitioner rather than a judge.
This boundary will be tested in the coming years. The European Commission was required to publish practical guidance with examples of high-risk and non-high-risk AI systems by February 2026, but reportedly missed that deadline. Harmonised technical standards, being developed by CEN and CENELEC, are not expected before the end of 2026. The Digital Omnibus package, proposed in November 2025, contemplates a conditional delay of certain Annex III obligations to no later than December 2027, pending the availability of adequate support measures.[13]
Obligations for providers and deployers of high-risk systems
If a legal AI system is classified as high-risk—whether under Annex III or by virtue of a provider’s own assessment—the AI Act imposes a demanding compliance regime.[14] Providers must establish a risk management system spanning the full lifecycle of the AI system. Training, validation, and testing data must meet quality criteria ensuring relevance, representativeness, and, to the best extent possible, freedom from errors. Technical documentation must be prepared to a standard sufficient for regulatory assessment. The system must support automatic event logging to facilitate traceability and post-market monitoring. Instructions for use must be provided to deployers in clear and comprehensible terms. The system must be designed for effective human oversight, including the ability for the assigned human to understand the system’s capacities and limitations, to correctly interpret its output, and to override or disregard its decisions.
Deployers, meaning the law firms or legal departments that put the AI system into service, carry their own obligations. They must use the system in accordance with the provider’s instructions, assign human oversight to individuals with the requisite competence and authority, monitor the system’s operation for risks, and report serious incidents to market surveillance authorities. Where the deployer is an employer, Article 26(7) requires prior information and consultation of employee representative bodies before deploying a high-risk system in the workplace.[15]
Transparency, auditability, and the “show me your workflow” standard
Even where a legal AI system falls short of the high-risk threshold, Article 50 of the AI Act imposes transparency obligations that become fully applicable on 2 August 2026. Users interacting with an AI system must be informed that they are doing so, a requirement with direct implications for client-facing legal work. If a law firm deploys an agentic system that drafts client correspondence or prepares legal opinions, the client may have a right to know that the output was generated, in whole or in part, by AI.
The emerging consensus among legal technology leaders is captured in a phrase attributed to David Silbert of DocuSign: “show me your guardrails” will increasingly mean “show me your workflow”. The most successful deployments of agentic AI in legal practice will not be those that maximise autonomy, but those that constrain it through structured, logged, auditable processes with clear human intervention points. This is not merely good practice: under the AI Act, it is moving rapidly towards being a legal requirement.
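What such a constrained, auditable workflow might look like in practice is sketched below: every agent action is appended to an event log, and a designated step hard-stops for human approval. The structure and field names are assumptions made for illustration, drawn neither from the Act nor from any product.

```python
# Sketch of the "show me your workflow" standard: logged actions plus a hard
# human intervention point. Illustrative assumptions throughout.
import json
from datetime import datetime, timezone

EVENT_LOG = []  # in production: append-only, tamper-evident storage


def log_event(actor: str, action: str, detail: str) -> None:
    EVENT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "detail": detail,
    })


def human_gate(step: str) -> bool:
    """Intervention point: a competent, authorised reviewer must approve
    before the workflow proceeds (cf. the Act's human-oversight design duty)."""
    log_event("human_reviewer", "review_requested", step)
    approved = input(f"Approve step '{step}'? [y/N] ").strip().lower() == "y"
    log_event("human_reviewer", "approved" if approved else "rejected", step)
    return approved


draft = "Draft client opinion ..."   # produced by an upstream drafting agent
log_event("agent", "draft_generated", "client opinion v1")
if human_gate("send_client_opinion"):
    log_event("system", "dispatched", "opinion sent with AI-use disclosure")
print(json.dumps(EVENT_LOG, indent=2))   # the auditable trail, on demand
```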
Penalties for non-compliance
The AI Act’s enforcement regime is punitive by design. Violations involving prohibited AI practices attract fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher. Non-compliance with high-risk obligations can result in penalties of up to €15 million or 3% of worldwide turnover. For a mid-sized law firm, even the lower threshold represents an existential risk.[16]
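The “whichever is higher” mechanics reduce to a one-line calculation. The sketch below applies the Article 99 caps quoted above to a hypothetical firm; the turnover figure is invented for illustration.

```python
# Article 99 caps as quoted above; the firm's turnover is a made-up example.
def fine_cap(turnover_eur: float, prohibited_practice: bool) -> float:
    fixed, pct = (35e6, 0.07) if prohibited_practice else (15e6, 0.03)
    return max(fixed, pct * turnover_eur)  # "whichever is higher"

firm_turnover = 80e6  # hypothetical mid-sized firm: EUR 80m worldwide turnover
print(f"High-risk non-compliance cap: EUR {fine_cap(firm_turnover, False):,.0f}")  # 15,000,000
print(f"Prohibited-practice cap:      EUR {fine_cap(firm_turnover, True):,.0f}")   # 35,000,000
```

At that scale the fixed floors dominate the percentage caps, which is the sense in which even the lower threshold is existential for a mid-sized firm.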
The paradox: human lawyers must be more human, not less
Here is the central paradox. Harvard Law School’s Center on the Legal Profession found that none of the AmLaw 100 firms it interviewed anticipate reducing the headcount of practising lawyers, even as some report 100-fold productivity gains on specific tasks.[17] The reason is not sentimental attachment to the old ways. It is that the tasks AI performs well are precisely the tasks that generated the least defensible value in the first place. The tasks that remain (strategic judgement, ethical reasoning, client counselling, courtroom advocacy, negotiation under uncertainty) are the tasks that require human cognition in its fullest sense.
The lawyer who responds to the agentic revolution by becoming more mechanical, retreating further into copy-paste routines, is moving in exactly the wrong direction. The lawyer who responds by becoming more distinctively human, exercising the kinds of judgement, creativity, and contextual reasoning that no agent can replicate, is the one who will thrive. The legal profession has always been, at its core, a profession of judgement. AI does not change that. It makes it more urgent.
Sean Fitzpatrick, CEO of LexisNexis, has predicted that agentic AI will become embedded in the core operating model of leading law firms, shifting legal AI from handling single actions to coordinated, multi-agent systems managing complex workflows. If that prediction holds, the legal professionals who will be most valued are not those who can prompt a model efficiently. They are those who can do what the model cannot: think under conditions of genuine uncertainty, exercise ethical discretion, and bring the weight of human experience to bear on problems that resist algorithmic resolution.
Judging the right kind of intelligence
When humans act like bots, it does make you wonder whether we have been judging the wrong kind of intelligence all along. The legal profession’s instinctive defensiveness about AI, the insistence that “human oversight” makes everything safe, rings hollow when that oversight amounts to a five-second scan of machine-generated text. If the oversight is performative, it is not oversight. It is theatre.
Agentic AI is not a distant prospect. It is being deployed today, at scale, by the most sophisticated legal operations in the world. Its costs are falling. Its capabilities are expanding. Its regulatory framework (the EU AI Act) is crystallising, with high-risk obligations for legal AI systems taking shape even as the European Commission scrambles to finalise the implementing standards.
The question is not whether AI agents will perform legal work. They already do. The question is whether the lawyers who remain will justify their presence by exercising the judgement, creativity, and ethical responsibility that distinguish professional practice from mechanical processing. For those who have reduced their craft to copy-paste, the answer may be uncomfortable. For those who have used AI as it ought to be used, as a tool that amplifies human capability rather than substituting for it, the future is extraordinarily promising.
The wrong kind of intelligence is the intelligence that does not think. Whether that intelligence is artificial or human is, ultimately, beside the point.
Disclaimer
This article is published for informational and educational purposes only. It does not constitute legal advice and should not be relied upon as such. The views expressed are those of the author and do not necessarily reflect the position of Jurisconsul or any of its clients. Readers with specific legal questions should obtain advice tailored to their particular circumstances.
[1]Jack Shepherd, 'Generative AI in the Legal Industry: Is Accuracy Everything?' (Medium, November 2024).
[2]ACC/Everlaw GenAI Survey 2025; Thomson Reuters, 'AI and the Legal Profession' (2025).
[3]Markets and Markets, 'Legal AI Market Report' (2025): market doubled from $1.5 billion (2024) to over $3 billion (2025).
[4]Harvey AI: $8 billion valuation as of December 2025, with over $760 million raised in a single year. Crunchbase, legal tech funding 2025.
[6]Stanford University, 'Legal AI Hallucination Study' (2024): legal AI tools hallucinate between 17% and 33% of the time.
[7]Bloomberg Law, 'Agentic AI Is the Hurdle Law Firms Must Clear in 2026' (November 2025).
[8]Gartner, 'Top Strategic Technology Trends 2025: Agentic AI' (October 2025).
[9]LexisNexis next-generation Protégé General AI deploys four specialised agents—an orchestrator, legal research agent, web search agent, and customer document agent—collaborating on complex workflows. Jones Walker LLP, 'Ten AI Predictions for 2026' (2026).
[10]Harvey AI reportedly commands over $1,000 per user per month; CoCounsel (Thomson Reuters) ranges $110–$400/month per user. See Aline.co, '7 Legal Tech Predictions for 2026' (January 2026).
[11]AI Act, Article 6(2) and Annex III. See also European Commission, 'AI Act—Shaping Europe’s Digital Future' (updated 2026).
[12]Regulation (EU) 2024/1689 (the AI Act), Annex III, point 8(a): 'AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.'
[13]European Commission, Digital Omnibus Package (19 November 2025), proposing conditional delay of certain Annex III high-risk obligations to no later than December 2027.
[14]AI Act, Articles 9–15 (risk management, data governance, transparency, human oversight, accuracy, robustness, cybersecurity).
[15]AI Act, Article 26(7): requirement to inform and consult employee representative bodies prior to deploying high-risk AI systems.
[16]AI Act, Article 99. Fines of up to €35 million or 7% of total worldwide annual turnover for prohibited practices; up to €15 million or 3% for non-compliance with high-risk obligations.
[17]Harvard Law School, Center on the Legal Profession (2025): none of the AmLaw 100 firms interviewed anticipate reducing headcount of practising lawyers.