Agentic law in the European Union: Governing autonomous AI agents
- Erwin SOTIRI
- 2 hours ago
- 16 min read
I came up with the expression “Agentic law” that could be framed as a proposed legal field focused on regulating the behaviour of AI agents as operational actors in digital and physical environments, rather than regulating human behaviour mediated by AI.
In practical terms, it targets systems capable of initiating actions (for explicit or implicit objectives), adapting post-deployment, interacting with other systems, and affecting outcomes without step-by-step human direction, which aligns with the EU’s baseline definition of an “AI system” as operating with varying autonomy and potentially adaptiveness after deployment.
The main governance problem is not whether an agent is a legal person. EU policy work on AI civil liability has explicitly rejected legal personality for AI systems and instead points to liability allocation across the value chain (builders, deployers, controllers of risk), coupled with proportional rules that deal with opacity, connectivity and autonomy. The core structural challenge is that existing EU regimes were largely drafted for (i) products, (ii) services, or (iii) human decision-making contexts, while agents increasingly behave as delegated executors: triggering transactions, selecting counterparties, optimising across objectives, coordinating in multi-agent settings, and producing emergent behaviour.
Why there is a need to regulate Agentic behaviour ?
AI agents are autonomously executing price-fixing cartels and acting as confused deputies for attackers, and our global regulatory frameworks are still treating them like glorified autocomplete. Agentic AI represents a structural shift in how organizations use artificial intelligence, moving from systems that merely generate insights to autonomous actors that plan, call tools, and execute multi-step workflows directly within live business processes.
The difference is execution.
Traditional AI governance was built for models that produce outputs, where human operators review predictions and decide the next move. Agentic systems introduce a radically different operating model by acting inside live environments, wielding operational authority to initiate transactions or update records without waiting for human confirmation. As these agents scale in capital intensity and handle immense inference demand, the distance between their intended utility and absolute chaos becomes dangerously thin.
This isn't a theoretical future; behavioural risk is already evidenced in litigation and enforcement. We are seeing automation pipelines produce legally significant effects while remaining completely opaque to the affected persons. Look at the CJEU SCHUFA case, where the court confirmed that an automated probability score playing a decisive role in a credit refusal actually constitutes an automated "decision" under the GDPR. Or look at the Dun & Bradstreet case, where courts had to balance the demand for "meaningful information" about algorithmic logic against corporate trade secrets. You build a massive architecture to optimize your annual recurring revenue, and then you get hit with a regulatory rug pull because you cannot explain your agent's reasoning path.
Here's the thing: When multiple agents operate within the same environment, their actions intersect in ways that were never explicitly designed, triggering emergent multi-agent effects that can completely divorce system-level behavior from the intent of any single agent. Market-structure harms are already evidenced through algorithmic coordination. In the UK CMA enforcement case regarding the online sales of posters and frames, two sellers deployed automated repricing software to seamlessly implement and monitor a non-undercutting cartel. We literally have machines vibe working their way into anti-competitive strategy at scale. The OECD has explicitly warned that algorithms challenge traditional antitrust concepts and complicate the detection of collusion. The value is real the chaos is real and the distance between them is the width of a well-written specification.
So what does this tell us about where AI is heading?
It tells us the threat surface has fundamentally mutated. Tool-using agent architectures—models trained to decide which tools to call, when to call them, and with what parameters—transform these systems into operational actors rather than pure predictors. Cyber-security evidence shows autonomous agents create “confused deputy” risks by design. Security researchers have demonstrated that "indirect prompt injection" is a highly practical exploit class. An attacker embeds malicious instructions into a seemingly benign webpage; the agent retrieves it, parses the text as arbitrary code, and suddenly your highly capable enterprise agent is hijacked to exfiltrate data or trigger unauthorized actions. The adversary poisons the context window, exploiting emergent zero-day vulnerabilities built into the very premise of autonomous execution.
We used to worry about a chatbot giving a bad answer. Now we worry about it quietly draining a corporate bank account.
Now let me be honest about where it works and where it doesn't. You might assume the European Union's aggressive tech oversight has this handled, but the EU legislative evolution itself evidences massive gaps. The European Commission’s AI Liability Directive proposal explicitly recognized that AI opacity creates extreme proof barriers, yet its subsequent withdrawal leaves a gaping void in non-contractual liability for AI harms. The revised Product Liability Directive (PLD) modernized strict liability and evidence disclosure, but it functions primarily ex post to address product defects—it is not an operational governance code for autonomous behavior. The AI Act mandates lifecycle duties for high-risk systems, but it fails to supply horizontal rules for delegation credentials, machine-to-machine contracting, or tamper-evident behavioural logs for autonomous action systems outside of strict high-risk silos. Even the GDPR, the crown jewel of privacy law, focuses on personal data processing and does not create a general regime to govern agent actions that impact market manipulation, property, or cyber-physical safety. There simply isn't enough meat on the bone in the current regulatory stack to handle the attention economics and operational risks of delegated artificial agency.
If you're sitting here thinking 'Oh my gosh this is too much It is coming too fast.' You're not alone.
But the infrastructure is shifting underneath you, and hoping legacy laws will protect your enterprise is a failing strategy. Establishing runtime operational control, mapping identity boundaries, and enforcing continuous authorization aren't luxuries—they are table stakes. You need to act before your own autonomous agent decides to rewrite your market compliance strategy on a Tuesday afternoon.
What is needed to put in place an Agentic law system ?
A rigorous “Agentic law” programme in the EU would likely bundle five policy moves:
Codify agency attribution and authority for machine-executed actions: when an agent’s acts bind a principal, when an agent’s access token is treated as delegation, and how to handle error, spoofing, authority limits, and revocation.
Align liability and evidence around agent operations: strict or quasi-strict liability for defined “high-autonomy, high-impact” deployments; legally mandated, tamper-evident logging and audit trails; and evidential presumptions to address information asymmetry, building on the EU’s modernised Product Liability Directive approach to disclosure and presumptions in technically complex cases. 3
Integrate safety-by-design and cybersecurity-by-design into “agent operation”, using the AI Act’s high-risk requirements (risk management, record-keeping/logs, human oversight) as the internal-market baseline, and the Cyber Resilience Act’s vulnerability-handling discipline (including coordinated vulnerability disclosure, SBOM-style component identification, CE-marking pathways for software) as the resilience baseline.
Operationalise cross-border supervision and interoperability (including when outputs are “used in the Union” even if the provider is outside), consistent with the AI Act’s territorial reach. The EU also now participates in an international legally binding framework via the Council of Europe AI Convention, and has authorised EU signing, providing a clear international coordination vector.
Create market-integrity guardrails for agent-to-agent competition and collusion risk, complemented by the EU’s competition framework and the Court’s confirmation that privacy-law compliance may be a relevant “clue” in competition analysis, with mandatory cooperation between regulators where appropriate.
A plausible 3–5 year route (2026–2030) is: near-term soft-law and standardisation for agent logging, delegation credentials, safety cases, and incident reporting; medium-term sectoral overlays (finance, health, critical infrastructure); and, if fragmentation/persistent harms emerge, a horizontal “Agentic Systems Regulation” as an internal market measure under Article 114 TFEU, with governance anchored in the AI Act architecture and cybersecurity certification ecosystems.
For law firms and advisors, the opportunity is a new, tool-supported compliance discipline combining AI regulatory engineering, product compliance, data protection, cybersecurity, consumer and competition. The service model will skew toward subscriptions/retainers and compliance-as-a-service, with embedded testing, continuous monitoring, and evidential readiness rather than one-off legal opinions.
The Foundations of Agentic Law in EU Regulatory Frameworks
The current, binding EU baseline is that AI systems are regulated as market objects/activities, with obligations imposed on providers and deployers, and with explicit attention to autonomy, adaptiveness, and downstream influence on environments. The AI Act also already anticipates cross-border operation by applying to third-country providers/deployers where output is used in the Union. Its staged applicability is fixed: general provisions and prohibitions apply earlier, with general application from 2 August 2026 and further staging for specific obligations.
On auditability and operational control, the AI Act requires high-risk systems to support automatic logging over the lifetime of the system, and links logging to traceability, post-market monitoring, risk identification and oversight. It requires human oversight measures commensurate with risk, autonomy, and context of use. These are foundational building blocks for “agentic” regulation because they directly target behavioural traceability, lifecycle governance, and controllability.
On product liability and evidential asymmetry, the revised Product Liability Directive explicitly clarifies that software (including AI systems) is a “product” and that suppliers can be treated as manufacturers, while distinguishing software from mere information content. It introduces court-ordered disclosure of relevant evidence when claim plausibility is shown and empowers confidentiality protections for trade secrets. It also mandates presumptions of defectiveness and/or causation where proving is excessively difficult due to technical or scientific complexity. The Directive applies to products placed on the market or put into service after 9 December 2026, and Member States must transpose by 9 December 2026. Its evaluation clause expressly includes attention to the availability of product liability insurance.
On cybersecurity and secure operations, the Cyber Resilience Act establishes a harmonised regime for “products with digital elements”, with vulnerability handling, notification, coordinated vulnerability disclosure, and supply-chain visibility via component identification documentation (including SBOM concepts). It also operationalises CE marking for software-form products and links certification processes to essential cybersecurity requirements. These provisions are directly relevant to agent deployments that act autonomously because agent behaviour is constrained by the integrity of code, dependencies, updates, and vulnerability-handling processes.
On data protection and transparency, the GDPR constrains automated decision-making and reinforces transparency rights. The Court has clarified that automated scoring that plays a determining role in an outcome can itself be a “decision” producing legal or similarly significant effects under Article 22 GDPR. The Court has also reinforced access rights under Article 15(1)(h) GDPR, requiring “meaningful information” about the logic involved and balancing this against trade secrets and the rights of others on a case-by-case basis (rather than blanket exclusion). The endorsed guidance on automated decision-making/profiling provides an interpretative baseline that remains widely used by supervisory authorities.
On agency/personhood, EU policy has moved away from granting legal personality to AI systems: a 2020 resolution on a civil liability regime for AI states it is not necessary to give legal personality to AI systems and that required changes should start with clarifying that AI systems have neither legal personality nor human conscience, emphasising liability allocation across the value chain controlling risk. Earlier Parliament work on robotics included discussion of special status concepts, but later liability-focused policy narrowed toward risk and control allocation rather than personhood.
On international alignment, the first legally binding international AI treaty has been opened for signature under the Council of Europe Framework Convention on AI and human rights, democracy and the rule of law, and the EU has adopted a Council Decision authorising signature on the Union’s behalf, citing internal-market legal basis logic in conjunction with treaty conclusion procedures. Complementary global baselines include the updated OECD AI Principles, explicitly positioned for interoperability and responsiveness to general-purpose AI developments.
A critical “negative” fact for roadmap design is that the Commission’s 2022 proposal for an AI Liability Directive (focused on non-contractual liability and evidentiary measures) shows as “proposal withdrawn” in the legislative procedure record, signalling that near-term “Agentic law” development will more plausibly build on the AI Act, the revised Product Liability Directive, and sectoral frameworks rather than reviving that exact instrument.
To reach cross-border outcomes in litigation and enforcement, the EU’s private international law instruments remain central: Brussels I Recast for jurisdiction and enforcement in civil and commercial matters, and Rome I/Rome II for applicable law in contractual and non-contractual obligations.

Assumptions and working definitions
Jurisdiction is assumed EU-wide. Cross-border and international enforcement aspects are treated as open where they depend on Member State private law, third-country recognition, and treaty participation.
“AI agent” is used here as a functional subset of AI systems: a machine-based system that:
(i) operates with some autonomy,
(ii) can adapt after deployment, and
(iii) infers how to generate outputs that influence environments, but additionally
(iv) is configured to initiate actions (digital or physical) through tools, permissions, workflows or actuators.
The definition is anchored to the AI Act’s concept of an “AI system” and adds an action-oriented criterion for “agentic” scope.
“Agentic behaviour” means the sequence of actions and interactions an agent performs to progress toward objectives, including tool invocation, message passing, state changes, and transaction initiation. This framing is designed to be legally operable because the AI Act already treats lifecycle control, logs, and oversight as compliance objects;
“Agentic law” would extend these into behavioural governance for action-taking systems.
“Autonomy threshold” is treated as a regulatory trigger concept rather than a metaphysical claim. Consistent with EU policy rejecting AI personality, the threshold determines when additional obligations attach to operators and providers (not to the system itself).
A four-level working autonomy model (illustrative, for legal design) is assumed:
Level A: recommendation-only (no external actions).
Level B: action-execution with per-action approval.
Level C: bounded autonomy (pre-authorised actions within scope, budgets, and time).
Level D: open-ended autonomy (self-directed planning and tool use across dynamic contexts).
The model is intended to map onto risk and controllability concepts already present in EU AI governance (risk management, foreseeable misuse, human oversight, logs).
The purpose and objectives of Agentic law
Liability and accountability for autonomous action. The central liability problem for agents is behavioural opacity coupled with delegated execution: the immediate “actor” is software, but legal responsibility must be allocated to humans and firms controlling risk. EU liability policy explicitly recognises that opacity, connectivity and autonomy can make tracing harmful actions to specific human inputs difficult; the proposed solution direction is to assign liability to persons in the value chain who create, maintain, or control the risk. The revised Product Liability Directive further supports this direction by treating software as a product and by introducing disclosure and presumptions that address technical complexity, reducing the risk that victims are uncompensated because evidence is inaccessible.
Concrete example.
An agent with delegated payment authority executes a series of purchases triggered by adversarial prompts or compromised toolchains. The behavioural cause is a sequence of tool calls; the legal question is whether the operator, provider, or integrator is liable, and what evidence is required to prove defectiveness, causation, or failure of oversight. This maps directly to the AI Act’s log and oversight requirements for high-risk systems and to PLD disclosure/presumptions for complex systems.
Recommended principle.
Single-point “operator” accountability for authorised action + recourse along the value chain. Agentic law can borrow the structural idea of making the party best placed to manage operational risk the primary defendant, while enabling contribution/recourse and contractually structured allocations, consistent with joint and several concepts in the PLD framework.
Statutory language snippet (illustrative)
“‘Agent operator’ means the natural or legal person who deploys an agentic system and defines or controls its permissions, budgets, or operational constraints. The agent operator shall be liable for harm caused by actions taken by the agentic system within the scope of such permissions, subject to rights of recourse against other economic operators responsible for defects or non-compliance.”
Agency attribution and contractual binding effect. Agents increasingly “act” in the market: placing orders, negotiating terms, selecting counterparties, and triggering execution. EU consumer and digital content rules allocate responsibilities in digital supply contexts (conformity, remedies, information duties), but they do not provide a horizontal rule-set for when machine-executed actions are attributed to principals, especially when agents interact with other agents. Cross-border, Rome I and Brussels I Recast magnify the need for predictable attribution because contract formation and forum selection can be contested when actions are executed by agents operating across jurisdictions.
Concrete example.
A consumer-facing agent enters a subscription contract on behalf of a user after interpreting ambiguous instructions. Consumer law may treat the trader’s pre-contract information duties and fairness of terms as key, but there remains a “who consented/authorised” problem where the parties dispute authority.
Recommended principle.
Attribution by cryptographic delegation and reasonable reliance. Define a “delegation credential” (identity, scope, time, revocation) and treat the agent’s acts as acts of the principal where the counterparty reasonably relies on valid delegation within scope.
Statutory language snippet (illustrative).
"An action taken by an agentic system shall be attributed to the principal where the action is authenticated by a valid delegation credential issued or authorised by the principal, and where the counterparty reasonably relies on that credential as indicating authority within its stated scope."
Autonomy thresholds and controllability. The AI Act already defines AI systems by autonomy and adaptiveness and varies obligations by risk category. Agentic law would specify autonomy thresholds because the legal risk turns on the system initiating actions with material effects (financial, safety, fundamental rights, cybersecurity). This aligns with the AI Act’s express requirement that human oversight measures be commensurate with the risks and the level of autonomy.
Concrete example.
A multi-agent workflow for recruitment or credit evaluation uses autonomous sub-agents to gather data, rank candidates, and trigger rejections. The GDPR and the Court’s Article 22 jurisprudence show that “decisions” can crystallise at earlier stages where automated scoring is determinative, which makes autonomy thresholds particularly relevant in “decision pipelines”.
Recommended principle
Operational autonomy triggers enhanced duties. For systems at Level C/D, impose mandatory safety-case documentation, continuous monitoring, stronger logging granularity, and explicit kill-switch requirements.
Safety, transparency, evidence, and auditability. Agentic systems require forensic readiness because disputes will turn on what the agent did, when, using which tools, and under which constraints. The AI Act requires record-keeping/logs for high-risk systems, and links logs to risk, post-market monitoring and oversight. The revised Product Liability Directive directly addresses disclosure of evidence and empowers courts to presume defectiveness/causation in complex cases. The cyber resilience framework requires structured vulnerability handling, coordinated disclosure, and supply-chain visibility, supporting agent auditability through secure updates and component traceability.
Concrete example. In a dispute over wrongful denial of a service (for example, telecom subscription denial based on credit scoring), the Court has emphasised meaningful access to logic information and case-by-case balancing against trade secrets. Agentic operations add a second layer: toolchain and prompt provenance, and whether the system’s behaviour conformed to declared constraints.
Recommended principle.
Tamper-evident action logs as a legal minimum for high-autonomy agents. Logs should be (i) sufficiently granular to reconstruct action sequences; (ii) integrity-protected; and (iii) retention-governed in a way consistent with data protection and purpose limitation.
Statutory language snippet (illustrative)
“High-autonomy agentic systems shall generate and preserve action logs recording tool invocations, external system calls, authorisation checks, and state transitions necessary to reconstruct material actions. Logs shall be integrity-protected and made available to competent authorities and courts under conditions preserving confidentiality and trade secrets.”
Rights and obligations of agents. EU liability policy indicates that reform should begin by clarifying AI systems have neither legal personality nor human conscience, making “agent rights” a poor fit for EU legal structure in the short term. Agentic law would instead focus on rights of affected persons (consumers, workers, citizens) and duties of operators/providers, consistent with the AI Act’s protection goals (health, safety, fundamental rights).
Recommended principle
No agent personhood; functional identity instead. Require a stable technical identity for the agent instance, delegation credentials, and a responsible operator of record, without creating agent legal subjectivity.
Enforcement architecture and cross-border operation. The AI Act embeds market surveillance and governance mechanisms and provides staged implementation and scheduled evaluation, including consideration of whether a Union agency is needed to resolve shortcomings in enforcement structure. The cross-border rule that third-country providers are in scope where outputs are used in the Union creates a template that agentic law can replicate for agentic operations and delegation credentials. Private international law remains the litigation backbone for cross-border private disputes (jurisdiction and applicable law).
Concrete example
A non-EU provider offers an agent service used by EU businesses; an incident occurs in one Member State but the agent’s compute, data stores, and vendor chain are distributed. Enforcement will require a responsible EU representative (a pattern already used for high-risk systems under the AI Act) and clear evidence access rules.

Standards, certification, and insurance. Certification is already a core EU tool: AI Act presumption-of-conformity mechanisms via harmonised standards and common specifications, and the Cybersecurity Act’s certification framework via ENISA-driven schemes. The Cyber Resilience Act adds CE marking pathways for software products and ties conformity assessment to vulnerability handling. The PLD explicitly anticipates attention to insurance availability in evaluations. Agentic law can convert “agent assurance” into a certifiable artefact: safety case, logging profile, delegation credential scheme, and incident response maturity.
Emergent behaviour and multi-agent interactions. The AI Act concept of “reasonably foreseeable misuse” includes interaction with other systems, including other AI systems, which is an EU-recognised basis for imposing lifecycle and risk-management duties. Agentic law would make this explicit: multi-agent systems can produce emergent strategies and externalities not reducible to any single agent’s objective function. This poses (i) causation complexity, (ii) attribution complexity, and (iii) market manipulation/collusion risks.
Concrete example.
Multiple pricing agents interact in a marketplace and converge on supra-competitive pricing without explicit coordination, raising an enforcement question under competition law and an evidential question about how behaviour emerged. The EU’s competition framework and the Court’s recognition that compliance with other norms can be relevant in competition assessments supports integrated enforcement thinking.
Recommended principle
System-of-systems accountability. For multi-agent deployments above a threshold (scale, autonomy, market impact), require documented interaction risk analysis, monitoring of emergent patterns, and a duty to intervene when systemic risk indicators trigger.
EU legal foundations and gaps
The EU’s legal basis logic for additional horizontal rules will typically track internal market harmonisation (Article 114 TFEU) and must satisfy conferral, subsidiarity, and proportionality. Fundamental rights framing will remain anchored in the Charter (privacy/data protection, non-discrimination, due process), which is also the explicit protection objective referenced in the AI Act.

The transition from "Move fast and break things" era to the "Move autonomously and log everything"
Every regulator will be at some point trying to govern multi-agent systems that autonomously rewrite supply chains using frameworks originally designed to regulate blenders and toasters. The AI Act itself contains structured timelines for codes of practice and staged application, signalling that the EU expects iterative, standardisation-driven maturation, not a one-shot codification. Agentic law isn't going to drop from the sky as a fully formed regulation; it is designed as a strategic overlay that exploits existing EU governance machinery—from market surveillance and conformity assessment to massive certification ecosystems. And it only mutates into a hard, horizontal statute if soft-law and sectoral measures fail to prevent total market fragmentation and algorithmic harm. The value is real, the chaos is real and the distance between them is the width of a well-written specification.
So what does this tell us about where AI is heading?
It tells us the regulatory threat surface is evolving in distinct phases, and the first wave is soft law. We are looking at Commission and supervisory guidance for “agentic systems” that will define table stakes like delegation credentials, logging profiles, and standard clauses, completely aligned to AI Act governance and GDPR automated decision-making rules.
Then comes the sectoral rug pull. Regulators are going to weave "agentic operations" directly into finance through DORA operational resilience, into cybersecurity via NIS2 and the CRA, and into critical infrastructure by defining these autonomous workflows as strictly regulated outsourcing or high-risk functions.
However, if those fail to contain the emergent behaviors of agents vibe working their way through the internal market, the final boss is a new horizontal statute: the “Agentic Systems Regulation”. This type of framework defines agentic systems, sets rigid autonomy-based classes, standardizes delegation credentials, mandates tamper-evident action logging for high-autonomy deployments, and creates minimum attribution rules for contracts and harms. It references the AI Act, CRA, and PLD mechanisms for conformity and redress to ensure there is enough meat on the bone for regulators to actually prosecute. Barely 30 minutes to deploy a multi-agent workflow. And now we're looking at a decade of mandatory compliance monitoring.
Now let me be honest about where it works and where it doesn't.
For agency oversight, the EU will either aggressively expand the mandates of existing AI Act institutions or they will spin up a highly specialised supervisory node entirely dedicated to "agentic operations". The AI Act itself already anticipates the evaluation of enforcement structures and the possible need for an entirely new EU agency to wrangle these systems. Then they hit you with the certification regime. You have to adapt EU cybersecurity certification and AI conformity assessment pathways into a massive, combined “agent assurance” certification. We are talking mandatory compliance for delegation credential schemes, granular logging profiles, resilience profiles that require SBOM-style component identification, and rigorous safety-case reviews.
And because the capital intensity of these high-autonomy agents creates massive financial exposure, expect targeted mandatory insurance for defined “high-autonomy agent operators”. This is heavily informed by the PLD’s focus on insurance availability and the broader policy push toward establishing predictable routes for redress and consumer confidence.


