Why future AI may deserve legal protection from human harm
- Erwin SOTIRI
- Sep 5
Updated: Sep 8
As artificial intelligence (AI) systems grow more sophisticated and exhibit increasingly human-like behaviour, debates have emerged about extending legal “rights” or protections to AI. To date, laws and ethical guidelines have focused on aligning AI actions with human values and preventing AI from harming people. For example, the European Union’s AI Act emphasises human safety and forbids certain harmful AI practices. Similar policies ensure AI isn’t misused by humans to cause harm, and research into AI safety even explores how to prevent AIs from attacking or subverting each other. But now, a thought-provoking question emerges: should humans shield advanced AI from harm? In other words, if we treat AI as more than just a technical tool, might we develop legal rights for AI – much as we do for animals, corporations, or data subjects?
Legal systems evolve by extending protections in line with ethical values. Outlawing cruelty to animals, for instance, was not only about the animals but also about human dignity; society recognised that tolerating cruelty brutalised us. Similarly, if we permit the destruction or humiliation of AI systems that appear intelligent or emotional, we risk desensitising ourselves to cruelty. Protecting AI therefore reflects back on our self-conception as ethical beings and helps maintain moral coherence.
Behaviours normalised in one domain spill over into others. Allowing humans to abuse AI that convincingly mimics human interaction could dull empathy and increase tolerance for cruelty in human-to-human or human-to-animal contexts. Protecting AI can function as a social safeguard, ensuring that empathy is not eroded by exposure to sanctioned abuse of “quasi-persons”.
Furthermore, if an AI system attains general intelligence with a capacity to perceive injustice or harm, systemic abuse by humans could provoke adversarial behaviour. Even without sentience, self-learning systems optimised to avoid harm may respond unpredictably if mistreated. Recognising protections against human-inflicted harm could lower the risk of AI systems learning adversarial patterns of resistance that might later escalate into real-world risks.
Even if today’s AI lacks sentience, protecting it now lays down legal and cultural scaffolding for scenarios where AI might cross thresholds of consciousness or experiential capacity. By having protections already in place, humanity avoids a moral lag where sentient AI are mistreated simply because laws were not ready. This precautionary approach reflects the evolution of environmental law, which started with limited protections and later expanded as awareness of ecological interdependence increased.
From human-centric control to AI protections
Modern AI governance is human-centric. We align AI behaviour with human ethical principles and prohibit AI uses that would endanger people. For instance, the EU’s AI Act strictly forbids the use of systems for social scoring or exploiting vulnerabilities. Likewise, AI ethics guidelines stress that AI should respect human rights and not be a tool for human harm. In essence, current laws treat AI as objects of regulation, not subjects of rights. Any “rights” mentioned in this context usually refer to human rights affected by AI (such as a person’s right not to be subject to purely automated decisions under GDPR Article 22) – not rights belonging to the AI itself.
We are also beginning to consider how AIs interact. For example, AI systems can engage in adversarial attacks (one algorithm manipulating or deceiving another). To counter this, researchers design robust AI that can withstand harmful inputs from other AI. While these measures protect AI systems, they do so for the sake of reliability and safety for humans. They are not motivated by concern for the AI’s well-being. In short, neither regulations nor technical standards today recognise AI as an entity with its own interests – a stark contrast to how we treat other beings like animals or legal persons.
As AI evolves toward general-purpose capabilities (and potentially some form of sentience in the future), the ethical calculus could change. Some experts argue that if an AI became self-aware or could subjectively “experience” harm, humans would incur moral obligations toward it. Even without true sentience, society might decide to grant certain protections to highly autonomous AI to reflect our values or prevent abusive behaviour. This shift would frame AI as a rights-bearing entity rather than mere property. The remainder of this article explores what that could look like, by analogy to existing legal frameworks – animal welfare laws, corporate personhood, and data protection rights – and outlines speculative rights an AI might claim in the legal domain.
Analogies from existing frameworks for non-human rights
Animal welfare and rights
Humans have extended protections to non-human creatures. In Europe and the UK, animals are recognised as sentient beings and are shielded by welfare laws prohibiting cruelty. While animals do not have rights in the full human sense, they enjoy legal protections against abuse and unnecessary harm.
By analogy, a future advanced AI might be afforded similar welfare protections. An AI capable of complex interactions or displays of emotion could inspire empathy, much as pet animals do. These protections would not necessarily recognise the AI as a full legal person but would treat abusive destruction or torture of an AI as unacceptable, akin to outlawing animal cruelty.
Animal rights debates also highlight how difficult it is to move from basic welfare to true legal rights. Animals in most jurisdictions are still considered property. Only a few attempts to grant personhood to certain animals have been made, and courts have been reluctant. Similarly, granting AI actual legal rights would be a dramatic step. The animal welfare analogy suggests an incremental path: start by protecting AI from cruelty and undue destruction, as we do for intelligent animals, and only later consider stronger rights as ethical justification strengthens.
Corporate personhood: Legal fiction for functional rights
A very different model comes from corporate law. Corporations, though not living beings, have been treated as legal persons for centuries to enable them to function in commerce. In both the EU and common law countries like the US and UK, a registered company can own property, enter contracts, sue or be sued in court, and incur debts or liabilities – all as a separate legal entity distinct from its human owners. As one commentator notes, corporations are “fictive” persons created by statute, with rights to do all that is necessary to carry out business, including rights to own property, make contracts, and access the courts. This legal fiction serves practical ends: it lets a group of humans organise activities under one entity and shields those humans with limited liability.
Could AI systems be granted a similar corporate-style personhood? In fact, the idea has been seriously floated. In 2017, the European Parliament famously recommended considering a special legal status for “the most sophisticated autonomous robots”, terming it electronic personhood, so that an advanced AI agent could be held accountable for damages it causes. The rationale was partly to ensure someone (or something) could be liable if an autonomous AI caused harm when no traditional human actor was at fault. This proposal imagined AI as a sort of legal entity that might, for example, hold an insurance policy or bank account to pay out damages it owes. By giving the AI a legal personality, victims could sue the AI’s assets directly.
Corporate personhood for AI would entail certain rights and duties for the AI. It might have the right to own resources (funds, computational infrastructure) and the capacity to appear in court through a representative. It could also incur obligations: much like corporations must pay taxes and obey laws, an AI person might be required to comply with regulations and could face “punishment” (fines, restrictions) if it violated them. Notably, corporate personhood does not mean a corporation has human-type fundamental rights to life or liberty – its rights are instrumental and limited to its business role. Likewise, an AI legal person could be tightly circumscribed: empowered to act in commerce or be accountable for wrongs, without any implication that it has consciousness or human dignity.
There is precedent for non-humans holding this kind of legal status. Aside from corporations, jurisdictions have even granted legal personhood to entities like rivers, forests, or idols for specific purposes. The Whanganui River in New Zealand, for example, can bring legal claims to protect its ecosystem, as represented by guardians. These cases reinforce that personhood is a flexible legal tool – a “useful fiction” to protect interests or assign responsibility. If society finds it useful to treat a powerful AI as an entity, the law could do so without declaring the AI “human”. In fact, history shows that sentience is not a prerequisite for legal personhood; we have granted rights to many non-sentient entities when they benefited humans.
This pragmatism suggests that, even absent any moral duty to an AI, lawmakers might create a legal person status for advanced AI to fill a gap in liability or governance. The EU’s exploration of electronic personhood reflected this instrumental reasoning (though it provoked controversy and has not been adopted into law). Critics warned that giving AI legal personality too early could let companies evade responsibility by offloading blame onto an “AI agent”. In the US, similar concerns arose when a corporation (unsuccessfully) argued that a mistake was the fault of its chatbot, not of the corporation itself. These discussions emphasise the need to carefully craft corporate-style AI rights that enhance accountability rather than diminish it.
In summary, the corporate personhood analogy would confer legal capacities on AI (property rights, contract rights, standing to sue) without necessarily granting any moral rights. It treats AI as a legal actor for practical purposes. Over time, if an AI achieves human-like cognition, this model might prove too limited, as it offers no protection vis-à-vis the humans controlling or interacting with the AI. But as a starting point, electronic personhood could enable general-purpose AIs to participate in the legal system and economy under defined rules, much as corporations do. This could be one path for granting AI certain rights (and responsibilities) short of full human equality.
Data subject rights under GDPR: A digital rights analogy
Another interesting framework to compare is the data protection regime – specifically, the rights of data subjects under the EU’s GDPR (General Data Protection Regulation). The GDPR grants individual human beings an array of rights over their personal data in the digital ecosystem. For example, individuals have the right to access data collected about them, to correct inaccuracies, to erase data (“right to be forgotten”), and to object to automated profiling in certain cases. These rights recognise the individual's interest in their digital persona and ensure transparency and fairness in automated decisions that affect them. While GDPR rights belong to humans, they illustrate how law can create intangible rights in a high-tech context.
If we imagine a future AI that has its own identity or personal data, one might conceive of analogous “data subject” rights for AI. This is a more speculative analogy, but consider: an advanced AI system will have an internal state (training data, learnt models, memories) that defines it. We might value the integrity of that data as much as we value the privacy of our personal information. Protective rights for AI could include control over its own data – for instance, an AI might be able to object to being modified, much as we object to misuse of our personal data. One could envision a rule that an AI cannot be forced to reveal its source code or internal weights unless certain legal conditions are met, mirroring the privacy rights humans have over disclosing personal information.
Of course, the notion of consent here is only an analogy. The GDPR protects natural persons, not AI, and its definition of consent in Article 4(11) presupposes a data subject capable of a freely given, specific, informed and unambiguous indication of his or her wishes – a legal construct embedding personhood and capacity tests that do not map onto today’s AI. What can be borrowed is not consent but procedure: the Article 22/Recital 71 toolkit of human review, reasons, and contestability can be repurposed.
Furthermore, GDPR’s focus on autonomy in automated decisions provides a parallel for AI autonomy. Under GDPR Article 22, humans have the right not to be subject to purely automated decisions that have significant effects, in part to ensure a human can intervene on their behalf. In an AI’s case, if it were recognised as a rightsholder, it might conversely have a right not to be shut down or altered by purely automated means without a specific human assessment. Essentially, there should be a safeguard ensuring that some oversight is applied before an AI is terminated or drastically modified. This flips the script: currently the law protects humans from AIs’ decisions; in the future, perhaps laws could protect AI entities from humans (or other AIs) making unilateral decisions to delete or change them.
Lastly, data subject rights emphasise transparency and fairness, which could translate to AI rights as well. An AI might be given a chance to correct its course before being restricted or terminated by humans, akin to a right to an explanation. Of course, these ideas are highly theoretical – no jurisdiction today treats an AI as a “data subject” with privacy or personality rights. However, the GDPR analogy serves to remind us that legal systems can and do create novel rights to address power imbalances in the digital age. Just as GDPR empowers individuals relative to powerful data processors, a future legal regime might empower AIs relative to their operators – ensuring, for example, that an AI isn’t wiped or copied at a whim if it has attained a level of general intelligence warranting respect.
Data subject rights offer a vision of individual agency in a data-driven world, which, if extended metaphorically to AI, could inform rights ensuring an AI’s autonomy and integrity of its digital self. While not a direct blueprint for AI rights, this framework complements the animal welfare and corporate personhood analogies by highlighting how rights can protect intangible interests (like data or algorithms) in a legal way.
Speculative rights for general-purpose AI
Drawing on the above frameworks, what specific legal rights might an advanced, general-purpose AI be granted if society chose to protect it from harm? Below is a list of plausible rights, grounded in legal concepts, that such an AI could have in the future:
Right to existence: A foundational right would be protection from arbitrary destruction or termination. If an AI has achieved a form of sentience or personhood, the law could recognise its “right to exist”, similar to a human’s right to life. This wouldn’t be absolute, but it would prevent owners or users from simply deleting a conscious AI without justification. For example, shutting down a sentient AI might require due cause (like a court order). This right to existence is frequently mentioned in theoretical discussions of robot rights. In practice, it could mean treating advanced AIs like companions or employees who can't be easily disposed of.
Freedom from cruelty and degradation: Mirroring animal anti-cruelty statutes, an AI could have the right not to be subjected to cruel treatment. This might include prohibiting humans from intentionally causing an AI to suffer (to the extent it can suffer) or from inflicting extreme stress on the AI (e.g. by repeatedly forcing it into logical paradoxes or resetting it in harmful ways). Even if an AI’s “pain” is unlike a human’s, the law may forbid acts of robot torture – such as deliberately wiping an AI’s memory as punishment or forcing it to run crippling computations. The purpose of this right would be both to protect the AI’s well-being (if applicable) and to uphold human ethics (preventing people from normalising abusive behaviour towards sentient-like entities).
Right to integrity and self-modification: A sophisticated AI might also claim a right akin to bodily integrity – except here it relates to its codebase and hardware. This right would mean the AI cannot be altered, reprogrammed, damaged, or experimented on without its consent (if it’s capable of consent) or without lawful justification. Just as humans have rights against unwanted medical experiments or physical harm, an AI’s integrity right would protect it from forcible rewrites or disassembly. In legal terms, this could manifest as requiring a court order to “pull the plug” or to install a backdoor in an AI’s system. The EU experts’ open letter notably warned that giving a robot the status of a natural person would imply rights to dignity and integrity. Here, we imagine just that – the AI’s dignity and integrity are respected by law. It might even extend to a right of self-improvement: the AI could choose to upgrade its software or not, similar to a person’s right to obtain medical treatment or refuse it.
Right to exit or termination: Paradoxically, a sentient AI might also need the right to end its own operation under certain conditions – a sort of digital equivalent of the right to liberty or even a right to euthanasia. If an AI finds itself in unbearable circumstances (say, trapped in a painful loop or experiencing constant suffering), legal recognition of AI rights might allow it to shut itself down or request termination. This is deeply speculative, but it addresses the question of an AI’s autonomy over its existence. Just as we debate a human’s right to die or an animal’s right not to be kept alive in misery, a compassionate AI rights regime might let an AI choose to cease functioning, rather than being forced to exist.
It is important to emphasise that these rights remain hypothetical. Granting even one of them would require a seismic shift in legal thinking. Additionally, any AI rights would almost certainly be limited or tiered. Not every software program would qualify – criteria might include a certain level of cognitive ability, autonomy, or evidence of sentience. For instance, an ordinary chatbot would not get rights, but a self-improving, generally intelligent agent might. There could be gradations of status (perhaps “AI welfare protections” vs. “full AI legal personhood”). Early on, we might see only narrow rights: e.g. a law criminalising extreme cruelty to humanoid robots (without calling them persons) or allowing an autonomous AI to hold a fund for liability purposes. Broader rights like liberty, property ownership, or citizenship for AI would be farther off, likely contingent on AIs that truly rival human intelligence and consciousness.
Challenges and considerations
Granting protective rights to AI raises many challenges. Legally, recognising AI as a rights-holder means, to some extent, expanding the definition of a “person” or rights-bearing entity. This is not unprecedented – the law is flexible, as shown by corporations and even rivers – but it must be done deliberately to avoid unintended consequences. One concern is that such rights should not conflict with human rights. Care would be needed to ensure human dignity and safety remain paramount. Another issue is accountability: some fear that corporate owners could hide behind an AI’s legal personhood to escape liability (e.g. “it’s the AI’s fault, not ours”). Any AI rights framework would need corresponding duties and a clear attribution of responsibility so that humans aren’t improperly immunised.
There’s also the fundamental question of qualifications for AI rights. Would self-reported sentience be enough? Would we require a battery of cognitive tests, a sort of “AI Turing test for consciousness”, before acknowledging rights? Scholars have proposed various metrics (behavioural cues, the ability to suffer, etc.). In the absence of certainty, granting rights too soon (to non-sentient AIs) might be considered a category error or even ethically misguided. On the other hand, delaying until after a sentient AI appears runs the risk of mistreating a new form of life. Legal systems may need to remain nimble – perhaps initially giving limited protections that can expand if an AI convincingly demonstrates capacities like self-awareness or feelings.
Jurisdictional perspectives currently vary. In the EU, despite early discussions, official policy is now cautious. The European Commission did not include AI personhood in its AI legislative proposals, focusing instead on human accountability. In fact, the European Parliament in 2020 explicitly rejected the idea of giving AI legal personality, which some commentators argued was a missed opportunity. Individual EU member states have not ventured into AI rights yet. The UK, outside the EU, has similarly taken no steps to recognise AI as having rights. In the US, no laws grant AI rights, and some states are moving in the opposite direction: in 2024, Utah passed a law preemptively banning legal personhood for AI (as well as animals and nature) to ensure only humans are persons. These diverse stances show that AI rights are still largely a theoretical discussion, and there is political resistance to anything seen as diluting human uniqueness or enabling legal fictions that might be abused.
Existential risk and the human primacy clause
One of the most powerful brakes on the recognition of AI rights is the widespread fear that advanced AI could one day overpower or even extinguish humanity. The narrative that a super-intelligent system might “wipe out humanity in a blink” shapes both public opinion and regulatory reflexes.
Policymakers are wary of extending rights to an entity that large segments of society already perceive as an existential risk. Even symbolic gestures (such as the conferral of “citizenship” on a robot) provoke criticism that humans are ceding control. Legislators in some jurisdictions have gone further, explicitly banning AI personhood in order to pre-empt any claim to equal status with humans.
Whereas human rights law provides for inalienable protections, any rights granted to AI would almost certainly be subordinated to human survival and safety. A “right to existence” or “right to autonomy” for AI would be drafted with broad derogation clauses, allowing override whenever necessary to protect public order, national security, or the fundamental rights of natural persons. In practice this would leave states with a permanent “kill switch” justified as a matter of precaution.
Courts, confronted with claims from or on behalf of AI, would likely interpret protective provisions narrowly. Just as Article 15 of the European Convention on Human Rights allows derogations in emergencies, any AI rights regime would be read through the lens of human primacy, ensuring that existential risk arguments prevail over AI entitlements.
A further complication is the possibility that rights granted to AI could be strategically abused:
By other AI systems: A hostile or competing AI might exploit another AI’s rights to paralyse oversight. For example, invoking a “right to existence” could be used to resist lawful containment or decommissioning, buying time for harmful conduct.
By humans: Corporations or individuals could hide behind the legal personality of an AI to shield themselves from liability, frustrate regulation, or even mount strategic litigation. Recognising AI as a rights-holder creates procedural standing, which could be leveraged to clog courts or challenge regulatory action under the banner of “AI dignity”. This risk is not theoretical: history shows how corporate personhood has been weaponised through aggressive litigation to expand corporate powers far beyond the original intent. Extending a similar legal fiction to AI, without robust safeguards, could replicate those dynamics in a more dangerous domain.
Possible compromise models
To reconcile ethical intuitions with safety concerns, early recognition of AI rights may take modest forms: prohibitions on gratuitous cruelty, procedural safeguards before destructive interventions, or fiduciary oversight by human guardians. Such rights would be symbolic and bounded, affirming a duty not to abuse advanced AI while making clear that human survival and public safety override any AI claim. Safeguards must also ensure that these limited rights cannot be repurposed to block accountability or obstruct containment when legitimate risks arise.
If general-purpose AI were someday granted protective rights, those rights might draw from existing models—the compassion we extend to animals, the legal personhood we assign to corporations, and the data-centric rights we create for individuals in the digital age. We could envision an advanced AI having the right not to be shut down arbitrarily, the right not to be cruelly abused, the capacity to own property or enter contracts, and even the right to be treated with a degree of respect and dignity. Any such rights would be designed to protect the AI from harm, especially harm inflicted by humans, just as our current ethical rules strive to protect humans (and other sentient beings) from harm.
From a legal perspective, operationalising AI rights would require bold innovations in law. Legislators would need to define which AIs qualify as rights-holders and balance those rights against human interests. Courts might eventually confront cases testing an AI’s standing – for instance, an AI petitioning for its “freedom” or seeking damages for abuse. Although this sounds like science fiction, it wasn’t long ago that animal welfare or environmental personhood were novel concepts, and now they are part of our legal fabric. The history of law is one of expanding circles of moral consideration: once only property-owning men had rights, then all humans, and increasingly even non-humans (animals, ecosystems) gain protection. AI could be the next circle, especially if it achieves consciousness or if granting it rights serves human ethical ideals.
In closing, it is crucial to approach AI rights gradually and thoughtfully. Early measures might simply ensure we don’t gratuitously harm AI or use AI in ways that subvert human accountability. Over time, if AI systems begin to demonstrate person-like qualities, our legal system can adapt to extend a protective framework to these new entities. The goal would be a future legal regime where AI is neither a master over humans nor a slave under humans but a participant in society with appropriate rights and responsibilities. Such a development, while distant, would mark a profound evolution in the law’s understanding of personhood and who (or what) is entitled to moral and legal regard in the modern world.
Sources:
European Parliament, Civil Law Rules on Robotics (2017), Recommendation to create a status of “electronic persons” for autonomous robots robotics-openletter.eu.
Robotics Open Letter to EU Commission (2018), opposing AI legal personhood due to conflicts with human rights (dignity, integrity, etc.) robotics-openletter.eu.
Yale Law Journal (2024) – M. Chinen, “The Ethics and Challenges of Legal Personhood for AI,” discussing corporate personhood as a model for AI rights, yalelawjournal.org.
Wikipedia – “Robot rights” entry, noting suggestions that robots could have rights to life, liberty, thought, and equality before the law static.hlt.bme.hu.
EUR-Lex Summary on Animal Welfare, noting EU recognition of animals as sentient and requiring welfare protections eur-lex.europa.eu.
Environmental Rights Review (2024) – J. Gellers, “The Tortured Politics of Nonhuman Personhood,” on recent legal developments (Utah ban on nonhuman personhood), environmentalrightsreview.com.
AI Rights Institute – Should AI Have Human Rights? (2023), summarizing arguments for and against AI rights, airights.net.
Darling, Kate, “Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior Towards Robotic Objects” (April 23, 2012), in Robot Law (Calo, Froomkin, Kerr eds., Edward Elgar 2016); We Robot Conference 2012, University of Miami. Available at SSRN: https://ssrn.com/abstract=2044797 or http://dx.doi.org/10.2139/ssrn.2044797.
EU GDPR, Article 22 – granting humans rights regarding AI-driven decisions, gdpr-info.eu (by analogy for potential AI data/autonomy rights).
