
Developing an International Framework for AI Citizenship After the UN’s AI Moratorium

Updated: Oct 18, 2021

About the author: Sundar P. L. is pursuing his final year of law at Christ (Deemed to be University), Bangalore. He is deeply passionate about policy-making and has gained experience in subjects such as Cyber Law, International Law, Constitutional Law, Environmental Law, Competition Law, and Criminal Law. He has worked with governmental bodies and think tanks on various projects and is currently leading a committee on public policy and governance in the hope of executing institutional change.

"Robot/Human love" by Louis Smith, available here.


In 2017, Saudi Arabia legally recognized a robot named Sophia as its citizen, and Tokyo's Shibuya ward recognized the artificial intelligence (AI) bot Shibuya Mirai as its resident, stirring debate over the jurisprudence of personhood. In light of the recent United Nations (UN) call for a moratorium on the sale and use of AI systems that pose risks to human rights, it is essential to develop an international legal framework to address states’ potential obligations to honor the citizenship status that other states confer on AI. Developing such a framework raises two critical questions. First, does existing international law allow an international framework to define citizenship? Second, what standards would such a framework use?


The AI Moratorium May Drive Nations to Come to a Consensus on AI

The AI market has grown exponentially throughout the 21st century. During this short timespan, AI experts have proposed numerous legal guidelines, including the Ethics Guidelines for Trustworthy AI, the Asilomar AI Principles, and the Civil Law Rules on Robotics, to minimize malicious manipulations of the technology. Legislative bodies have also begun to incorporate AI guidelines into regulations. For example, in April 2021, the European Commission proposed harmonized rules for AI through the Artificial Intelligence Act. Despite these isolated movements towards AI regulation, a universal international framework governing AI treatment is still missing. Without universal or at least significant ratification, the aforementioned AI guidelines cannot qualify as conventional international law. The recent UN call for a moratorium on AI could prompt nations to reach a consensus on AI treatment in several areas of law, including citizenship. For AI, an emerging subject with little international regulation, building an early consensus can save the international community enormous amounts of time and resources that would otherwise be spent on ex-post legal analyses. Analyzing the metrics and criteria that different nations value when conferring citizenship on AI could help the international community reach that legal end-goal earlier, which is essential for globalization in an era of open data.


The Need for an International Framework that Grants Citizenship Status to AI

It is a settled principle that legal personality is a prerequisite for receiving diplomatic protection. If an AI device commits crimes in a foreign state, its home state may request that the foreign state hand over the AI for prosecution, a request that would likely invoke extradition treaties. To prevent international conflict, it is essential to evaluate whether a foreign state may take decisions concerning an AI citizen that do not conform to the decisions of the originating state.


To provide clarity for such a scenario, it would help to examine existing internationally-recognized thresholds for conferring citizenship on an entity and obligations for a state to respect a person’s citizenship granted by another state.


Applying Accepted Principles of International Law to the Current Scenario

In the 1955 Nottebohm case, the International Court of Justice (ICJ) defined “nationality” as a legal relationship based on "a genuine connection of existence, interests, and sentiments." Nationality therefore reflects a genuine connection and formalizes a bond of allegiance. When such a genuine link or bond of allegiance is not adequately present, nationality may be terminated.


Although the ICJ’s failure to establish exhaustive metrics for determining a “genuine link” between a national and their country was criticized in the Flegenheimer case, tests for establishing a “genuine link” have evolved over time. In Romano v. Comma, the Egyptian Mixed Court of Appeal held that birth in the territory was adequate to prove a genuine link. This reflects “jus soli,” the established principle of international law that deems a person a national of a state by virtue of their birth in that state, which operates alongside “jus sanguinis,” which deems a person a national of a state by virtue of their descent from a national.


Does the creation of AI technology in a state constitute its “birth” in that state, or does birth apply only to human birth in the context of citizenship? Article 15 of the Universal Declaration of Human Rights (UDHR) protects the right to a nationality and applies specifically to natural persons. The articles of the declaration were drafted to uphold human rights, not the rights of entities imbued with artificial legal personality. Because the right to a nationality is framed as a human right under the UDHR, failing to extend it to an entity with artificial legal personality is not a UDHR violation.


The “Civil Law Rules on Robotics,” adopted by the European Parliament’s Committee on Legal Affairs, takes a different approach by examining robots’ potential legal duties when making autonomous decisions as “electronic personalities.” However, imposing duties on an entity as an electronic personality does not necessarily mean that the entity has an inherent right to citizenship. Similarly, corporations are “artificial legal personalities” with legal duties, but they are not treated as citizens due to the “corporate veil.” Conversely, the fact that AI holds rights and duties that differ from those held by natural persons does not negate the possibility of AI citizenship: different classes of citizens in a state may hold different rights and duties if they are reasonably classified. Therefore, an entity’s legal duties cannot be the sole factor in determining its citizenship status.


Multiple sources of international law, such as Article 1 of the Convention on Certain Questions relating to the Conflict of Nationality Laws, Article 3 of the European Convention on Nationality, and the Nottebohm case, establish that “nationality” is ultimately defined by states’ internal laws. However, they also emphasize that a state’s power to confer citizenship status is limited and must be consistent with the principles of international law, leaving the door open for international guidelines that define citizenship for AI systems.


The Way Forward

An effective AI citizenship framework should consider additional factors such as an entity’s agency and consciousness, because these indicate its ability to form an allegiance to a nation, the core of the Nottebohm ruling. Questions of agency and consciousness require comparisons between humans and AI. In the case of Saudi Arabia, Sophia is an AI device without a central nervous system. Unlike her human counterparts, Sophia would struggle to form the genuine mental consciousness needed to abide by a duty to the state, exercise a right, and form an allegiance. Although artificial neural networks have progressed, it is questionable whether AI systems exercise genuine agency when they merely process signals according to rules drafted by humans.


Until now, the Turing test has arguably been the most objective yardstick for measuring an AI system’s capacity to abide by a duty in the truly “human” sense. The test evaluates an AI system’s ability to mimic human responses, providing insight into its capacity to “think” and to perform the acts and omissions that amount to its “duties.” However, recent discoveries about the human mind may justify new tests that measure factors beyond mimicry, meaning that AI systems which previously passed the Turing test may still lack the capacity to abide by a duty in a human sense. Despite rapid developments in deep-learning hardware, software, and neural networks, states have not reached a consensus, domestically or internationally, on a standard for assessing an AI system’s ability to perform duties like a human.


Conclusion

Without adequate conventional or customary international law, states do not have any binding international obligation to honor citizenship status conferred on existing AI technology. It is improbable that a significant number of states will reach a consensus on a particular objective threshold for establishing that an AI system has enough human abilities to qualify as a legal person, and thus a citizen. Customary international law therefore has a long way to evolve before AI’s citizenship status is universally honored.

