About the author: Muhammad Siddique Ali Pirzada is a second-year LL.B. (Hons.) student at the Pakistan College of Law (University of London) and an alumnus of Repton School Dubai, UAE. He presently serves as Sub-Editor at LEAP - Pakistan and has previously interned at the Supreme Court of Pakistan and Bhandari Naqvi Riaz (BNR). As an active columnist for various international media outlets, he writes about Public International Law, Governance, International Humanitarian Law, Constitutional Law, International Environmental Law, and International Relations. He can be reached at siddiquepirzada1241@gmail.com.
Introduction
The concept of Artificial Intelligence (AI) is far from novel, tracing back to the 1950s when Alan Turing established the theoretical groundwork for its development. During this era, pioneering computer programs capable of intelligent problem-solving emerged, marking the inception of AI. In the last two decades, AI and machine learning have been the driving force behind a plethora of digital innovations, encompassing search engines, recommendation algorithms, unmanned aerial vehicles, autonomous vehicles, and facial recognition systems. The recent popularity and accessibility of innovative chatbots, image generators, and other large language models have marked the surge of "generative AI." These models possess the potential to catalyze societal and generational advancements unparalleled by any previous AI-powered technology. Their human-like attributes, adaptability, and omnipresence have fueled both opportunities and concerns.
The prospective advantages of AI are vast, spanning from aiding global endeavors to address climate change to the functional task of enhancing workplace productivity. However, the perils are equally substantial, encompassing the utilization of AI to magnify misinformation, orchestrate cyberattacks, and exacerbate prejudice and inequity, along with apocalyptic assertions regarding AI's potential to surpass human intelligence. Against a backdrop of intense geopolitical rivalry, where AI is coveted by all but consolidated among a select few, these discussions, often characterized as hyperbolic and dichotomous, unfold at the whirlwind pace of technological advancement.
At the core of these discussions lies a fundamental conundrum: how to harness the immense promise of AI for beneficial ends while mitigating its risks and ensuring equitable access to its capabilities? This article argues that achieving this delicate balance necessitates robust AI governance across national, regional, and global domains. Importantly, adherence to international legal norms must serve as the foundational premise for such efforts.
AI Governance as a Global Dilemma
Similar to other digital advancements, artificial intelligence transcends geographical limitations, exerting profound influence on individuals and societies worldwide. Its applications range from assessing an individual's creditworthiness and curating their social media feed to advancing weaponry and shaping the global information landscape. Therefore, the governance of AI is not solely the responsibility of corporations but a matter that concerns all nations. Moreover, devising effective AI governance mechanisms is a formidable challenge, necessitating collaborative efforts from various stakeholders with diverse cultural, social, political, and professional backgrounds. This means that states and other pertinent entities, such as private enterprises, civil society, and academic institutions, must collaborate. International law offers a well-established common framework from which global AI governance can be formulated and implemented.
As an illustration, International Human Rights Law recognizes a compendium of intrinsic rights and liberties that states have universally affirmed through overarching instruments such as the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and the International Covenant on Economic, Social and Cultural Rights, and that are reflected in customary international law. The precise extent of each entitlement varies among states owing to their multifarious social, cultural, and political milieus. Nonetheless, International Human Rights Law embodies a fundamental baseline that can function as a reference point for States and other involved parties in contemplating methods to safeguard human rights while fostering technological advancement.
Navigating Ambiguity and Competing Forces
Amidst the uncertainties of AI’s influence on society, influential players such as the US, UK, EU, China, and India are engaged in a fervent pursuit to advance AI technologies. International legal frameworks serve as beacons of clarity and assurance within this competitive landscape. Standards of due diligence guide the actions of states and corporations to mitigate AI-related risks, while non-intervention doctrines safeguard against undue interference in the affairs of other nations, including AI-driven manipulation. Despite the sparse enforcement mechanisms of international law, its established norms promote responsible behaviors in the digital domain, fostering stability and prosperity. These tenets remain indispensable in the AI epoch, both in virtual and physical realms.
International Law’s Existing Jurisdiction and Its Possible Extension to AI
International law transcends theoretical considerations; it governs the utilization of AI technologies by states, individuals, and corporations. States find themselves entangled in a complex web of treaties, customary international law, and overarching legal principles, binding them to multifaceted obligations. Corporations, on the other hand, bear a weighty societal mandate to uphold human rights, ingrained within the fabric of their corporate social responsibility. Meanwhile, individuals remain subject to personal responsibility under international criminal law, thereby preserving the integrity of the global legal order.
Essentially, international law is technology-neutral: its rules and principles apply equally to antiquated and emerging technologies. As highlighted in the Legality of the Threat or Use of Nuclear Weapons, Advisory Opinion, ICJ Reports 1996, the prohibition on the use of force and adherence to international humanitarian law are pertinent to all forms of weaponry, regardless of their technological underpinnings. This does not imply that AI is inherently a weapon and warrants regulation as such, as some have supposed. The vast potential of AI can be harnessed for beneficial or harmful ends. What matters is the technology's multifaceted utility and its applications by states and other entities, which invoke international law whenever it is relevant to the conduct under scrutiny.
While certain international regulations address particular commodities or technologies, for example, radio broadcasting, the majority of international regulations and principles are broad or adaptable enough to incorporate novel technological advancements. This involves the reinterpretation of established regulations in light of emerging societal phenomena, known as "evolutionary interpretation." As international law adapts to AI’s unique characteristics—such as its adaptability, speed, and scale—it requires significant input from diverse stakeholders and the incorporation of multidisciplinary perspectives. This broad applicability of international law ensures its resilience amidst the rapid evolution of AI.
The fact that international law already applies to AI means that a dedicated international treaty for AI is not a foregone conclusion. AI operates within an established legal framework, where general protections and prohibitions remain relevant. Before deciding on the necessity of a treaty, states must comprehend how the existing international legal landscape applies to AI. When weighing the benefits and costs of such a treaty, key considerations include whether the current legal framework offers sufficient coverage, whether there are gaps in safeguarding specific values or groups, particularly vulnerable individuals, whether it effectively addresses the novel challenges and risks posed by AI, and whether there is a need for more detailed provisions to strike the right balance. States must also ask whether there is a suitable platform for global AI discussions and how diverse perspectives, including those of next-generation leaders, women, and the Global South, can be meaningfully incorporated into AI negotiations. Crafting treaties requires substantial time and political commitment, and a treaty may quickly become outdated amid rapid technological advancement. Furthermore, there is a risk that existing legal standards may be diluted to accommodate AI in the pursuit of consensus.
It is noteworthy that parallel deliberations regarding the application of international law to cyber operations are presently unfolding both within and beyond the United Nations (UN), offering valuable insights. The Open-Ended Working Group (OEWG) on the security of and in the use of information and communications technologies, along with the Ad Hoc Committee on Cybercrime and the Global Digital Compact, are active processes within the UN. Outside the UN, initiatives such as the Tallinn Manuals on International Law Applicable to Cyber Operations and the Oxford Process on International Law Protections in Cyberspace have significantly contributed to delineating the parameters of international law pertaining to cyber operations.
Apart from determining the applicability of international law to AI, there exists a distinct inquiry into identifying the suitable processes, forums, or institutions for enforcing existing rules and formulating new ones. This decision lies within the purview of states at the domestic, regional, and international levels. However, states should be cognizant that various international institutions already possess mandates encompassing different AI uses or applications by both state and non-state actors, including the UN and its human rights bodies, as well as international courts, tribunals, and other mechanisms for dispute resolution. International law also offers avenues for responding to AI-related wrongs, ranging from UN Security Council sanctions to unilateral countermeasures. Additionally, should new institutions or processes be established for AI governance, states could look to existing international bodies tackling significant technological issues, such as climate change and civil aviation, for inspiration. They should also deliberate on the necessity of technical proficiency, involvement of multiple stakeholders, institutional coordination, and the identification of risks to be mitigated.
Conclusion
International law plays a pivotal role in AI governance, offering States a shared language and bolstering confidence in addressing this intricate global challenge. Existing international norms and principles are applicable to AI technologies, adaptable to diverse uses worldwide. Interpreting international law within the AI context demands collaborative efforts and expertise across various domains. Despite numerous uncertainties, the rapid advancement of AI underscores the imperative for flexible and dynamic governance, encompassing all stages of development. It is crucial for stakeholders to explore practical applications of international law collectively through established or new processes, forums, and institutions, shaping the discourse on global AI governance.