Tal Mimran: How to Regulate AI-Influenced Weapons?
2023-12-26 10:25:26
What is the future of civilization shaped by the combination of AI and modern weapons, such as Lethal Autonomous Weapons (LAWs)? These are systems that, once activated, can select and attack targets without further human intervention. We have already seen such weapons in conflicts like the Russia-Ukraine war, where they take the form of drones or automated tanks. It is concerning that there are no global regulatory standards for these AI weapon systems. To learn how AI-influenced weapons might be regulated, we interviewed Dr. Tal Mimran, a renowned expert advocating against an AI arms race.

Dr. Tal Mimran is an Adjunct Lecturer at the Hebrew University of Jerusalem, the Academic Coordinator of the International Law Forum of the Hebrew University, and the Research Director at the Federmann Cyber Security Research Center in the Law Faculty of the Hebrew University.



PKU Financial Review: How do you view the future shaped by the combination of AI and modern weapons, such as Lethal Autonomous Weapons (LAWs)? These are systems that, once activated, can select and attack targets without further human intervention. We have already seen such weapons in conflicts like the Russia-Ukraine war, where they take the form of drones or automated tanks. It is concerning that there are no global regulatory standards for these AI weapon systems, particularly given the absence of a dialogue mechanism between major powers like the US and China. As a renowned expert advocating against an AI arms race, how do you think the current situation could be mitigated?
 
Tal Mimran: New and emerging technologies significantly impact the ways in which military operations are conducted. Advancements have been achieved in the development and deployment of autonomous weapon systems, military use of cyberspace, and more. One emerging field in which significant leaps are currently being made is Artificial Intelligence (AI) with military applications, for example in the context of lethal autonomous weapons.
In recent years this trend seems to be gaining strength, for example in the Russia-Ukraine war or with the introduction of AI-based tools. Such developments span several areas: selection of targets for attack, intelligence analysis, proactive forecasting and threat alerting, support for streamlined command and control, and more. For example, the Israel Defense Forces deploy the 'Fire Factory' system, an AI-driven system that plays a pivotal role in identifying possible aerial assault objectives (military targets), alongside other tools like the 'Gospel' system, which helps improve intelligence recommendations and identify key military targets.

As for mitigating steps, an essential concern is the extent of human participation necessary in decision-making processes. In my view, when considering the use of AI for both offensive and defensive purposes, it is crucial to guarantee that a human remains integral to every AI-driven operation and stays in the loop, for three reasons. First, human participation enhances the precision and quality of decision-making, and it can serve as a vital safeguard for preventing or minimizing errors. Second, including a human in the decision-making process can bolster the legitimacy of the decision and enhance public trust. Third, when an AI system leads to an incorrect decision that results in a violation of international law, the presence of a human factor becomes crucial in terms of accountability.

The problem, though, is that with AI, human oversight might not be sufficient to deal with the challenge. A central problem with AI-based systems is their limited ability to explain their own decision-making process, a phenomenon commonly referred to as the "black-box" effect. This "glitch" poses a problem for those advocating the deployment of AI on the battlefield, especially for offensive purposes, since it undermines any possibility of oversight or justification (particularly in a context which requires constant evaluation of legality given the interests at risk). Also, while AI allows military systems to process vast amounts of data quickly, analyze complex scenarios, and execute actions almost instantly, this accelerated decision-making capability raises ethical and operational concerns, as it necessitates a delicate balance between automation and human oversight to ensure responsible and ethical decision-making in the context of warfare.

Putting aside the technical and operational challenges, the biggest challenge in my view lies on a broader level. The world is becoming more divided in ideals and values, and it is difficult to promote international responses. How can the United States and Russia, or the Netherlands and China, decide on measures against hostile cyber operations if they cannot agree on a common definition of such an act? In the context of AI, we have the Chinese state-driven approach, the European rights-driven approach, and the US market-driven approach, and we need to wait and see which of the three will gain traction and shape the direction in which we are headed.

In sum, the most significant steps currently being taken are either domestic or regional. One important step that should be adopted by all is the establishment of a preliminary measure for evaluating the legality of new technologies during the study, development, acquisition or adoption of a new weapon or a new means or method of warfare, as required by Article 36 of the 1977 First Additional Protocol to the Geneva Conventions (API). In other words, States are required to use prophylactic impact assessment measures, such as legality reviews of weapons as well as of means and methods of warfare.

While not all States are parties to API, General Comment 36 of the Human Rights Committee took the approach that ensuring protection of the right to life invites prophylactic impact assessment measures, including legality reviews of new weapons, and, in practice, some States have adopted review procedures without being parties to the protocol (the United States is a prominent example). The importance, as well as the challenges, of conducting proper legal reviews increases with new technologies whose impacts on civilians are unclear, and this is definitely true in the case of AI-driven tools.

PKU Financial Review: With a significant number of AI weapons entering the battlefield, one ethical consequence is that they reduce the need for extensive military training and lower the direct risks to military personnel. For example, losing an AI-controlled aircraft does not entail losing a pilot's life, which pushes warfare toward "game-like" strikes by strong countries against weak ones. Will this affect the international rules of war?

Tal Mimran: There is no doubt about it: asserting AI superiority on the battlefield has tremendous value in terms of deterrence. It is also obvious that there are advantages to deploying AI-based tools to improve existing military capabilities. However, what role do moral and legal considerations have in this trend? In particular, are there currently any legal limitations on the desire to introduce such new tools, and on their actual use in practice? And how should States approach the political discussion over the deployment of AI-based tools in warfare?

While we have seen in recent years that more and more States have articulated their legal positions on how international law applies in cyberspace, either unilaterally or in forums like the GGE and the Open-Ended Working Group, there are deep disagreements rooted in the different perspectives and values of States. States like Israel, which perceive themselves as part of the group that leads not only the production of technology but also the political discussion over its regulation, are satisfied with the current state of affairs, while States like Iran seek to redistribute the digital cake in terms of inclusiveness and equality. This is because Iran views the existing situation as Western monopolization of the internet, and more broadly of technology. Such disagreements manifest also in specific legal questions, for example the content of the principle of non-intervention in the cyber realm or the interpretation of terms like "armed attack".

If, and once, ideological gaps are minimized, a main question will be whether a regime should be binding or soft. In addition, the orientation of the regime must be clear, and in my view a human rights perspective is the way to go, in line with the EU perspective on dual-use technology, rather than a security or economic one (as seems to underlie the Wassenaar Arrangement and the approaches of States like the United States and China).

Another issue is the role of non-State actors. I believe that non-State actors, such as IGOs and the private sector, must be part of the development of norms. IGOs should be a platform for cooperation, consultation, and technical assistance, while the private sector should provide practical inputs and share challenges from its perspective (e.g. problems caused by export regimes). In this day and age, one cannot regulate the technological sphere while ignoring companies. This is true for export controls and for other issues like the regulation of social networks, and the war in Ukraine highlights how central the private sector can become. Yet, we should avoid over-privatization and the fragmentation of authority and responsibility.

How will all of the above change international law? Given the difficulty of promoting new rules in such a polarized reality, I doubt that there will be changes in the near future (unless something dramatic takes place, like a disaster caused by AI-based tools). Still, there are some considerations that can be mentioned at this point in time.

First, we need to consider whether, and which types of, AI-enabled military cyber capabilities should be banned altogether. The bar is quite high, as only a limited group of weapons is completely prohibited: most notably biological, chemical, and nuclear weapons (notwithstanding the ICJ's exception for when the survival of a State is at stake). Such weapons are banned because their impact is too severe, even cruel and inhumane, and they fail to distinguish between combatants and civilians. While cyber tools also fail to discriminate, their impact is less severe than that of banned weapons; even so, some, like former Special Rapporteur David Kaye, have called for a complete ban on such cyber tools. Similar voices are beginning to arise in the context of AI-based systems.

Second, steps can also be taken in the field of restraint by design. Some mechanisms can be put in place, like the concept of privacy by design. Another important mechanism is the one prescribed under Article 36 of API. A further plausible option is trade restrictions, and in this regard I will flag a few issues. First, a main issue is the impartiality of the regulating body, as States are at times the client and at other times the regulator. Second, the main tool in this regard, the Wassenaar Arrangement, is not a binding treaty, and it allows for significant consideration of domestic interests. Third, there is the matter of overly broad definitions, which inhibit the private market, or, on the flip side, under-inclusive ones. In this regard, international law aspires to proximity to reality, as can be seen from principles like ex factis jus oritur, but this might prove especially challenging in the context of AI.

Notwithstanding the above, instead of developing new norms we can focus on existing ones, particularly IHRL, IHL, ICL, and of course the general principles of international law (sovereignty, non-intervention). While norms exist in abundance, enforcement mechanisms are more limited in terms of jurisdiction and reach. In sum, there needs to be a mix of tools at different stages (planning, design, deployment, and retroactive examination). Domestic and international systems should aspire to harmonization and complementarity, in order to better safeguard against the risks of AI with military applications.

PKU Financial Review: Many people imagine a scenario in which the best "playmate" for human children in the future is a child robot similar to ChatGPT, and the best partner for future soldiers is a ground or aerial military robot. The level of interaction between soldiers and military robots determines their effectiveness on the battlefield. So, will this affect future rules of warfare? For instance, if human soldiers choose not to eliminate adversaries and show mercy, would military robots act differently?

Tal Mimran: I absolutely agree, and I think that not only will soldiers fight alongside robots, but we will see an ever-growing connection between man and machine. This trend derives from the desire to maximize the potential of the human soldier by finding ways to enhance his or her capabilities while suppressing the less desirable features of being human (hunger, fear, tiredness and more).

The idea of human enhancement has long been a source of inspiration for popular culture, but it is also the subject of contemporary scientific research aimed at restoring full functionality to ill or disabled persons or at conferring super-human capabilities on "enhanced humans". In the military context, enhanced combatants might obtain heightened capabilities by wearing, and in some cases embedding in their bodies, integrated technology that improves their organic and natural functions (e.g., additional strength, reduced need for sleep, improved vision, better decision-making capacity, etc.). Arguably, human enhancement programs do not necessarily run contrary to IHL, as they can reduce operational mistakes during hostilities and, as a result, reduce harm to civilians and civilian objects.

It is common to divide human enhancement into three main categories: biochemical, cybernetic and prosthetic. Biochemical enhancement entails the use of pharmaceutical agents to enhance physical and mental functions. Cybernetic enhancement, or the brain-machine interface, involves technologies that aim to connect electric signals produced by the human brain directly to a machine without the need for manual input. Examples include the Avatar project in the United States (US), which develops interfaces and algorithms that will allow a soldier to partner with a semi-autonomous bipedal machine, and the N3 program, aimed at broadening the applicability of neural interfaces to warfighters. Prosthetic enhancement involves physical improvements for humans, including prosthetics capable of providing sensory feedback and thought-controlled movement, visual prosthetics that allow for augmented or restored vision, and auditory enhancement. The US Defense Advanced Research Projects Agency (DARPA) is a principal engine for the development of human enhancement projects directed at improving the capabilities of soldiers, and has invested significant resources in promoting relevant research projects.

This trend might prove to be a major challenge, at least in terms of human rights, and especially privacy. The embedding of digital technology in human bodies may entail a dramatic invasion of the private sphere (even if consented to by the soldier in question), a change in personal identity, and a dire threat to personal autonomy given the possibility of manipulating bodily functions, including brain activity.

Broadly speaking, the inter-connection between man and machine, and the reliance on robot "partners", could have devastating impacts on civilians. Technology has always been part of warfare, since the days of sticks and stones (which can also be considered the technological tools of their period), and there is a silver lining to this trend, at least from the soldier's perspective: technology increases firepower, accelerates the rhythm and volume of hostilities, and aims to distance the soldier from the battlefield. This trend is about to reach a climax in the digital age, which might be exciting news for soldiers and commanders, but not for uninvolved civilians. It is up to academia and civil society to maintain a moral compass and a spotlight on the way States conduct themselves on the battlefield, and to push States to invest more in AI for the betterment of mankind and society (rather than for military purposes). I am not naïve, but I am definitely trying to remain optimistic about our ability to change life for the better.