India’s review of AI-based weapon systems, while a good step, would need to address various legal and ethical conundrums
By Vikas Gupta
Defence News of India, 1 April 22
The intellectual roots of artificial intelligence (AI) can be traced back to Greek mythology, but the term only became part of popular discourse after sci-fi movies such as “The Terminator” gave audiences a fictional glimpse of the fight between AI beings and humans. An example of an autonomous weapon in use today is the Israeli Harpy drone, which is programmed to fly to a particular area, hunt specific targets and then destroy them with a high-explosive warhead, an approach dubbed “fire and forget”.
In its simplest form, AI is a field of computing that allows computers and machines to perform intelligent tasks by mimicking human behavior and actions. Most of us encounter some form of AI system daily, such as music streaming services, voice recognition, and personal assistants such as Siri or Alexa. In 1950, in a paper entitled “Computing Machinery and Intelligence”, Alan Turing examined the question “Can machines think?”, and in 1956 John McCarthy coined the term “artificial intelligence”.
In July 2015, at the International Joint Conference on Artificial Intelligence (IJCAI) in Buenos Aires, researchers warned in an open letter of the dangers of an AI arms race and called for “a ban on offensive autonomous weapons beyond meaningful human control”. The letter was signed by more than 4,500 AI/robotics researchers and some 26,215 others, including prominent figures in physics, engineering and technological innovation.
Despite this concern, global powers such as China, Russia, the United States and India are competing to develop AI-based weapons. At the 2018 summit of the UN Convention on Certain Conventional Weapons (CCW), the United States, Russia, South Korea, Israel and Australia opposed moving negotiations on fully autonomous AI-powered weapons to a formal level that could lead to a treaty banning them. This column is based on an unpublished article by Mr. Sharma and Dr. Rautdesai, “Artificial Intelligence and the Armed Forces: Legal and Ethical Concerns.”
Sharma and Rautdesai consider AI to be broadly of two types. Narrow AI performs specific tasks, such as music or shopping recommendations and medical diagnosis. General AI, by contrast, is a system “that exhibits seemingly intelligent behavior at least as advanced as a person across the full range of cognitive tasks”. The broad consensus is that general AI is still a few decades away. There is, however, no formal definition, since the word “intelligence” is itself difficult to define.
As AI is adopted in everyday life, and especially in the armed forces, many legal concerns are likely to arise, chief among them its regulation. Government and policymakers would, however, need a clear definition of AI before even attempting to regulate it. In August 2017, the Ministry of Commerce and Industry set up an AI Task Force (AITF) to “explore opportunities to leverage AI for development in various fields”. The AITF submitted its report in March 2018. In its recommendations to the Indian government, the AITF is largely silent on the various legal issues that would need to be addressed.
One of the biggest and most interesting uses of AI is in military operations. There are potentially huge benefits for the military in harnessing AI for tactical advantage, especially in big-data analysis, where large volumes of data must be collected, analyzed and disseminated across multiple fronts during a war. Of equal, if not greater, interest is the use of autonomous weapons. AI-based analytics are not lethal in themselves; they are merely tools that help humans make decisions. Autonomous weapons are another matter. Rebecca Crootof of Yale Law School defined an autonomous weapon system as “a weapon system that, based on inferences drawn from gathered information and pre-programmed constraints, is capable of independently selecting and engaging targets”. When human intervention is required before any action, the system would be considered “semi-autonomous”.
Although there is no internationally agreed definition of “autonomous weapon system”, the International Committee of the Red Cross (ICRC), in its report prepared for the 32nd International Conference of the Red Cross and Red Crescent in Geneva, Switzerland, in December 2015, proposed that “autonomous weapon systems” be considered:
“A generic term that would encompass any type of weapon system, whether operating in the air, on land or at sea, with autonomy in its ‘critical functions’, i.e. a weapon that can select (i.e. seek or detect, identify, track, select) and attack (i.e. use force against, neutralize, damage or destroy) targets without human intervention.”
Technologically advanced armies already have significant capabilities in AI-based weapon systems, and they are making additional efforts to research and develop autonomous weapon systems. The United States is investing heavily in intelligent weapon systems, including computers capable of “explaining their decisions to military commanders”. Such systems, currently the stuff of science fiction, could soon become a reality.
India is no exception to the growing interest in deploying AI-based weapon systems for the military. In February 2018, the Ministry of Defence (MoD) set up a task force to study the use and feasibility of AI in the Indian Army. The contents of the task force’s report, delivered to the MoD on June 30, 2018, remain confidential, but the accompanying press release states that the report, among other things, “made recommendations for policy and institutional interventions needed to regulate and encourage robust AI-based technologies for the defense sector in the country”.
India’s review of AI-based weapon systems is a step in the right direction, given our hostile neighbors and our particular problem of Naxalism. However, due consideration must be given to the various legal and ethical conundrums India would face if the use and deployment of these systems were not well regulated.
The AI-based weapon systems that could pose the most significant threats are lethal autonomous weapon systems (LAWS), also called “killer robots”. They are designed to require no human intervention once activated. They pose difficult legal and ethical challenges, since it is a machine that actually makes the decision to kill or to engage targets.
At the international level, it was decided in 2013, during the Meeting of States Parties to the CCW, that an informal meeting of experts would be held in 2014 to discuss issues relating to LAWS. India’s position in the various meetings held since 2014 has been that these weapon systems should meet the standards of international humanitarian law, that systemic controls on international armed conflict should not widen the technological gap between states, and that their use should not be divorced from the dictates of the public conscience.
Arguments for the use of AI-based weapons range from access to remote areas to reducing casualties among soldiers and non-combatants. The objection to AI-based weapons, especially autonomous systems, is that they would make it easier for countries to engage in warmongering, and that civilian and collateral casualties could be far greater.
Mr. Sharma and Dr. Rautdesai conclude that both sides of the argument have their merits and that it is futile to weigh one against the other. Suffice it to say that a plethora of legal and ethical issues arise when a country sets out to deploy AI-based weapon systems, especially ones like LAWS.