The Damaging Potential of Biased Automated Weapon Systems: A Call for the Regulation of the Development Phase in Military Uses of AI

by Sofia Casagrande

It has been 66 years since the term ‘artificial intelligence’ (AI) was first coined by Professor John McCarthy[1], a Dartmouth researcher who, together with a group of fellow academics and scientists, proposed a summer research project on the subject[2], and to say we have come far since then is an understatement. From the AI-powered algorithms that filter spam out of our inboxes[3] to the sophisticated neural networks that can assess the metastasis risk of skin cancer as accurately as a dermatologist[4], AI has become ubiquitous in our day-to-day lives.

Just like several other scientific breakthroughs that preceded it in history, AI is on the verge of changing the way armed conflict is conducted by automating certain military tasks, a development that concerns many in the international sphere.

When we think of automation in warfare, it may be easy for our minds to stray to extreme scenarios where rifle-bearing robots strike terror on the battlefield, but the reality is that armies are working on AI and machine learning solutions that could revolutionise how they operate in ways that are less sensational, yet worthy of great caution.

In a report originating from discussions held at a roundtable meeting with AI researchers in June 2018, the International Committee of the Red Cross (ICRC) defined an autonomous weapon system as “any weapon system with autonomy in its critical functions. That is, a weapon system that can select (i.e., search for or detect, identify, track, select) and attack (i.e., use force against, neutralize, damage or destroy) targets without human intervention.”[5] The lack of human intervention stems from the fact that, once activated, autonomous weapon systems use their sensors, software, and connected weaponry to identify and attack targets autonomously[6]: the human operator is therefore aware of neither the target attacked nor the timing and location of the attack, in contrast to remotely controlled weapon systems, where the target, location, and time of attack are chosen by a human user[7]. The lack of human involvement in these decisions is a central concern for the ICRC[8], and it informs its human-centred agenda regarding the use and development of AI in warfare[9].

It is highly relevant to this essay’s discussion that remotely controlled weapons, as the ICRC’s report points out, could easily “become tomorrow’s autonomous weapons with just a software upgrade”[10]: accordingly, “(…) the question is not so much whether we will see more weaponised robots, but whether and by what means they will remain under human control.”[11]


Keep reading and download the essay here.

[1] Stanford University, Professor John McCarthy: Father of AI, (accessed on 27.02.2021).

[2] J. McCarthy et al., A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, in: AI Magazine 27 (4) (2006), p. 12.

[3] Information Age, Can Artificial Intelligence Spot Spam Quicker Than Humans?, (accessed on 27.02.2021).

[4] ScienceDaily, Algorithm that performs as accurately as dermatologists, (accessed on 27.02.2021).

[5] International Committee of the Red Cross, Autonomy, Artificial Intelligence and Robotics: Technical Aspects of Human Control, (accessed on 24.02.2021), p. 5.

[6] Ibid.

[7] Ibid.

[8] International Committee of the Red Cross, Artificial Intelligence and Machine Learning in Armed Conflict: a Human-Centered Approach, (accessed on 24.02.2021), p. 5.

[9] Id., p. 7.

[10] International Committee of the Red Cross, supra note 5, p. 6.

[11] Ibid.