The Troubling Ethics of AI as a Force Multiplier

By Shravan Krishnan Sharma

An artificial intelligence lightweight android during a 2013 demonstration.
Image Credit: REUTERS/Fabrizio Bensch

Realist Review is pleased to provide this article courtesy of the SciencesPo Cybersecurity Association.

Napoleon has been credited with saying “he who is on the side of technology, wins.” 

This is a lesson that military planners have taken to heart time and time again, seeking out the most effective weapons of war to unleash death and damnation upon their enemies. The progression of this concept has taken militaries from strength to strength, as they have adapted force structures to integrate everything from bows and arrows to hypersonic cruise missiles.

Technology will remain a force to be reckoned with in the foreseeable future, most importantly as a force multiplier. “Force multiplication” is a concept that arose during the Cold War – essentially the targeted use of battlefield technologies to multiply the effect of a given number of combatant troops. 

What this implies for a battlefield commander is the capability to deploy effective and expendable technology alongside a smaller number of troops while still achieving the effect of a larger deployment. 

As the world pivots towards the use of artificial intelligence and autonomous weapons technologies on the battlefield, those in the business of strategic studies are left pondering the future of force multiplication given the troubling legal and ethical issues surrounding these systems. 

While autonomous fighting systems are making their debut across land, air and sea, perhaps the most interesting developments lie in the air. 

Unmanned aerial vehicles (UAVs) are not a new phenomenon. Militaries have fielded the technology since at least the end of the Cold War, and the platform became infamous during the War on Terror as it took on more offensive roles, carrying out airstrikes instead of its traditional C4ISTAR (Command, Control, Communications, Computers, Intelligence, Surveillance, Target Acquisition and Reconnaissance) roles. 

The move from using what should have been a tool to using it as a weapon raised a number of ethical questions, specifically concerning the removal of the human element from combat – i.e., the disconnection of pilots from their targets. But there is no denying that UAVs are effective. 

They have a long loiter time, they are small, easily deployable, relatively easy to maintain, and far more expendable than a multi-million-dollar fighter aircraft with a human life strapped inside of it. It was therefore no surprise that militaries would make the move towards autonomous aerial systems, to completely eliminate the human element in order to maximize efficiency. 

There are two major examples of autonomous aircraft platforms developed in the Western world: the Northrop Grumman X-47B UCAS and the Boeing Australia Airpower Teaming System (aka “Loyal Wingman”).

The X-47B is a carrier-based autonomous strike aircraft, capable of carrying out strike missions independently, taking only mission-parameter orders from its command station. 

The Loyal Wingman is designed along similar lines but operates alongside conventional fighter aircraft as a “loyal wingman,” taking its orders from a crewed aircraft. It is used to “pave the way” into a combat zone for a flight complement. 

Both aircraft are, in simple terms, advanced killer robots whose only interest is to serve the mission and protect those designated as their friendlies. Proponents argue that employing such platforms allows militaries to expand their fleets without expanding their complements of pilots, increasing mission capability by a greater proportion (force multiplication). Nobody would dispute this. 

Anyone with even an inkling of understanding of basic resource allocation economics will be able to determine the absolute operational benefits of employing these systems. 

From a strategic standpoint, the deployment of a cold, emotionless, fast-calculating computer armed with Hellfire missiles into a combat zone with a clear mission parameter to send enemy combatants to kingdom come is the ultimate Clausewitzian dream. 

Coupled with the ability to linger on the battlefield longer and perform the force-security job that would otherwise require a whole other unit, autonomous unmanned combat platforms like the Loyal Wingman and the X-47B are, without a doubt, the future of the warfighting business. 

This author’s concern, however, is ethical. It is often said that in statecraft there must be a willingness to do objectively unethical things in service of a larger goal. The oft-misattributed quote that “good people sleep soundly because rough men stand ready to do violence on their behalf” is a ringing testament to this. 

War is fundamentally the business of killing and is endemic to the human condition. However, as humanity has evolved, it has developed rules of war, and ethics around the conduct of the killing machine. 

Those rules are to be taken seriously as a recognition of the common humanity of combatants on both sides, and nations hold those who fail to fight by those rules to account. Pilots, soldiers, sailors, marines – these are all people. They are human beings and as much as they are flawed, and capable of making errors of judgment, they can be held to account when they commit errors or wilfully ignore the rules. 

Any machine, even the most intelligent, can be programmed to make seemingly objective value judgments, but a machine is not inherently a conscious or thinking being. 

Asimov’s three laws of robotics hold that a machine may not harm humans; that it must obey the orders of humans except where doing so would put humans in harm’s way; and that it must protect itself except where doing so conflicts with the previous two laws. 

How then can an artificially intelligent machine be programmed to kill? Would it be ethical for it to protect friendly humans by harming other humans deemed unfriendly? Does an autonomous machine have the base intuition to accept an enemy combatant’s surrender? Asimov may have written science fiction, but the dilemmas posed with respect to autonomous machines are relevant in the real world. 

Humanity debates questions of ethics and morality because it is self-aware of the idea that its actions have consequences. Does a machine possess similar capabilities? 

What does it say about the principles of a country that chooses to kill not even by having a human, however minimally engaged behind a screen, pull the trigger, but by leaving the decision to lines of code – a program designed to make value judgments?

What if that machine is wrong, based on incorrect information that the entire intelligence apparatus behind it has produced? Can an autonomous strike aircraft be court-martialed? Who becomes responsible for that mistaken strike? The ethical quandaries are numerous, and to condense them into a singular piece of writing would be reductive at the very least. 

I am a military man myself, and I would welcome with open arms any technology that makes my team that much more effective, that much more lethal, and that much safer on the battlefield. If fewer lives need to be risked to achieve the same effect, the objective value judgment would be to absolutely adopt the technology that does so. 

In that respect, autonomous weapons are a no-brainer – they cost less, put at least one fewer life at risk, are expendable, and have greater mission endurance than even the best-trained of humans. There is no doubt that as force multipliers, from the perspective of securing objectives and mission success, autonomous weapons platforms are the way forward. 

But war is still fundamentally a human activity, requiring human judgment and human intuition. If society is already plagued with the ethical concerns of fighting a war from behind a computer screen, even with a human pulling the trigger, can it really accept a line of code doing the same?

View the original publication of this article and learn more about SCA here.

Shravan Krishnan Sharma is a graduate student at Sciences Po’s Paris School of International Affairs, pursuing a Master’s in International Security with concentrations in Intelligence and Asian Studies. He is a regular contributor for the Realist Review.
