
The Morality of Drone Attacks

Human beings are attempting to codify the set of ideal characteristics, virtues, moral commands, moral dispositions and even the capacity of judgment into a set of algorithms to then impart to a machine. However, for thousands of years, humanity has attempted to create good people, without much success.

The U.S. military killed al-Qaeda propagandist Anwar al-Awlaki in a drone strike in the deserts of Yemen. While much of the American public discourse has focused on the dilemma of whether al-Awlaki's U.S. citizenship should have entitled him to due process of law before the assassination, there is another ethical debate that needs to be raised: Where are drone attacks leading the future conduct of war?

There is a general consensus about the effectiveness of drones as weapons of war -- they are cheap, accurate and spare the need to put boots on the ground. But the U.S. government is attempting to further increase automation in warfare by pursuing the creation of lethal autonomous robots (LARS). This pursuit of full artificial intelligence and autonomy for weapons is leading some researchers and academics to ask whether it is possible to make such robots act ethically. Even Canada's Department of National Defence has said it is interested in acquiring such technology, assuming the "right balance" can be struck between enhanced robotic autonomy, potential costs and issues of legality and morality.

Much attention is starting to be paid to the question of whether computer programmers, software designers and engineers can build a fully autonomous robot: one that is artificially intelligent, can make moral judgments and can act in accordance with the rules of war without a human being's directives. These digital pioneers believe they will be able to do so by programming grand moral theories alongside the laws of armed conflict and rules of engagement.

Unfortunately, this seems at best hubristic and at worst dangerous. It is hubristic to think humans can program machines to act ethically when we cannot program ourselves to do so consistently, and it would be dangerous to unleash weapons that cannot, ultimately, be controlled.

The consensus among roboticists seems to be that the most desirable programming tactic for future LARS is a mixture of "top-down" and "bottom-up" programming. Top-down programming, crudely, consists of designing a set of algorithms that encode explicit commands, for instance, "never harm non-combatants." By contrast, bottom-up software design allows artificial agents to learn through experience: the robot works through a set of rewards, seeks patterns, and thus learns.

Both approaches, roboticists agree, have their limitations. One limitation of top-down programming is that software designers would have to foresee every possible situation and code an algorithm accordingly. Another is that such rigid systems might encounter conflicting commands and leave the machine without a clear directive. Bottom-up approaches have worries of their own. Chief among them is that, just like human beings, robots can "go bad": if an artificial agent learns that a particular behaviour pays off, it might continue to act in this way, even if the action violates the laws of war, morality or simply common sense. It is impossible to predict how an artificial intelligence would perceive the world we live in, especially in a combat situation.

The solution, the experts feel, is a hybrid of the two techniques: the machines will be allowed to learn, but programmed commands will forbid them from taking certain actions.
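To make the hybrid idea concrete, here is a minimal, purely hypothetical sketch in Python. The action names, the FORBIDDEN list and the reward numbers are all invented for illustration; the point is only the architecture described above: a bottom-up learner updates action values from rewards, while a top-down filter vetoes prohibited actions before any choice is made.

```python
import random

# Purely illustrative: a hypothetical action set and a hard "top-down" prohibition list.
ACTIONS = ["observe", "warn", "disable_vehicle", "strike_building"]
FORBIDDEN = {"strike_building"}  # stands in for a command like "never harm non-combatants"

# "Bottom-up" component: the agent estimates action values from the rewards it receives.
values = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}

def choose_action(epsilon=0.1):
    """Pick a learned action, but let the top-down rules veto anything forbidden."""
    permitted = [a for a in ACTIONS if a not in FORBIDDEN]  # hard constraint filter
    if random.random() < epsilon:                           # occasional exploration
        return random.choice(permitted)
    return max(permitted, key=lambda a: values[a])          # otherwise exploit what was learned

def learn(action, reward):
    """Update the running average value of the chosen action."""
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

# Toy training loop with a made-up reward signal; the forbidden action is never selected,
# no matter how highly the environment would reward it.
for _ in range(100):
    a = choose_action()
    reward = {"observe": 0.1, "warn": 0.5, "disable_vehicle": 0.8, "strike_building": 1.0}[a]
    learn(a, reward)
```

Even in this toy, the familiar worries reappear: the prohibition list covers only the situations its authors anticipated, and the learner optimizes whatever reward signal it is given, not the laws of war.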

A recent report compiled for the U.S. Office of Naval Research argues that this hybrid approach should take on the contours of virtue ethics, in which what counts as ethical in a given situation depends on the character of the agent who acts. LARS should be programmed to have "moral character," and in particular the "ideal character traits of a warfighter." This raises ethical issues for the human programmers as well -- would a human-designed robot, with moral agency, be a slave if programmed to act only within a narrow range of options?

Human beings are attempting to codify the set of ideal characteristics, virtues, moral commands, moral dispositions and even the capacity for judgment into a set of algorithms to impart to a machine. Yet for thousands of years, humanity has not only questioned what morality requires but also attempted, without much success, to create good people.

In the worst case, the push to create fully autonomous lethal robots and to impart these weapons with artificial morality flirts with an unknown danger. Aside from the Hollywood-style fears, profitably depicted in television programs such as Battlestar Galactica and films such as Terminator, that these deadly robots will threaten the human race with annihilation, there is a serious worry that designing ethical, and effective, software for these robots will prove impossible.

Free will involves, even at its most basic level, more than one option being available to the agent, be it human or robotic. If governments truly want to place machines in human soldiers' stead on the battlefield, there is no guarantee that LARS will in fact act ethically, despite assurances that the laws of war will be enshrined as commands.

The ethical debate about morality, law, and war is a concern of jurists, ethicists and the public at large. Anwar al-Awlaki is just the beginning.

This piece was co-authored with Heather Roff. It first appeared in the Full Comment section of The National Post.

Heather Roff is an assistant professor at the University of Waterloo. Bessma Momani is a senior fellow at The Centre for International Governance Innovation and an associate professor at the University of Waterloo.
