15 April 2019

Anthony Pfaff, military ethics expert, invited to the MAS Conference: “The moral peril of proxy war”

Mihai Draghici

Proxy conflicts raise major moral concerns, and the development of military systems with artificial intelligence must adhere to ethical standards, says Anthony Pfaff, a specialist in the military profession and ethics and one of the guests at the MAS Conference taking place on Tuesday.


“It’s not an accident that U.S. support for Saudi Arabia’s war in Yemen has been a humanitarian disaster. (…) Proxy conflicts are complicated in ways that are not fully accounted for by standard moral frameworks,” says Anthony Pfaff, research professor for the Military Profession and Ethic, in an article written with Patrick Granfield and published in Foreign Policy in April 2019 under the title “The moral peril of proxy war.”

“The ethical problems associated with lethal autonomous weapons are not going to go away as the development, acquisition, and employment of artificially intelligent systems challenge the traditional norms associated not just with warfighting but morality in general.[4] Among the many concerns associated with developing lethal autonomous weapon systems driven by artificial intelligence is that they will dehumanize warfare.[5] On the surface this seems like an odd case to make. War may be a human activity, but rarely does it feel to those involved like a particularly humane activity, bringing out the worst in humans more often than it brings out the best. Moreover, lethal autonomous weapons and decision support systems are often not only more precise than their human counterparts, they do not suffer from emotions such as anger, revenge, frustration, and others that give rise to war crimes. So, if these systems can reduce some of the cruelty and pain war inevitably brings, then it is reasonable to question whether dehumanizing war is really a bad thing”, says Anthony Pfaff.

“What this analysis has shown is the arguments for considering military artificial-intelligence systems, even fully autonomous ones, mala in se are on shakier ground than those that permit their use. It is possible to demonstrate respect for persons even in cases where the machine is making all the decisions. This point suggests that it is possible to align effective development of artificial-intelligence systems with our moral commitments and conform to the war convention. Thus, calls to eliminate or strictly reduce the employment of such weapons are off base. If done right, the development and employment of such weapons can better deter war or, failing that, reduce the harms caused by war. If done wrong, however, these same weapons can encourage militaristic responses when other non-violent alternatives are available, resulting in atrocities for which no one is accountable and desensitizing soldiers to the killing they do. Doing it right means applying respect for persons not just when employing such systems but also at all phases of the design and acquisition process to ensure their capabilities improve our ability not just to reduce risk but also to demonstrate compassion”, concluded Anthony Pfaff.

Dr. Anthony Pfaff is currently the research professor for the Military Profession and Ethic at the Strategic Studies Institute (SSI), U.S. Army War College, Carlisle, PA. A retired Army colonel and Foreign Area Officer (FAO) for the Middle East and North Africa, Dr. Pfaff recently served as Director for Iraq on the National Security Council Staff.

Anthony Pfaff is an invited speaker at the conference “Transatlantic Security Bridges Over Increasing Security Gaps Vision - Romania's perspective”, organized by Defence and Security Monitor on Tuesday, 16 April.

The event can be watched live on https://monitorulapararii.ro, www.mediafax.ro, and www.gandul.info, as well as on the Facebook pages of these outlets.