25 November 2019

The ethics of killer robots comes up for debate in the American artificial intelligence community

Niculae Iancu

At the beginning of the month, the US Defence Innovation Board published a set of guidelines to be followed both by the defence agencies interested in developing military capabilities based on artificial intelligence and by the force structures responsible for using them on the battlefield. The Defence Innovation Board is an independent federal advisory committee composed of experts in artificial intelligence and complementary fields working in industry, academia and non-governmental organizations, whose purpose is to support military leaders’ decisions with studies and analyses in these specific domains. The published principles are not mandatory for the Pentagon, but they offer a common denominator for all US defence agencies in approaching a topic as sensitive as producing and using “killer robots” on the battlefield. The recommendations address only the novel ethical aspects of using artificial intelligence in defence, which are not covered by the existing legal and institutional framework for managing this issue.

Image source: Mediafax

Five ethical guidelines for using artificial intelligence in defence

The process of establishing the set of ethical principles to be followed by the Defence Department in developing and operating super-intelligent weapons based on artificial intelligence was initiated in the middle of last year, against the background of the Pentagon’s concerns over preserving the US’s global lead in advanced dual-use technologies and, above all, in artificial intelligence, which has become the new battlefield of the “global competition between super powers”. Also, at the beginning of the year, President Donald Trump signed the Executive Order on maintaining American leadership in the field of artificial intelligence. On that occasion, presenting the new strategy alongside Lieutenant General Jack Shanahan, the first director of the Defence Department’s Joint Artificial Intelligence Center, the defence press officer stated that “the executive order is paramount for our country to remain the leader in AI and will not only increase the prosperity but also enhance our national security”, as I noted in my analysis at the time.

Over the past year and a half, the Defence Innovation Board has carried out a complex analysis of the ethics of military applications of artificial intelligence, with the support of industry, academia and the private sector. According to the Defence Department, “the board also led multiple public listening sessions, interviewed more than 100 stakeholders and held monthly meetings of an informal DOD working group in which representatives of partner nations also participated. The board also conducted two practical exercises with leaders and subject matter experts from DOD, the intelligence community and academia”. The entire process took place under the aegis of the National Defence Strategy, adopted by President Trump in 2018, in which artificial intelligence ranks among the technologies that could “change the character of war” and is regarded as a top priority for investments in the research and development of new capabilities.

At the end of the process, five principles were established:

1. Responsibility. Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use and outcomes of the Defence Department’s artificial intelligence systems.

Human control over the behavior of super-intelligent weapons is probably the hottest topic in current debates on the future of war. Accelerated technological development and fierce competition for dominance of the hyper-technologized defence space may push decision-makers to accept ever greater compromises on the level of autonomy granted to artificial intelligence in making critical battlefield decisions. “Responsibility” should therefore not remain a mere recommendation; it should be enshrined as an international norm, promoted through international treaties dedicated to the non-proliferation of technologies that could escape human control entirely.

2. Equitability. DOD should take deliberate steps to avoid unintended bias in the development and deployment of combat or noncombat AI systems that would inadvertently cause harm to persons.

Discrimination in artificial intelligence decision-making could stem from the way intelligent weapons are designed to distinguish “enemies” from “friends” on the battlefield. In fact, the topic is nothing new: the history of conflict is full of examples of misinterpretation of combatants’ status, errors that led to tragedies now recorded in the lessons-learned books of the great military powers’ armies. Consequently, the programming logic of super-technologized equipment will have to prevent misinterpretations and possible malfunctions that could harm soldiers or civilians, or accidentally destroy equipment.
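As a purely illustrative sketch of what such a safeguard might look like in code (hypothetical Python; the labels, confidence threshold and rule of deferring to a human operator are assumptions of this example, not anything specified by the board):

    from dataclasses import dataclass

    @dataclass
    class Classification:
        label: str         # e.g. "combatant", "civilian", "unknown"
        confidence: float  # model confidence in [0.0, 1.0]

    # Threshold below which the system must defer to a human (assumed value).
    CONFIDENCE_THRESHOLD = 0.99

    def decide(c: Classification) -> str:
        """Return an action; never engage on an uncertain or protected label."""
        if c.label != "combatant":
            return "hold fire"
        if c.confidence < CONFIDENCE_THRESHOLD:
            return "defer to human operator"
        return "request human authorization"  # a human still confirms engagement

    if __name__ == "__main__":
        print(decide(Classification("combatant", 0.72)))  # defer to human operator
        print(decide(Classification("civilian", 0.99)))   # hold fire

Even in this toy form, the essential design choice is visible: uncertainty never defaults to engagement.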

3. Traceability. DOD's AI engineering discipline should be sufficiently advanced such that technical experts possess an appropriate understanding of the technology, development processes and operational methods of its AI systems, including transparent and auditable methodologies, data sources, and design procedure and documentation.

In practice, advanced quality-assurance systems already cover the main transparency and control requirements of complex systems engineering. The innovation board nevertheless considered it important to elevate the general rules of engineering discipline to the rank of an ethical principle, most likely to prevent any improvisation in the development and operation of super-intelligent weapon systems. Furthermore, the principle seeks to impose a certain rigour in the handling of sensitive information within the defence artificial intelligence ecosystem, even though the instinct of informational self-protection should already be ingrained in national defence and security personnel.
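The auditability side of traceability can be pictured with a minimal sketch (hypothetical Python; the log file, record fields and model version are assumptions made for illustration): every decision an AI component makes is recorded together with the model version, its inputs and its output, so engineers can later reconstruct why the system behaved as it did.

    import json
    import time

    AUDIT_LOG = "decisions.jsonl"  # assumed log location, illustration only

    def audited(model_version: str):
        """Decorator that appends an audit record for every call."""
        def wrap(fn):
            def inner(*args, **kwargs):
                result = fn(*args, **kwargs)
                record = {
                    "timestamp": time.time(),
                    "model_version": model_version,
                    "function": fn.__name__,
                    "inputs": {"args": repr(args), "kwargs": repr(kwargs)},
                    "output": repr(result),
                }
                with open(AUDIT_LOG, "a") as f:
                    f.write(json.dumps(record) + "\n")
                return result
            return inner
        return wrap

    @audited(model_version="demo-0.1")
    def classify(sensor_reading: float) -> str:
        # Placeholder decision logic standing in for a real model.
        return "alert" if sensor_reading > 0.5 else "ignore"

    if __name__ == "__main__":
        classify(0.7)  # appends one auditable record to decisions.jsonl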

4. Reliability. DOD AI systems should have an explicit, well-defined domain of use, and the safety, security and robustness of such systems should be tested and assured across their entire life cycle within that domain of use.

The “reliability” principle governs the entire American institutional architecture responsible for the research, development, production, procurement and operation of weapon systems. In fact, the Defence Department has built the most advanced organizational framework for effectively managing the preservation of American global technological supremacy, which is essential to maintaining the US’s position as a global military superpower. This ethical principle therefore grows out of the American tradition of defence acquisition management, which has inspired most of the similar systems adopted by NATO member states.

5. Governance. DOD AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and for human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior.

The “governance” principle completes the set of ethical principles for the use of artificial intelligence, focusing on the importance of preserving the possibility of human intervention at different phases of combat operations, in order to avoid the unintended or inappropriate escalation of destructive force against the enemy. Control will also be needed to prevent targeted destructive effects from rebounding on, and unintentionally endangering the lives of, the systems’ own operators.
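The “human or automated disengagement” requirement can likewise be sketched in miniature (hypothetical Python; the engagement-rate limit and the watchdog design are assumptions of this example, not a description of any real system): a monitor that deactivates a deployed system whose behavior exceeds what its mission profile allows, while always honouring a human abort command.

    import time

    class EngagementWatchdog:
        """Disengages a system that shows unintended escalatory behavior."""

        def __init__(self, max_engagements_per_minute: int):
            self.limit = max_engagements_per_minute  # assumed mission parameter
            self.events: list[float] = []
            self.active = True

        def record_engagement(self) -> None:
            if not self.active:
                return
            now = time.time()
            # Keep only engagements from the last 60 seconds.
            self.events = [t for t in self.events if now - t < 60.0]
            self.events.append(now)
            if len(self.events) > self.limit:
                self.deactivate("automated: escalatory behavior detected")

        def human_abort(self) -> None:
            # A human operator can always deactivate the system directly.
            self.deactivate("human operator command")

        def deactivate(self, reason: str) -> None:
            self.active = False
            print(f"system disengaged ({reason})")

    if __name__ == "__main__":
        wd = EngagementWatchdog(max_engagements_per_minute=2)
        for _ in range(3):
            wd.record_engagement()  # the third event trips the automated cut-off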

Recommendations for implementing the ethical principles

The Defence Innovation Board has also drawn up a set of recommendations to support the implementation of these principles within the Defence Department, even though the responsibility ultimately rests in the hands of military officials. In fact, Lieutenant General Jack Shanahan has expressed his hope that “the recommendations will set the standard for the responsible and ethical use of such tools” (those based on artificial intelligence).

These recommendations touch upon and are reflected in all organizational dimensions of the military institution, from the decision-making level and the chain of command down to research, development, production, education and training. The emphasis falls on anchoring every engineering phase of a military system’s life cycle to the ethical principles, by imposing performance standards that give firm guarantees that future developments will remain under human control. In fact, the entire architecture of actions and responsibilities proposed by the innovation board should be coordinated by a strategic steering committee under the direct authority of the head of the Defence Department. This committee should be responsible for overseeing how well the developments stimulated by the Pentagon’s strategic objectives on artificial intelligence conform to the five ethical principles. Also, in the Defence Innovation Board’s view, the transparency of the process will be ensured by an annual debate, within an international conference, of the main ethical challenges arising in the artificial intelligence domain.

Conclusions

Artificial intelligence may become the most important technology in the history of humankind. Artificial intelligence algorithms are applications that can collect, organize, analyze and exploit existing data in order to make decisions without human intervention. Such applications are used on a large scale in the business environment, but they have also proved their worth in managing the large volumes of information specific to national security and defence. However, the idea of extending artificial intelligence to the command and control of weapon systems has generated many debates lately, at least within democratic societies, and some still question the morality of leaving the life-and-death decisions specific to war in the hands of machine learning. The Defence Innovation Board therefore recommends that future complex weapon systems based on artificial intelligence be “responsible”, “equitable”, “traceable” and “governable”, and that they offer the “reliability” that they will not lead to tragedies produced by error or by lack of control.

Despite the many ambiguities surrounding the impact of military applications of artificial intelligence, there is quasi-general agreement on their potential to change the logic of future wars. Moreover, many studies argue that artificial intelligence will be present on the battlefield even sooner than we would expect, especially in asymmetrical conflicts. What worries people today, at least with regard to the equipping of armed forces, is the possibility that violent non-state actors might get hold of and use such systems. In this respect, unilateral actions are not enough. Along with imposing clear ethical standards in the field of advanced military technologies, governments will also have to create a space for debate and cooperation in order to adopt concrete common measures to stop and control the proliferation of killer robots.

Articles related to this topic

Pentagon’s new strategy on artificial intelligence and killer robots’ ethic

The new dictionary of defence: disruptive technologies

Translated by Andreea Soare