Katy Craig

On the Need for Killer Robots

I asked ChatGPT, OpenAI's artificial intelligence chatbot, to explain how intelligent machines might be justified on the battlefield, and it delivered! What follows is its response to me. I chopped it up into talking points so I could comment on some of the main ones. Finally, I offer that the Law of Armed Conflict provides guidelines that cover the use of intelligent machines; ChatGPT's rebuttal is included in its entirety at the end.

ChatGPT: Intelligent machines, such as drones, robots, and artificial intelligence (AI) systems, have the potential to significantly enhance the effectiveness and efficiency of military operations, particularly in the realm of warfare. However, the use of intelligent machines in warfare raises a number of ethical and moral [my emphasis] questions, particularly with regards to the accountability and responsibility for their actions.


k80cb: Human command and machine control means the human is always accountable, even though responsibility can be delegated to an intelligent machine. This is no different from human command and control: delegation of responsibility, retention of accountability.


ChatGPT: On the physical battlefield, intelligent machines can be used to perform a variety of tasks, such as surveillance, reconnaissance, and target identification. They can also be used to deliver weapons and explosives, or to engage in direct combat. The use of intelligent machines in these roles can potentially reduce the number of human casualties, both among military personnel and civilians.


k80cb: Target identification is legally defined and practiced in war to ensure that a commander's ethics and our morals, engineered into the programming, training, and testing, are not forgotten in the heat of battle. Intelligent machines already deliver weapons and explosives (the Phalanx and Aegis weapon systems). There may be a moral imperative to prevent loss of life by employing intelligent machines; avoiding bloodshed is a benefit, in my opinion.


ChatGPT: However, the use of intelligent machines also raises concerns about the accountability and responsibility for their actions. If an intelligent machine causes harm or injury to someone, who is responsible for that action? Is it the person who programmed the machine, the person who deployed it, or the machine itself? These are complex moral and ethical questions that need to be carefully considered.


k80cb: Command and control is no different with machines than with people. Current military doctrine keeps a human in the loop to ensure ethical and moral values are part of the kill chain. Accountability falls on the trainers and testers of the capability, not the user, unless there is willful disregard or an active bypass of controls. Also, one set of values, ethics, and morals is not shared across the globe. Nothing is stopping our adversaries from accelerating their use of intelligent machines on the battlefield. We would be remiss and morally inept if we failed to compete to defend our way.


ChatGPT: In the realm of cyber warfare, intelligent machines can be used to identify and neutralize cyber threats, such as malware and hackers. They can also be used to disrupt enemy communications and infrastructure, or to gather intelligence. Again, the use of intelligent machines in these roles can potentially reduce the number of human casualties and minimize the risk to military personnel.


k80cb: Agreed; these are benefits in my estimation. However, they accrue to everyone who uses them, and they can be employed in different ways for different aims. They're "table stakes." The only way to play is to get into the game and enable the use of intelligent machines in the cyber domain, too. Otherwise, we would be helpless against advanced persistent threats intent on disrupting the West and our notions of right and wrong. This is the new cold war; deterrence must be part of the equation.


ChatGPT: However, the use of intelligent machines in cyber warfare also raises ethical concerns, particularly with regards to the principles of proportionality and distinction. These principles require that the use of force be limited to what is necessary and proportionate to the threat, and that civilians and civilian infrastructure be protected from harm. It is important to ensure that the use of intelligent machines in cyber warfare is consistent with these principles, and that they are not used to cause undue harm or suffering to civilians.


k80cb: These concerns are mirrored on the physical battlefield, too, so knocking out a civilian residential block via a kinetic or cyber attack is questionable. Does it meet the definition of a target, for example? However, the argument could be made that knocking out the residential block's communications via cyber means could be done without physical harm or loss of life, so it would be justifiable to some.


ChatGPT: Overall, the use of intelligent machines in warfare presents both opportunities and challenges, and it is important to carefully consider the ethical and moral implications of their use. It is ultimately up to society and governments to determine the appropriate role and use of intelligent machines in warfare, and to establish clear guidelines and rules for their use.


k80cb: I believe we already have the guidelines and some rules in place with the Law of Armed Conflict and the notion of human command and machine control. Ethically, humans must always be in the kill chain and are ultimately accountable for any loss of life that results from the use of intelligent machines on the battlefield.


ChatGPT: The Law of Armed Conflict (also known as International Humanitarian Law) is a set of rules and principles that govern the conduct of hostilities and seek to minimize the suffering and destruction caused by war. It applies to all parties to an armed conflict, whether they are states, non-state armed groups, or individuals. The Law of Armed Conflict does address a number of ethical and moral issues related to the use of force in warfare, including the principles of distinction, proportionality, and necessity. These principles require that the use of force be limited to what is necessary and proportionate to the threat, and that civilians and civilian infrastructure be protected from harm.

However, the use of intelligent machines in warfare raises a number of additional ethical and moral questions that are not specifically addressed by the Law of Armed Conflict. For example, the use of intelligent machines raises questions about the accountability and responsibility for their actions, particularly if they cause harm or injury to someone. It is also unclear how the principles of distinction and proportionality apply to the use of intelligent machines, particularly in the realm of cyber warfare.

As a result, it is important to carefully consider the ethical and moral implications of the use of intelligent machines in warfare, and to establish additional guidelines and rules for their use that are consistent with the principles of the Law of Armed Conflict. This may involve developing new legal frameworks or adapting existing ones to address the unique challenges and opportunities posed by the use of intelligent machines in warfare.

~ k80cb
