Thought Leadership

[Image: combat robot]
Flash War – the killer robots are coming

It is not only the quantity but above all the quality of armament since the beginning of the 2020s that is alarming. All superpowers have been working feverishly for years on the development of so-called ‘lethal autonomous weapon systems’ (LAWS), also known as ‘killer robots’. These include shooting robots, self-directed drones, autonomous fighter aircraft, and who knows what else is being developed in military laboratories. At its core, this is about weapons of mass destruction that are capable of killing largely without human intervention, enabled by artificial intelligence (AI). These AI weapons are designed to select targets independently, fire independently, and continuously learn through ‘machine learning’ how to inflict the greatest possible damage on the enemy. ‘Flash war’ is what experts call the use of these soulless killing machines.

The German Federal Foreign Office described this horror scenario of the future as follows in 2019: ‘Shooting robots, large-scale hacker attacks on power supply systems, or new types of AI-supported missile systems have long since become a real danger for warfare in the 21st century. Artificial intelligence in weapons systems must not lead to killer robots waging the wars of the future without any human control. We need an international ban on these systems.’

UN has been dealing with autonomous weapons since 2014

Since 2014, the United Nations has been trying to rein in these soulless killing machines. The discussions have taken place primarily from the perspective of ethics and international law. Proponents argued that fewer people would die because autonomous weapons can aim more accurately than humans. Opponents rejected on principle the idea of robots, in whatever form, deciding independently over human lives.

At the United Nations, a ban on lethal autonomous weapons was never seriously considered because it seemed unrealistic from the outset. Understandably so: many countries regard autonomous weapons systems as the future of warfare and will not even contemplate a ban of the kind called for by reputable non-governmental organisations such as Amnesty International or the Diplomatic Council (DC). At least the UN has set itself the goal of establishing global rules for flash war. For regardless of ethical considerations, there is a real danger that governments around the world will deploy autonomous weapons more quickly and carelessly because there seem to be no casualties to fear on the ‘sending side’. Misjudgements and misunderstandings could escalate faster, increasing the likelihood of ‘accidental wars’. Not least for this reason, the international community has largely agreed at all previous UN conferences on the subject that the development and use of autonomous weapons systems should be subject to international law, over and above the ethical, legal, operational and technical challenges involved.

From ChatGPT to AI combat robots

But in an age when ChatGPT is part of everyday life, it is probably unrealistic to expect that arms manufacturers, of all people, will refrain from using AI weapons. It would be equally illusory to believe that world powers such as the USA, China or Russia will not spend enormous sums of money to achieve military AI superiority. Other states in Europe, the Middle East or Asia may feel compelled to develop AI weapons of their own in order to avoid becoming militarily defenceless in the future. This makes the next arms race seem inevitable. It is common knowledge that the US, China, Israel, South Korea, Russia and the UK are already heavily promoting the development of autonomous weapons systems.

Some countries with a strong arms export business, such as the Federal Republic of Germany, will certainly take the new generation of killer machines into account in their future national arms control policies. Nevertheless, you don't have to be a pessimist to see the danger that these killing machines could fall into the hands of terrorists. And it doesn't take a prophet to predict that the United Nations will continue to warn against precisely that, and will presumably establish rules for it as well. But a general worldwide ban on AI-supported autonomous weapon systems that is actually enforced by the UN seems unlikely, to say the least.

It is to be feared that this development, which only a few decades ago seemed like science fiction, is well on its way to becoming reality, and in some cases has already arrived.