The challenge posed by Lethal Autonomous Weapons


The campaign against L.A.W.s faces several significant challenges. One is that military drones used for surveillance and for striking targets from a distance are already deployed in significant and growing numbers across several nations. These are generally referred to as Unmanned Aerial Vehicles (U.A.V.s) and include various American models such as the Global Hawk, Reaper, Predator and Grey Eagle, among others.

However, I am talking about a type of weapon that is likely to start appearing in the near future. I cannot quite envisage what one would look like, but I assume it would be something like a drone or, possibly later on, an upright robot with lethal capability that can function without human input. These are not some imagined weapon system inspired by science fiction so much as an ethically questionable and soon-to-be-taken next step in the development of U.A.V.s as military weapon systems.

Drones have a controversial record in military applications. Their soaring use in Somalia, Yemen, Pakistan and Afghanistan by the United States military has been raising questions for years. Presidents George W. Bush, Barack Obama and now Donald Trump have all escalated their use in the absence of conventional air power for dealing with targets. Tragically, a large number of strikes have ended disastrously, with civilians killed at funerals, weddings and family gatherings, and not surprisingly the governments of the nations where these strikes have occurred have strongly remonstrated with the operators of the drones – almost exclusively the United States military.

New Zealand has an interest, as a nation of peace, in ensuring we have no part in the development of what I expect will be a weapons system that even on its best day will find itself afoul of international law. L.A.W.s represent a move towards a future type of warfare in which humans are no longer the actual combatants, and in which battlefield-specific decisions are increasingly made by machines.

At the moment, a controller in the U.S. Air Force or Army, watching from 3,000 kilometres away, determines whether or not an assassination strike is feasible. They make a split-second judgement on whether to permit the drone to fire a Hellfire missile that, moments later, explodes in a fireball as it crashes into a target that might be a car, a house or some sort of armoured vehicle. There might be children playing in the streets, or people at the market buying food. The drone controller can instruct the drone to pull back and await further instructions.

For an autonomous weapon the difference might not seem like much, but it is potentially disastrous. There is no controller at a computer 3,000 kilometres away watching the high-resolution imagery fed back by the camera on the device; the machine itself sees everything, including the potential target. It sees a suspect outside a house with contacts. They are doing something, and there are children kicking a football around. Too close to strike – but how will anyone tell the L.A.W. not to fire its weapon?

L.A.W.s are coming, and they represent an extremely dangerous development in military drone technology. There is a closing window of time in which to build a coalition of nations that refuse to have anything to do with them. The military-industrial complex will not be happy, and nor will some politicians in both domestic and international circles, but do we honestly need to add L.A.W.s to humankind's already dreadfully diverse array of ways of killing people?

I think not.