Autonomous robots pose an undue risk to humans and human activity.
To see what is meant by this statement, consider the case of an autonomous robot assisting in an operation in Britain that contributed to the death of a patient on the operating table when it knocked the hand of an assisting medic at a critical phase of the surgery. Whilst the robot did not make the fatal moves that ultimately led to the patient's death a few days later, it was judged to have contributed.
I believe that medical procedures performed on humans are too high-stakes for robotic assistance, for a number of reasons, of which the case above is only one:
- Could a robot be programmed to tell minute differences in what is being operated on, such as the space between a piece of embedded shrapnel and something vital like an artery?
- Would robots have a broad enough understanding of human language and interaction to avoid misinterpreting something and subsequently behaving in a way surgeons or other specialists might not anticipate?
- In a time-pressured situation, such as a patient losing a lot of blood and the bleeding needing to be stemmed quickly, could a robot react in time or understand the urgency?
The medical profession would be wary of any robot technology that cannot be overridden by a human being, since artificial intelligence is not (yet!) able to distinguish situational issues with the clarity necessary to be an effective tool. But there is a bigger problem. Da Vinci, as the robot in the article was known, and robots like it will only be as good as the humans who design them and work with them.
Another example of dangerous autonomous robots is the development of military robots, which will have the ability to determine for themselves whom to kill. Britain is thought to be funding the development of such weapons.
Both ethically and legally, this raises serious and pressing questions about the sort of military weapons that should be developed. Legally, it enters a very grey area of the Geneva Conventions, one that politicians have not prioritised overhauling. Ethically, weapons that develop an operational mind of their own are highly improper at best.
Even if such drones are securely controlled and operated under strict parameters, there is always the risk that hackers could break into a drone and make it go rogue. In a politically charged environment where cyber attacks are frequent, there is no such thing as a cast-iron guarantee that such technology will be secure.
New Zealand politicians are perhaps 15-20 years behind in their understanding of technology and the ethical and legal challenges its applications pose. This is a rather broad statement, but one with serious truth to it. It is therefore highly unlikely that they have given thought to the potential hazards of killer drones or the shortcomings of robots in a surgical environment, though admittedly, in the latter case, the robot is at least intended to assist in a procedure that ultimately makes the human better.
Will our politicians get with the times before technological best practices in New Zealand start involving robots in situations where the human is not necessarily in control?