The U.S. military has made clear that any future robot weapons systems will always need human authorisation before opening fire on human targets. In a new policy directive, the Department of Defense states that semi-autonomous weapons systems must be designed so that a human operator has to authorise them to open fire.
The promise comes after a Human Rights Watch report called for an international ban on ‘killer robots’, which the group warned could be deployed within 20 years.
Soon after that report was published, Deputy Defense Secretary Ashton Carter signed a series of instructions ‘to minimise failures that could lead to unintended engagements or to loss of control’ of armed robots.
Policy directive 3000.09 says: ‘Semi-autonomous weapon systems that are onboard or integrated with unmanned platforms must be designed such that, in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorised human operator.’
To ensure this is the case, the Pentagon requires that the hardware and software controlling robot weapons come equipped with ‘safeties, anti-tamper mechanisms, and information assurance’. The systems must also be designed with proper ‘human-machine interfaces and controls’. Above all, they must ‘allow commanders and operators to exercise appropriate levels of human judgement over the use of force’.
The Pentagon’s promise comes after a joint Human Rights Watch and Harvard Law School report raised the alarm over the ethics of allowing robots to take decisions as to when to open fire on humans.
Although no U.S. drone is yet able to pull the trigger without a human operator’s direction, the report warned that militaries worldwide are ‘very excited’ about machines that could one day be deployed alone in battle.

Death from above: A U.S. Air Force MQ-9 Reaper drone takes off from Kandahar, Afghanistan
(via dailymail.co.uk)