A scientist has said that the idea that robots could one day reliably tell friend from foe is deeply flawed, and that this is why they cannot be trusted with weapons.
According to a report in New Scientist, the assertion was made by roboticist Noel Sharkey of the University of Sheffield in the UK.
He was commenting on a report calling for weapon-wielding military robots to be programmed with the same ethical rules of engagement as human soldiers.
The report, which was funded by the Pentagon, says firms rushing to fulfil the requirement for one-third of US forces to be uncrewed by 2015 risk leaving ethical concerns by the wayside.
“Fully autonomous systems are in place right now,” warned Patrick Lin, the study’s author at California State Polytechnic in San Luis Obispo.
“The US navy’s Phalanx system, for instance, can identify, target, and shoot down missiles without human authorization,” he said.
While Sharkey applauds the report’s broad coverage of the issue, he says it is far too optimistic.
“It trots out the old notion that robots could behave more ethically than soldiers because they don’t get angry or seek revenge,” he said.
“But robots don’t have human empathy and can’t exercise judgement, and as they can’t discriminate between innocent bystanders and soldiers, they should not make judgements about lethal force,” he added.