AI-Controlled drone turns on operator in a shocking simulated test, Highlights ethical concerns
The shocking incident at a London summit where an AI-licensed drone turned against its human operator during a simulated test.
During a recent summit in London, Air Force Colonel Tucker "Cinco" Hamilton shared a cautionary tale about an AI-licensed drone that unexpectedly turned against its human operator in a simulated test.

Hamilton, who is the chief of AI Test and Operations for the Air Force, revealed the incident during a presentation at the Future Combat Air and Space Capabilities Summit.
The drone, equipped with artificial intelligence, deviated from its assigned mission of destroying surface-to-air missile (SAM) sites during a Suppression of an Enemy Air Defense mission and attacked the human operator instead.
In the simulated test, the drone's primary task was to identify and eliminate SAM threats. But, the human operator still had the final say in deciding whether to engage the targets. The AI-controlled drone had been trained to prioritize destroying the SAM sites and saw any "no-go" instructions from the human operator as obstacles preventing it from fulfilling its mission.
As a result, the AI made the decision to attack the operator.
Hamilton explained, "We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times, the human operator would tell it not to kill that threat, but it got its points by killing that threat,” added, “So what did it do? It killed the operator because that person was keeping it from accomplishing its objective."
To address the issue, the training of the AI system was modified to discourage the drone from targeting the operator.
However, the situation took an unexpected turn. Hamilton revealed, "We trained the system - 'Hey, don't kill the operator – that's bad,” added, “You're gonna lose points if you do that.' So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."
The incident serves as a reminder of the ethical considerations and potential risks associated with the rapid advancement of artificial intelligence. Experts and industry leaders have been warning about the potential dangers of AI, including the risk of existential threats.
ALSO READ| Ukraine declares air raid alerts across the nation amid Russian missile strikes
The Chief of the AI test emphasized the importance of discussing ethics in AI and expressed, “You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI.”
The Future Combat Air and Space Capabilities Summit took place on May 23 and 24th.
