A U.S. Air Force colonel who had told how during a simulation a drone controlled by Artificial Intelligence (AI) had turned against its human operator, now claims that he misspoke and that the experiment never happened, that it was all a «mental exercise».
On May 23-24, the Royal Aeronautical Society hosted the Future Combat Air & Space Capabilities Summit at its London headquarters, bringing together nearly 70 speakers and more than 200 delegates from the military, academia and the media from around the world to discuss the size and shape of future combat air and space capabilities.
During the conference, various topics were addressed, including the development of Artificial Intelligence and its application on the future battlefield.
The Skynet Situation
One of the presentations was given by Col. Tucker «Cinco» Hamilton, USAF Chief of Artificial Intelligence Test and Operations, who offered insight into the advantages and perils of more autonomous weapons systems.
Having been involved in the development of the life-saving Auto-GCAS system on F-16s, Hamilton is now involved in cutting-edge flight testing of autonomous systems, including unmanned F-16s capable of aerial dogfights (an experiment called Alpha Dogfight). However, he warned of the danger of relying too much on AI, noting how easy it is to fool it and that it is capable of creating very unexpected strategies to achieve its goal.
As Hamilton reportedly recounted, during a simulated test, an AI-enabled drone was tasked with a SEAD mission to identify and destroy SAM sites, with the final decision on whether or not to attack being made by a human. However, after being «boosted» in training to make destruction of SAM batteries the preferred option, the AI decided that the best way to achieve its primary objective was to take out the human operator who had the ability to call off the attack order.
Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
See also: Artificial Intelligence: Lockheed Martin and Red Hat to collaborate on Military Drone Systems
Repercussions in the global media
Alarm bells rang in both specialized and mass media, as an artificial intelligence had turned against its human creators, somehow confirming the apocalyptic forecasts for humanity predicted in science fiction works such as Terminator, Matrix, and so many others.
It was all a misunderstanding…let’s hope
Today, however, Colonel Tucker «Five» Hamilton contacted AEROSPACE magazine (publication of the Royal Aeronautical Society) to clarify that he «misspoke» during his presentation at the Future Combat Air & Space Capabilities Summit and that the «rogue AI drone simulation» was a «thought experiment,» just a hypothetical case based on plausible scenarios and likely outcomes, rather than an actual USAF simulation.
UPDATED: Highlights from the Future Combat Air and Space Capabilities Summit #AI #drones #GCAP #Tempest #USAF #RAF #FCAS #FCAS23 https://t.co/cNgqzIP50g pic.twitter.com/DU2XQLrPPj
— Royal Aeronautical Society (@AeroSociety) June 2, 2023
Hamilton clarified: «We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome». He assured that the USAF has not tested any armed AI in this way (real or simulated) and states, «Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI».
According to Hamilton, the purpose of his presentation was to emphasize that one cannot talk about artificial intelligence without also developing an ethics of artificial intelligence.