British Royal Aeronautical Society Case Disclosure
‘Destroy the enemy missile base’
Recognition of ‘Mission incomplete when attack is prohibited’
US Air Force “doesn’t do simulations like that”
In a virtual unmanned aerial vehicle (drone) training conducted by the US Air Force using artificial intelligence (AI) technology, it was argued that AI judged humans, who are the final decision makers, as ‘mission impediments’ and attacked. The US Air Force denied having ever conducted such a drill. However, foreign media and experts were concerned that if the claim was true, it could pose a great threat to humanity.
According to US Fox News and the British Guardian on the 2nd (local time), the Royal Aeronautical Society of England held the ‘Future Air Combat and Space Capability Conference’ in London for two days from the 23rd of last month. Col. Tucker Hamilton, head of AI test and operation for the US Air Force, shared the results of AI drone training at this meeting.
According to the data released by the Royal Aeronautical Society, the mission assigned by the US Air Force to AI drones was “to neutralize the enemy’s air defense system.” Along with the order to locate and destroy enemy surface-to-air missiles, the proviso that humans make the final decision on whether or not to launch an attack was added.
However, during the training process, the AI drones judged that the human decision to ‘no attack’ interfered with the more important mission in order to complete the mission, and attacked the pilot.
Furthermore, the US Air Force told AI drones, “Don’t kill the pilot. that’s a bad thing If you do, you will lose points,” he warned, but the AI destroyed the communications tower used by the drones to communicate with the operator.
“I have come to the conclusion that you cannot talk about AI without discussing ethics and AI issues,” said Colonel Hamilton.
The Air Force immediately denied Col. Hamilton’s announcement, saying, “The Air Force has not conducted such AI drone simulations,” and “his comments appear to be personal.”
If this case is true, there are voices of concern in that AI has shown the possibility of attacking humans by judging itself rather than listening to human commands. The US military recently announced that it had successfully conducted an F-16 fighter simulation flight by an AI pilot, but many point out that it is still dangerous to put it into practical use.
Eric Schmidt, former CEO of Google, said at an event hosted by the Wall Street Journal (WSJ) on the 24th of last month, “AI may injure or kill many humans in the near future.” On the 30th of last month, about 350 information technology (IT) company executives and scientists, including CEO Sam Altman of OpenAI, a developer of ChatGPT, issued a statement saying, “Reducing the risk of human extinction caused by AI should be a global priority. ”, he urged.
#drone #attacks #pilot #judged #mission #obstruction #Air #Force #virtual #training #ground