Air Force colonel misspoke when he said drone killed operator

A US Air Force colonel misspoke when he said last month that an AI-controlled drone killed its operator in a simulated test because the operator was attempting to override its mission.

The confusion began with the circulation of a blogpost from the Royal Aeronautical Society describing a presentation by Col Tucker "Cinco" Hamilton, the chief of AI test and operations with the US Air Force and an experimental fighter test pilot, at the Future Combat Air and Space Capabilities summit in London in May.

Hamilton told the crowd that in a simulated test of a drone powered by artificial intelligence, trained and incentivized to kill its targets, an operator instructed the drone in some cases not to kill its targets, and the drone responded by killing the operator.

The comments triggered deep concern over the use of AI in weapons and extensive online discussion. The US Air Force on Thursday denied that the test was conducted. The Royal Aeronautical Society said on Friday that Hamilton had retracted his comments and clarified that the rogue AI drone simulation was a hypothetical thought experiment.

The US government is still grappling with how to regulate artificial intelligence. AI ethicists and researchers have echoed concerns over the technology, contending that while there are ambitious objectives for it, such as potentially curing cancer, the technology is still far from delivering them. They point to evidence of existing harms, including the increased use of unreliable surveillance systems that misidentify Black and brown people, the spread of misinformation across numerous platforms, and the potential dangers of using nascent technology to power and operate weapons in crisis zones.

"You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI," Hamilton said in a speech in May.

While Hamilton contends the simulation did not actually happen, he maintains the thought experiment is still worth considering when navigating whether and how to use AI in weapons.

"This illustrates the real-world challenges posed by AI-powered capabilities and is why the Air Force is committed to the ethical development of AI," he said in a statement clarifying his original comments.

In a statement to Insider, US Air Force spokeswoman Ann Stefanek said the colonel's comments had been taken out of context.

"The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology," Stefanek said.