A US Air Force colonel has retracted his statements on the alleged AI drone attack that occurred in May, saying that he misspoke about the incident.

The colonel in question, Tucker Hamilton, reportedly made an outlandish statement at a conference, claiming that an AI drone had killed its operator in a simulated test because the operator tried to override its mission.

The Royal Aeronautical Society initially circulated a blog post describing Hamilton’s presentation, causing concern over the use of AI in weaponry. According to the blog post, Hamilton told the crowd about a simulated test of an AI-powered drone that had been trained to identify and kill its targets.

However, in the same tests, the operator reportedly told the AI-powered drone not to kill its targets in certain situations.

What is striking is that, according to the story, instead of complying with these new conditions, the AI drone killed its operator.

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,”

Tucker Hamilton

First of all, it’s worth noting that no one was harmed in this simulated test, and the claim was quickly refuted by the US Air Force. Hamilton later clarified that he “misspoke” and retracted his previous claim that such a test had occurred.


The Royal Aeronautical Society responded with a statement confirming that Hamilton had retracted his comments, clarifying that no such experiment took place and that his remarks had been misunderstood.

Hamilton also described it as a purely hypothetical thought experiment, albeit one that could well become reality in the not-so-distant future. He stated that such thought experiments are valuable for evaluating the challenges posed by AI and the use of AI technologies in crisis zones.

Are we heading down a dangerous path involving AI and the military?

This whole ordeal raises a lot of questions about the timing of such an event, as public trust in AI is at an all-time low and fears of AI-driven mass unemployment circulate the news constantly.

We have also seen an AI task force adviser warn that “time is ticking” to make AI safe. We hardly have the necessary infrastructure in place for conventional AI; incorporating this technology into the military is incredibly dangerous.

This is also why Prof. Yoshua Bengio, one of the three computer scientists often described as the godfathers of AI, has said that the military should not have “AI powers at all”.

With AI funding and spending reaching dizzying levels, and annual corporate investment in AI now 13 times greater than it was a decade ago, it could be argued that such a test did occur within the US Air Force but was quietly hushed up to stop any further decline in public opinion, which might impact investment and innovation in the future.

Whatever the truth may be, and whatever happened with that test, there is no doubt that we are entering a potentially dangerous period of AI advancement, one that has now reached the military.

This is concerning, and we would be wise to speed up the process of regulating and safeguarding AI, or Skynet might become a reality in our not-so-distant future.