It comes as no surprise that the Pentagon is testing artificial intelligence to see whether it can be used in the event of a large-scale conflict.

The US is currently running experiments with five large language models (LLMs) in collaboration with other countries that have yet to be disclosed. LLMs are the technology that powers the AI tools many of us use daily, such as OpenAI’s ChatGPT and Google’s Bard.

It has been reported that one of the LLMs involved in these experiments is called “Donovan”, created by the San Francisco-based start-up Scale AI, although the Pentagon has not disclosed the other models it is working with as of writing. The experiments are expected to run until 26th July, when we should learn more about what they involved.

Donovan is a defence-focused AI platform.

The models were given classified information to generate responses to real-world situations, allegedly including potential conflicts with China.

US Air Force Colonel Matthew Strohmeyer told Bloomberg that the initial tests were very promising and highly successful, and that the system was remarkably fast at what it was set out to do.

[We are] “learning that this is possible for us to do”, Colonel Strohmeyer said, but added that “it’s not ready for primetime right now”.

In one of these tests, an AI completed a request within 10 minutes. This could be monumental for the US military and the Pentagon, as a request for information from a specific part of the military is well known to take staff hours or even days to fulfil.

Whenever a new technology is released into the world, the military follows it closely. We have already learnt of rumours of an AI ignoring instructions and overriding a mission during a simulated test by the US Air Force. Without a doubt, the LLMs the military is currently using are not the heavily restricted, safety-guarded models we use in our day-to-day lives.

We are still largely in the dark about how to safeguard humans against AI, and there is no universal agreement on what alignment should look like. Given these tests, it seems safe to assume that the systems the military has access to are far less concerned with such matters and are primarily focused on efficiency and achieving their objectives.

What is potentially worrying is that adding AI to the warfare equation could trigger an entire paradigm shift. Information that normally takes days or weeks to collect and process would be available within hours or minutes. How much does this change warfare and logistics?

Another entirely plausible scenario is that military commanders will no longer have to worry about emotion interfering with decision-making.

Are we prepared for warfare that is dehumanised beyond anything we have ever seen before? AI will always favour efficiency and the objective above all else, whereas a human may not.

Given the risks involved, it seems incredibly dangerous to give the military access to a technology that is not yet fully understood. Warfare might look bleak indeed once the element of human emotion no longer exists.

Are we prepared for a world where AI can make decisions that dictate such outcomes? And are we ready for the aftermath?