The Pentagon has signed contracts with seven of the largest tech companies to implement Artificial Intelligence (AI) systems within the US military’s classified communications. The Department of Defense announced the new contracts on Friday, which will allow the US military to use AI to assist in making operational decisions on the battlefield.
The selected companies are Google, Amazon, Microsoft, Nvidia, Reflection, OpenAI, and SpaceX. Each of the seven has agreed to supply AI technologies that will help the military process information and make better operational decisions under the significant pressure of complex missions.
Anthropic is notably absent from these contracts because the company has been in a public disagreement and legal battle with the Trump administration over the ethics and safety of using AI in warfare.
Anthropic excluded after dispute over war-related AI use
Anthropic found itself excluded from the Pentagon’s contracts after demanding specific assurances. The company wanted guarantees that the military would not use its technology in fully autonomous weapons or for surveilling American citizens. Pete Hegseth, the Defense Secretary, took a firm stance, stating that Anthropic must authorize any lawful uses that the Pentagon deemed necessary.
The dispute escalated when President Donald Trump attempted to ban all federal agencies from using Anthropic’s Claude chatbot. Hegseth also sought to label the company as a supply chain risk, a designation typically reserved for protecting national security systems from foreign sabotage. Anthropic responded by filing a lawsuit against the administration.
OpenAI stepped in to fill the gap. The ChatGPT maker announced a deal with the Pentagon in March to effectively replace Anthropic’s Claude in classified digital environments. OpenAI announced on Friday that this latest agreement matches the terms it announced two months ago. The company stated that the people defending the United States should have access to the best technological tools available.
AI tools aim to speed targeting and logistics for military operations
The Pentagon has been rapidly accelerating its use of artificial intelligence over the past several years. According to a March report from the Brennan Center for Justice, AI technology can help the military reduce the time needed to identify and strike battlefield targets. The same tools can also organize weapons maintenance and manage supply lines more efficiently.
Emil Michael, the Pentagon’s chief technology officer, said it would have been irresponsible to rely on a single AI vendor. He explained that the Pentagon made sure it had multiple alternatives in case any vendor it had been working with chose not to support the Department’s vision for collaboration. Michael also emphasized that open-source AI models remain a priority for providing an “American alternative” to China’s rapidly advancing AI systems.
Military personnel are already using these AI capabilities through the Pentagon’s official platform called GenAI.mil. The Defense Department stated that warfighters, civilians, and contractors are now putting these tools to practical use. The technology is cutting many tasks from months down to just days.
Experts warn of automation bias and urge human oversight
Concerns about military use of Artificial Intelligence grew during Israel’s recent fighting in Gaza and Lebanon. Some U.S. technology firms reportedly supported Israel in locating and targeting enemy forces through the use of AI, operations that killed many innocent people.
The rise in civilian casualties has heightened concern about the use of these tools. Helen Toner, acting Executive Director of the Center for Security and Emerging Technology at Georgetown University, said that basic parameters have not been settled: how much human involvement is required, what degree of risk is acceptable, and how operators are trained.
Toner warned about a phenomenon called automation bias, where people tend to assume machines work better than they actually do. She stressed that operators need proper training to avoid over-trusting AI systems.
One of the contracted companies included language in its agreement requiring human oversight for any missions where AI acts autonomously or semi-autonomously. The same agreement also mandated that AI tools must operate consistently with constitutional rights and civil liberties.
The Pentagon has stated that emerging AI capabilities will equip warfighters with the tools necessary to operate with confidence and will provide advanced safeguards to protect the country from any threat. However, experts agree that AI works best as a helper for repetitive tasks, such as predicting helicopter maintenance needs or distinguishing civilian from military vehicles in drone surveillance footage.
Even the most sophisticated AI tools are only as secure as the networks they run on. With Russian military hackers hijacking home routers to spy on users worldwide, the threat of compromised infrastructure looms over every digital defense initiative. The Pentagon’s push for AI superiority must go hand in hand with aggressive efforts to secure the global routing and switching infrastructure that hostile actors are actively exploiting.