The Pentagon will employ tech giant Google’s Gemini AI system on its classified networks, technology news outlet The Information first reported this week. The arrangement follows similar agreements the Pentagon has forged with other artificial intelligence developers, including OpenAI and xAI.
Secretary of Defense Pete Hegseth has pushed for greater adoption of AI within the United States military, with the goal of creating an “AI-first warfighting force.”
Google’s AI technology has already been used on unclassified systems within the Department, but it will now move to classified systems. How Gemini might be employed there isn’t clear, but the Department has already adopted AI to analyze drone footage, resolve pay discrepancies, process intelligence, and provide targeting support.
According to The Information’s report, the deployment will require adjustments to Gemini’s safety settings and filters, but the contract states, “the parties agree that the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control.”
The Department has not commented on the use of Gemini AI.
“We believe that providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, represents a responsible approach to supporting national security,” a spokesperson for Google told Reuters.
Shifting Relationships
Google’s relationship with the Pentagon has been “inconsistent,” often marked by internal conflict within the tech firm.
In 2018, Google withdrew from a military AI effort dubbed “Project Maven” following a massive employee revolt. The company is now moving forward with a new partnership, even as the AI development community remains cautious about working with the Pentagon.
“Google’s classified agreement with the DoD marks a fundamental shift in the relationship between frontier AI labs and national security. By agreeing to the ‘any lawful government purpose’ clause and relinquishing its veto power over model filters, Google has effectively moved from a vendor of a finished product to a provider of raw military infrastructure,” explained John Carberry, solution sleuth at cybersecurity provider Xcape, Inc.
Carberry told ClearanceJobs that, for the broader security community, the deal sends a clear signal that the guardrails protecting commercial large language models (LLMs) are a policy choice, not a technical constant.
“As the Pentagon gains the right to modify safety settings for classified missions, the ‘safety’ of an LLM becomes a variable dial rather than a fixed standard,” Carberry added.
Public Relations Language Regarding Large Language Models
As noted, it is unclear exactly how Google’s Gemini AI technology will be employed on the Pentagon’s classified networks, but Jacob Krell, senior director of Secure AI Solutions & Cybersecurity at Suzu Labs, told ClearanceJobs that Google may not have much say in the matter.
Google may have put up so-called guardrails, including ethical commitments against autonomous killing and surveillance, but what those commitments mean in practice is unclear.
“The guardrails in this agreement are public relations language, not operational controls. Google cannot veto how the government uses the technology, and the Pentagon can request modifications to safety filters,” said Krell.
“The contract lives on a classified network where Google has no visibility into how the AI is deployed. Stating that AI ‘should not’ be used for mass surveillance or autonomous weapons without oversight, while simultaneously relinquishing the authority to enforce that position, is managing public perception. It is not a safeguard.”
The tech giants, especially in the AI space, are finding a way forward with the government. But as has been seen with other products, it is difficult to put up guardrails and enforce them without running up against national security needs.
The alternative is to opt out of the defense sector entirely, but that could mean being shut out of other opportunities across the federal government.
“The broader pattern is now complete,” Krell continued. “OpenAI signed. xAI signed. Google signed. Anthropic refused the same terms and was designated a national security supply chain risk by the Pentagon, a label historically reserved for foreign adversaries. The procurement environment is not asking AI companies to participate in national security. It is telling them the cost of refusal. Every commercially motivated AI lab absorbed that lesson the moment Anthropic was blacklisted.”
Krell suggested such an outcome was inevitable, explaining that the technology behind AI is too capable to remain outside classified military and intelligence systems.
“The question was never whether frontier AI would enter national security operations, but whether the companies building it would retain meaningful oversight once it did,” Krell added. “Google answered that question by removing its own weapons and surveillance pledge fourteen months before signing this deal. The destination was decided long before the contract was finalized.”
The AI Genie is Out of the Bottle
AI firms can opt out, but they risk being blacklisted, and another firm will simply step up. Likewise, potential adversaries, including China and Russia, may knock down the same guardrails that U.S. tech firms would like to see in place.
The AI genie is out of the bottle, and there is no way of getting it back in.
“For security practitioners and executives, this highlights a growing divergence: while enterprise AI remains shackled by rigid safety policies to mitigate corporate liability, military-grade deployments will prioritize operational utility and ‘mission planning’ over traditional alignment,” suggested Carberry. “This transition necessitates a new class of AI security – one focused on ‘mission-ready’ robustness rather than just conversational harm prevention.”
Carberry further told ClearanceJobs that defenders must prioritize securing the supply chain of these “unfiltered” models, as they are now tier-one targets for adversaries seeking to reverse-engineer the logic behind U.S. military decision-making.