
Sam Altman admits Pentagon deal was ‘sloppy’

OpenAI CEO Sam Altman describes the recent contract with the US military as “sloppy and opportunistic.” After a wave of ChatGPT uninstalls, the company is now backtracking and promising contractual safeguards against surveillance.

Altman admits mistakes in Pentagon deal

OpenAI CEO Sam Altman has called his company’s recent agreement with the US Department of Defense “opportunistic and sloppy.” In a statement, he responded to the massive criticism the AI company received after signing a contract with the Pentagon at short notice. The deal came just hours after rival Anthropic had declined to collaborate, citing ethical concerns. End-user reaction was overwhelmingly negative: according to market research data, uninstalls of the ChatGPT app in the US rose by 295 percent last Saturday compared to the previous day, while 1-star ratings in the app stores shot up by 775 percent. Many users switched to Anthropic’s competing product Claude, which subsequently topped the download charts in Germany.

The background to the dispute is Anthropic’s refusal to grant the Pentagon access to its AI models without strict guarantees against mass surveillance and use in autonomous weapon systems. The US government subsequently classified Anthropic as a “supply chain risk.” OpenAI moved quickly to fill the resulting gap, a step that was seen both internally and externally as a betrayal of the company’s own safety principles.

Contractual improvements

To smooth things over, Altman announced corrections to the contract. As CNBC reported, the agreement is to be supplemented with specific clauses intended to ensure that the AI systems may not be intentionally used for domestic surveillance of US citizens. In addition, the Pentagon has given assurances that intelligence agencies such as the NSA will not have access to OpenAI’s services.

Altman emphasized that OpenAI had tried to prevent an escalation between the industry and the department, which was renamed the “Department of War” under the Trump administration. “We were genuinely trying to de-escalate the situation and avoid a much worse outcome, but I think it just seemed opportunistic and sloppy,” Altman said. He also confirmed that OpenAI would develop technical safeguards that go beyond contractual language alone.

Internal criticism

Skepticism remains, however. Critics, and even OpenAI’s own employees such as researcher Leo Gao, dismissed the subsequent assurances as mere window dressing. The core problem lies in the definition of surveillance: while OpenAI wants to rule out direct surveillance, the analysis of commercially acquired data sets, such as movement data from fitness apps, remains a legal gray area.

OpenAI plans to deploy technical classifiers designed to detect whether a request from the military violates its usage guidelines. Anthropic had previously argued that such policies alone are not enough, since safety mechanisms are often disabled in classified networks.
