Within hours on Friday, the Pentagon blacklisted one AI company for refusing to drop its safety commitments on surveillance and autonomous weapons, then turned around and praised a competitor for signing a deal that supposedly preserved those exact same commitments.
This confused some people. Why would the Pentagon seek to destroy one company over the same terms it agreed to with its largest competitor just hours later?
There’s an answer though: the words in OpenAI’s contract likely don’t mean what most people think they mean.
This isn’t speculation about future abuse. It’s the documented operating procedure of the NSA for decades—a practice exposed repeatedly by whistleblowers, litigated in courts, and eventually confirmed in declassified documents.
It continually astonishes me that people spent so much time on the “red lines” promised by OpenAI and Anthropic — pure spectacle, amplified by mass media that hailed one company as a hero and the other as a villain — and never gave even a cursory glance to the terms both companies accepted, which were essentially the same.
How wrong I was. This wasn’t a case of one company accepting 99% of Trump’s surveillance demands and the other accepting 100%. The terms were identical.
For a little additional context, I found this breakdown helpful: OpenAI and Anthropic appear ethically identical — identically evil. The difference is cash money.