Perfectly Transparent
OpenAI's contract with the DoD may already permit mass surveillance and autonomous weapons.
Edit: If you haven’t already, you should probably just read this instead:
If you follow tech and you haven’t been living under a rock, you’ve probably already heard this story, but I feel that mainstream media coverage has buried the lede. Anthropic imposed straightforward prohibitions on Claude’s use by the DoD. OpenAI claimed to have the same red lines, but permits the DoD to use its AI for “all lawful purposes.”
Collecting and analyzing “commercially available information,” or CAI, is straightforwardly considered lawful, despite giving detailed insight into individuals’ personal lives. Additionally, developing and deploying lethal autonomous weapons systems (LAWS) is only partially regulated by a DoD directive, not fully prohibited by US law. Therefore, it seems likely to me that the DoD intends to use OpenAI’s technology to analyze CAI, and also has the option to use it for LAWS.
The rest of the post expands on this; if you’re already up to date, I suggest skipping to the “Contract Wording” section.
Background
The Department of Defense had access to Anthropic’s Claude for use in classified applications through Palantir. They apparently found it useful, using it for the mission to capture Maduro. However, they became frustrated with Anthropic’s restrictions on its use. Anthropic offered to relax some restrictions but affirmed two:
1. No fully autonomous weapons
2. No mass surveillance of US citizens
The DoD demanded permission to use Claude for “all lawful purposes.” They threatened to designate Anthropic as a supply chain risk, jeopardizing Anthropic’s other customer contracts. They also threatened to use the Defense Production Act (DPA) to force Anthropic to give them access to Claude. Taken together, these threats are contradictory: Claude cannot be so critical to national defense that the DPA is needed to compel access while Anthropic is simultaneously a supply chain risk.
Anthropic did not back down, posting their response on Thursday. Defense Secretary Pete Hegseth announced that the Pentagon would designate Anthropic as a supply chain risk as threatened. Additionally, Trump announced that the rest of the Federal Government should phase out Claude within six months.
Last night, OpenAI announced that they were signing a contract to step into Anthropic’s shoes. The kicker is that they claim to be enforcing the same “red lines” as Anthropic. So, did the DoD cancel the contract just to spite Anthropic, or did OpenAI give them something Anthropic didn’t?
OpenAI’s contract
The gist seems to be that Anthropic absolutely forbade these things, whereas OpenAI forbids them only insofar as they are already illegal. Both DoD leaks and OpenAI’s contract support this interpretation. As established above, both CAI analysis and autonomous weapons are already legal.
Up until the last minute, the DoD was willing to compromise with Anthropic if it “allow[ed] the collection or analysis of data on Americans, from geolocation to web browsing data to personal financial information purchased from data brokers.”
Contract Wording
Sam Altman and OpenAI’s own announcements are implying as hard as they possibly can that OpenAI will also prohibit these things, without actually saying so.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force... The DoW agrees with these principles ... and we put them into our agreement.” They’re in the agreement, but in what form?
As if in response to skeptical takes on Hacker News, today OpenAI posted another statement where they re-affirmed, in the strongest possible terms, their commitment to not-quite prohibiting these applications. They helpfully include the specific language from the contract (emphasis mine):
The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.
For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
This wording repeatedly qualifies OpenAI’s prohibitions as dependent on existing restrictions: anything not already forbidden is permitted. As mentioned previously, collecting and analyzing CAI is already permitted. OpenAI’s denial that their tech could be used for a “social credit” system seems aimed at readers who notice this gap. However, the contract only prohibits automated decisions; AI could still be used to identify and track citizens, so long as any final actions were approved by humans.
Additionally, autonomous weapons systems are only partially restricted by DoD Directive 3000.09. The language in the contract itself refers to 3000.09 but indicates only that it requires rigorous testing for LAWS, not that it completely forbids them.
OpenAI’s statement also includes lots of language about how the contract will be enforced. Obviously, this is meaningless if the contract itself does not forbid anything.
What about the FAQ? Two entries appear to affirm the prohibitions without qualification.
Will this deal enable the Department of War to use OpenAI models to power autonomous weapons?
No. Based on our safety stack, our cloud-only deployment, the contract language, and existing laws, regulation and policy, we are confident that this cannot happen. We will also have OpenAI personnel in the loop for additional assurance.
Will this deal enable the Department of War to use OpenAI models to conduct mass surveillance on U.S. persons?
No. Based on our safety stack, the contract language, and existing laws that heavily restrict DoW from domestic surveillance, we are confident that this cannot happen. We will also have OpenAI personnel in the loop for additional assurance.
[...]
What if the government just changes the law or existing DoW policies?
Our contract explicitly references the surveillance and autonomous weapons laws and policies as they exist today, so that even if those laws or policies change in the future, use of our systems must still remain aligned with the current standards reflected in the agreement.
I’m somewhat uncertain about this; on its face, it reads as a stronger commitment than the previous ones. If it is, it seems odd that it’s buried in the FAQ. More importantly, an FAQ is not legally binding; what matters is the contract, which doesn’t inspire faith. OpenAI may also be using a narrow, technical definition of surveillance that excludes CAI. I don’t see a similar loophole in the commitment against autonomous weapons, but I am not a lawyer.
Conclusion
The wording of OpenAI’s contract with the DoD only prohibits mass surveillance and autonomous weapons contingent on other laws and regulations. The US government and intelligence agencies are already allowed to use commercially available information such as location, web browsing, and financial information, and have been collecting it for years. Autonomous weapons (LAWS) are also not firmly prohibited by any existing laws or regulations.
To me, the conclusion seems clear. The Pentagon ended its contract with Anthropic over Anthropic’s refusal to process CAI, and possibly its refusal to support autonomous weapons. It happily signed a new contract with OpenAI because, despite OpenAI’s claims, the contract actually permits those uses.
