OpenAI’s “compromise” with the Pentagon is what Anthropic feared
TL;DR (AI generated)

OpenAI recently announced a deal allowing the US military to use its technologies in classified settings, following the Pentagon's public reprimand of Anthropic. OpenAI emphasized that the agreement includes protections against autonomous weapons and mass surveillance. While the company says it embeds safety rules into the models it provides to the military, it remains unclear how those rules differ from the ones applied to ordinary users. Unlike Anthropic, which relies on specific prohibitions, OpenAI cites existing laws and policies as the basis for its work with the Pentagon. Open questions remain about whether this compromise will satisfy critical employees, and how the Pentagon will transition from Anthropic's AI model to OpenAI's amid escalating tensions in the Middle East.