Claude won't be allowed to engage in mass surveillance or power fully autonomous weapons — Anthropic refuses to lower AI guardrails for the Pentagon
TL;DR
Anthropic has refused the Pentagon's request to loosen the guardrails on its AI system, Claude. The company cited concerns about mass surveillance and fully autonomous weapons, saying it cannot in good conscience comply with the Department of Defense's demands: AI-led surveillance threatens individual liberty, and fully autonomous weapons lack human-like judgment. Anthropic offered instead to improve Claude's reliability for autonomous systems, but the DoD declined. The consequences of refusing include cancellation of a $200 million contract and potential designation as a supply-chain risk by the Defense Secretary.