Pentagon Used Anthropic's Claude AI in Maduro Raid, Sparking Ethical Concerns
The US military used Anthropic's Claude AI model in an operation to capture Maduro, raising questions about AI militarization and tech companies' ethical guidelines.

According to an exclusive Wall Street Journal report, the US Department of Defense used Anthropic's Claude artificial intelligence model in a military operation to capture former Venezuelan President Maduro. Last month's operation in Caracas included bombings of multiple locations and resulted in the capture of Maduro and his wife.
Notably, Anthropic's usage policy explicitly prohibits Claude from being used to promote violence, develop weapons, or conduct surveillance. The company's response was carefully noncommittal: "We cannot comment on whether Claude or any other AI model was used in any specific operation, classified or unclassified."
Some users on Reddit joked: "I wonder if their usage policy allows the government to kill people?" Others half-jokingly suggested connecting Claude to Counter-Strike 2 to get real-time tactical advice during gameplay.
A more practical question is how the Pentagon actually bypassed these restrictions. Industry insiders speculate that it's unlikely the military circumvented the safety barriers through clever prompt engineering; rather, they probably used a custom version of the model specifically designed for government use.
This isn't the first time Anthropic has clashed with the Department of Defense. Previous reports indicated that the company's $200 million defense contract was at risk due to disagreements over AI usage restrictions.
As military applications of AI become more widespread, the conflict between tech companies' ethical guidelines and government security needs will only grow more pronounced. Now that AI is beginning to play a role in actual combat operations, questions of accountability and ethical boundaries are more urgent than ever.
Some observers note that this is not just a technical issue but a policy one. If even base model suppliers are held responsible for end uses, should internet service providers, cloud services, or even search engines also be held accountable? Where exactly should this line be drawn?
Currently, neither Anthropic nor the Pentagon has revealed Claude's specific role in the operation. But one thing is certain: this won't be the last time AI appears on the battlefield.
Published: 2026-02-14 07:02