The U.S. military used Anthropic's Claude AI model during the operation to capture Venezuela's Nicolás Maduro, two sources with knowledge of the situation told Axios. "Anthropic asked whether their software was used for the raid to capture Maduro, which caused real concerns across the Department of War indicating that they might not approve if it was," the official said. The Pentagon wants the AI giants to allow them to use their models in any scenario so long as they comply with the law. Axios could not confirm the precise role that Claude played in the operation to capture Maduro. The military has used Claude in the past to analyze satellite imagery or intelligence. The sources said Claude was used during the active operation, not just in preparations for it. Anthropic, which has positioned itself as the safety-first AI leader, is currently negotiating with the Pentagon around its terms of use. The company wants to ensure in particular that its technology is not used for the mass surveillance of Americans or to operate fully autonomous weapons.
Pentagon's use of Claude during Maduro raid sparks Anthropic feud
Here's something you might not have seen coming: during the operation to capture Nicolás Maduro, the Pentagon reportedly used Anthropic's AI model, Claude. According to a post by /u/Naurgul, two sources told Axios that Anthropic asked whether its software had been used in the raid, a question that caused serious concern inside the Department of War that the company might not approve. Now, here's where it gets interesting: Anthropic, known for its safety-first stance, is negotiating terms of use with the Pentagon. The company wants to make sure its technology isn't used for mass surveillance of Americans or to operate fully autonomous weapons. But here's the thing: while Axios reports that Claude was used during the active operation, not just in preparations for it, the precise role it played remains unconfirmed. What this all points to is a bigger question: how far will the military go in integrating cutting-edge AI, and at what cost to safety and ethics? These developments show just how deeply AI is reaching into battlefield decisions, and that's a conversation we all need to watch.
