The Pentagon uses Anthropic's AI system, Claude, for its classified systems.
But Anthropic has two rules for the Pentagon: Claude will not be used in autonomous weapons or in mass surveillance of Americans.
But the Defense Department does not want to agree to those rules. It's not that the Pentagon necessarily wants to use Claude for those purposes; rather, it objects to company-imposed guardrails that would require it to first get permission from Anthropic in a national security situation.
Under Secretary of State Sarah B. Rogers argued that "the contractor can't have procedural carte blanche to cut the cord if there's a dispute."
This standoff has escalated into a showdown: The Pentagon has given Anthropic until 5 p.m. today to agree to its demands. If the company refuses, the Pentagon says it will effectively blacklist Anthropic or force it to comply, threatening to label the company a "supply chain risk."
To give you an idea of how tense this dispute has become: A top Pentagon official accused Anthropic CEO Dario Amodei on Thursday of having a "God complex" and being a "liar."
What's happening behind the scenes: OpenAI CEO Sam Altman told staff on Thursday evening that his company is working on a deal with the Pentagon that could solve the dispute, according to new reporting from The Wall Street Journal's Keach Hagey.
Related read from The Washington Post: 'The hypothetical nuclear attack that escalated the Pentagon's showdown with Anthropic'
Excerpt: "A defense official said the Pentagon's technology chief whittled the debate down to a life-and-death nuclear scenario at a meeting last month: If an intercontinental ballistic missile was launched at the United States, could the military use Anthropic's Claude AI system to help shoot it down? … Anthropic chief executive Dario Amodei's answer rankled the Pentagon, according to the official, who characterized the CEO's reply as: You could call us and we'd work it out."