The Defense Department (DOD) warned Anthropic on Tuesday that it could invoke the Defense Production Act (DPA), which gives the president broad authority to control domestic industries in the name of national defense, to use the AI firm's tool on its own terms.
The threat marks an escalation in the feud between the two parties. Negotiations appear to be at a standstill, with the Pentagon giving Anthropic until Friday to comply with its terms or face cancellation of a $200 million contract, the risk of being labeled a "supply chain risk" and a possible invocation of the DPA.
"It's the wrong purpose of the tool," Mark Dalton, senior policy director for technology and innovation at the R Street Institute, told The Hill. "The DPA exists for a capacity reason, like it's an industrial capacity policy, and to use it as leverage is, I think, irresponsible."
Anthropic and the Pentagon have been locked in tense negotiations in recent weeks over the company's AI usage policy, which bars its AI model Claude from being used to conduct mass surveillance or develop lethal autonomous weapons.
These two issues have become the company's red lines in the dispute. A source familiar with the negotiations told The Hill on Monday that Anthropic's resistance stemmed from concerns that AI systems are not reliable enough to make life-or-death decisions and that the technology significantly expands what is possible in domestic surveillance.
Meanwhile, the Pentagon has pushed for the company to accept language that allows for "all lawful uses."
On Wednesday night, the DOD sent its final offer to Anthropic, asking the company to allow the department to access Claude for "all lawful purposes," a senior Pentagon official told The Hill on Thursday. CBS News reported earlier on the offer.
The Hill has reached out to Anthropic for comment.
Sean Parnell, chief Pentagon spokesperson, noted Thursday that the department has "no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement."
"Here's what we're asking: Allow the Pentagon to use Anthropic's model for all lawful purposes," he wrote in a post on social platform X.
"This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. We will not let ANY company dictate the terms regarding how we make operational decisions."