Just over a year after President Trump welcomed AI firms into government, the White House's unprecedented reach for personal data has left some technology leaders at odds with the administration.
Anthropic and the Department of Defense (DOD) butted heads over the extent to which the company's AI tools could be used to conduct surveillance and compile information about U.S. citizens and residents — a red line for the company's CEO, Dario Amodei.
The dispute cost Anthropic its government contract and spurred a legal battle over the company's designation as a national security threat.
"Frontier AI fundamentally changes the surveillance calculus," David Bader, a professor at the New Jersey Institute of Technology, told The Hill. "Analyzing billions of data points to build profiles on millions of Americans used to be computationally impractical, but now it's trivial with AI, and the law hasn't caught up to that reality."
From the start of negotiations, Amodei said AI-driven mass surveillance is "incompatible" with democratic values, warning it presents "serious, novel risks to our fundamental liberties."
Anthropic, which has worked with the Pentagon since 2024 as a subcontractor of data analytics firm Palantir, pressed for specific restrictions on mass domestic surveillance, suggesting some uses are "outside the bounds" of what current technology can do "safely and reliably."
The DOD insisted on an "all lawful purposes" standard, and its leaders alleged Anthropic sought to "personally control" the U.S. military and jeopardize national security.
After the two sides failed to reach an agreement, President Trump ordered federal agencies to stop using Anthropic products, and Defense Secretary Pete Hegseth issued a rare supply chain risk designation for the company.
Oliver Stephenson, the associate director for AI and emerging technology policy at the think tank Federation of American Scientists, explained that data collected by the government can be fed into AI tools to produce "incredibly detailed inferences about people."
He pointed to recent research showing how large language models can be used to identify the authors of purportedly anonymous online posts, "matching what would take hours for a dedicated human investigator."
"It's not just data that's showing anonymous patterns of life," Stephenson added. "We have transitioned from a world in which the limitation used to be on collection, and is now on analysis capabilities."
Check out the full report at TheHill.com.