San Francisco, CA – AI startup Anthropic has reportedly refused requests from federal law enforcement agencies to use its Claude AI models for surveillance tasks, a decision said to have "irked" officials and "deepened hostility" within the White House. The company's usage policies strictly prohibit applying its artificial intelligence to domestic surveillance, including tracking individuals without consent, facial recognition, and predictive policing. This stance places Anthropic at odds with the Trump administration, which champions American AI companies and expects their cooperation with government initiatives.
According to a report by Semafor, contractors working with federal law enforcement, including agencies like the FBI, Secret Service, and Immigration and Customs Enforcement (ICE), encountered roadblocks when attempting to deploy Claude for surveillance. Anthropic's policy explicitly states, "Do Not Use for Criminal Justice, Censorship, Surveillance, or Prohibited Law Enforcement Purposes," and outlines prohibitions against targeting or tracking individuals' physical location or emotional state. This firm position has led some White House officials to express concerns that Anthropic is making moral judgments about law enforcement operations.
While Anthropic offers its "Claude for Government" service to federal agencies for a nominal fee and collaborates with the Department of Defense on non-weapons applications, its refusal to permit surveillance use contrasts with other AI providers. Competitors such as OpenAI, by contrast, prohibit "unauthorized monitoring of individuals," wording that implies potential carve-outs for legally sanctioned surveillance. Administration officials have also complained that Anthropic's definition of "domestic surveillance" is vague and could be applied broadly or selectively.
Anthropic recently updated its Usage Policy to provide greater clarity, affirming that it continues to restrict surveillance, tracking, profiling, and biometric monitoring while supporting appropriate back-office and analytical use cases. The company has declined to comment directly on the specific incident. The ongoing tension highlights the ethical and operational challenges at the intersection of advanced AI and national security interests.