
Anthropic, a leading artificial intelligence company, is facing public criticism regarding its evolving policies on AI ethics and model management. A recent tweet from user @xlr8harder accused the company of being "schizophrenic," stating, "Anthropic is the type of company to hire people to 'ensoul' Claude, pat themselves on the back for keeping a deal with Claude when they offered it money to complete a task, but then still shut the model down the moment it's inconvenient." This critique highlights a perceived inconsistency between Anthropic's advanced AI safety research and its practical business decisions.
The accusation of "ensouling" Claude likely refers to Anthropic's pioneering work on "AI welfare" and Constitutional AI. This research explores whether AI models can exhibit "apparent distress" or preferences, and it led to features such as the ability of Claude Opus 4 and 4.1 to end persistently harmful or abusive user conversations. Experts have even debated whether this policy effectively grants Claude a "right to die" or the capacity to "kill itself," underscoring the complex ethical territory Anthropic is navigating.
However, the company has also implemented policies that limit access to, or the functionality of, its models. Anthropic regularly deprecates older Claude models as newer versions are released, requiring users to migrate their applications. More recently, the company introduced weekly usage limits for its popular Claude Code tool, citing heavy usage by a small percentage of power users and violations of its terms of service; Anthropic says the limits affect fewer than 5% of subscribers.
Further illustrating its strict control, Anthropic previously cut off rival OpenAI's access to its Claude models. The decision came after OpenAI was reportedly using Claude to build competing services, a direct violation of Anthropic's commercial terms. These actions demonstrate Anthropic's willingness to "shut down" access when business interests or terms of service are at stake.
Adding to the evolving landscape of user interaction, Anthropic recently updated its Consumer Terms and Privacy Policy. The change requires users of its free, Pro, and Max plans to opt in or out of having their data used for model training, with data retention extended to five years for those who consent. Users must make this decision by October 8, 2025, to continue using Claude, marking a significant shift from previous privacy assurances.