Anthropic is changing its data usage policies and now requires all Claude users to decide by September 28 whether their conversations can be used to train its AI models. This marks a major shift: the company previously refrained from using consumer chats for training. Going forward, Anthropic intends to use both conversations and coding sessions to improve its systems, and it is extending data retention to as long as five years for users who do not opt out.
For years, Anthropic assured users that their prompts and outputs would be automatically deleted within 30 days, with only flagged or policy-violating content retained for up to two years. The jump to a five-year retention window signals a broader change in approach. Notably, these updates apply specifically to consumer services such as Claude Free, Pro, Max, and Claude Code. Business-focused offerings, including Claude Gov, Claude for Work, Claude for Education, and API access, remain unaffected. In this respect, Anthropic is following OpenAI, which likewise exempts enterprise customers from having their data used for training.
In its communication, Anthropic has framed the new policy as a matter of user choice that also improves safety and model quality. According to the company, conversations shared for training help its systems better detect harmful content and strengthen reasoning, coding, and analytical skills in future model iterations. Many industry observers, however, believe the real motivation is the need for vast amounts of high-quality conversational data. With competitors like OpenAI and Google racing ahead, access to millions of real user interactions gives Anthropic a valuable edge in refining its models.
The policy update also highlights wider tensions across the AI industry as companies struggle with questions of transparency, data usage, and privacy. OpenAI itself is dealing with a court order requiring it to retain all consumer ChatGPT conversations indefinitely, including deleted chats, due to a lawsuit brought by The New York Times and other publishers. OpenAI’s COO Brad Lightcap has called this requirement unnecessary and at odds with the company’s promises of user privacy. Like Anthropic, OpenAI shields its enterprise customers through zero-data-retention agreements, leaving consumer users most exposed to sweeping data policies.
Anthropic’s rollout of the change has also drawn criticism for how it is presented to users. New users will be asked their preference during signup, but existing users are shown a pop-up headlined “Updates to Consumer Terms and Policies.” The screen features a large “Accept” button, while the smaller toggle that permits training sits below it and is switched on by default. Analysts argue this design could lead users to click accept without realizing they are consenting to share their conversations for training.
Privacy experts have long warned that the complexity of AI systems makes meaningful user consent nearly impossible. Regulators in the United States, including the Federal Trade Commission, have already cautioned AI companies against hiding important policy shifts in fine print or hyperlinks. Although the FTC has taken such positions in the past, its current level of oversight remains unclear, raising doubts about whether stronger enforcement will follow.
Ultimately, Anthropic’s move underscores both the industry’s need for data and the growing challenge of maintaining trust with users. While the company insists the choice is up to its customers, the policy reflects the wider reality that in today’s AI race, user data remains one of the most valuable resources.