Anthropic Users Must Choose To Opt Out Or Share Chats For AI Training

  • August 28, 2025
Anthropic is changing its data usage policies and now requires all Claude users to decide by September 28 whether their conversations can be used to train its AI models. This marks a major shift: the company previously refrained from using consumer chats for training. Going forward, Anthropic intends to use both conversations and coding sessions to improve its systems, and it will extend data retention to as long as five years for users who do not opt out.

For years, Anthropic assured users that their prompts and outputs would be automatically deleted within 30 days, with only flagged or policy-violating content retained for up to two years. The jump from a 30-day deletion window to five-year retention signals a broader change in approach. Notably, the updates apply only to consumer services: Claude Free, Pro, Max, and Claude Code. Business-focused offerings such as Claude Gov, Claude for Work, Claude for Education, and API access are unaffected. In this, Anthropic mirrors OpenAI, which likewise exempts enterprise customers from its training policies.

In its communication, Anthropic frames the new policy as giving users a choice while improving safety and model accuracy. According to the company, conversations shared for training help its systems better detect harmful content and sharpen reasoning, coding, and analytical skills in future model iterations. Many industry observers, however, believe the real motivation is the need for vast amounts of high-quality conversational data. With competitors like OpenAI and Google racing ahead, access to millions of real user interactions gives Anthropic a valuable edge in refining its models.

The policy update also highlights wider tensions across the AI industry as companies struggle with questions of transparency, data usage, and privacy. OpenAI itself is dealing with a court order requiring it to retain all consumer ChatGPT conversations indefinitely, including deleted chats, due to a lawsuit brought by The New York Times and other publishers. OpenAI’s COO Brad Lightcap has called this requirement unnecessary and at odds with the company’s promises of user privacy. Like Anthropic, OpenAI shields its enterprise customers through zero-data-retention agreements, leaving consumer users most exposed to sweeping data policies.

Anthropic’s rollout of the update has drawn further criticism for how the choice is presented. New users are asked their preference during signup, but existing users see a pop-up headlined “Updates to Consumer Terms and Policies.” The screen features a large “Accept” button, while the smaller toggle below it, which grants permission for training, is set to “On” by default. Analysts argue this design could lead users to click “Accept” without realizing they are consenting to share their conversations for training.

Privacy experts have long warned that the complexity of AI systems makes meaningful user consent nearly impossible. In the United States, the Federal Trade Commission has cautioned AI companies against burying important policy shifts in fine print or hyperlinks, but its current level of oversight is unclear, raising doubts about whether stronger enforcement will follow.

Ultimately, Anthropic’s move underscores both the industry’s need for data and the growing challenge of maintaining trust with users. While the company insists the choice is up to its customers, the policy reflects the wider reality that in today’s AI race, user data remains one of the most valuable resources.

Source

Follow the SPIN IDG WhatsApp Channel for updates across the Smart Pakistan Insights Network covering all of Pakistan’s technology ecosystem. 

Related Topics
  • AI Training
  • Anthropic
  • Claude
  • Data Privacy
  • FTC
  • Google
  • OpenAI
  • Tech Policy
  • User Choice