AI & Society · LumiChats Team · April 6, 2026 · 9 min read

You're Paying $20/Month. ChatGPT Still Trains on You.

A Stanford study confirmed all six major AI companies use your chat data by default. Claude quietly changed its terms in October 2025 and defaulted millions of users into 5-year data retention. A 2026 federal court ruled AI conversations have no legal confidentiality protection. Here's exactly what each platform does with what you type — and the three settings to change right now.

⚡ Quick Answer: Paying $20/month for ChatGPT Plus or Claude Pro does NOT protect your privacy by default. Both platforms train on your conversations unless you manually opt out. Claude's October 2025 policy change defaulted users who didn't respond to a prompt into 5-year data retention. A 2026 federal court ruled (United States v. Heppner) that AI conversations carry no legal confidentiality protection. Three settings to change today: (1) ChatGPT: Settings > Data Controls > Improve the model for everyone → Off. (2) Claude: Privacy Settings > Help Improve Claude → Off. (3) Gemini: Gemini Apps Activity → Off.

The Policy Change That Affected Millions and Made Almost No Headlines

In September and October 2025, Anthropic quietly updated its Consumer Terms and Privacy Policy for Claude. The change required every existing user to make a choice: allow their conversations to be used for model training, with data retained for up to five years, or opt out and continue with the existing 30-day retention period. Users who did not respond to the notification by October 8, 2025 were defaulted into consent. That default means millions of Claude users — people who never saw the notification, dismissed it without reading it, or simply didn't understand what they were agreeing to — are now sharing their conversations with Anthropic for up to five years. This is not a bug, and it is not unique to Anthropic. It is the business model.

Stanford researcher Jennifer King, a Privacy and Data Policy Fellow at Stanford HAI, studied the privacy policies of six major US AI companies — Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI — and reached a conclusion she described with unusual directness: 'Absolutely yes, if you share sensitive information in a dialogue with ChatGPT, Gemini, or other frontier models, it may be collected and used for training, even if it's in a separate file that you uploaded during the conversation.' The Stanford study found all six companies employ users' chat data by default to train their models. Some developers keep this information in their systems indefinitely. Most of this is disclosed — in privacy policies written in legalese that most users never read.

What Each Major Platform Actually Does With Your Data

| Platform / Tier | Default Training | Data Retention | How to Opt Out |
|---|---|---|---|
| ChatGPT Free | Yes — trains by default | Conversations retained while account is active; 30 days after deletion | Settings > Data Controls > Improve the model for everyone → Off |
| ChatGPT Plus ($20/mo) | Yes — same as Free. Paying does NOT change the default. | Same as Free tier | Same setting — must be changed manually. The upgrade buys you better models, not privacy. |
| ChatGPT Team / Enterprise | No — training prohibited by contract | Governed by Data Processing Agreement | Protected by default; no opt-out required. |
| Claude Free | Yes, if you accepted the October 2025 terms | 30 days (opted out) or up to 5 years (training enabled) | Privacy Settings > Help Improve Claude → Off. Use Incognito mode for sensitive conversations. |
| Claude Pro ($20/mo) | Yes — same as Free. Pro is treated identically to Free for data purposes. | 30 days (opted out) or up to 5 years (training enabled) | Same setting. Pro buys higher usage limits and model access, not data protection. |
| Claude Team / Enterprise | No — training prohibited by default | Governed by Commercial Terms | Protected by default. |
| Gemini Free / Advanced | Yes — trains by default | Up to 18 months for conversation data; human review possible for up to 3 years | Turn off Gemini Apps Activity in Gemini settings. Google advises against entering confidential information regardless. |
| Grok Premium ($30/mo) | Yes — aggressive training default, including scraping your X/Twitter posts | Not clearly specified | No clear consumer opt-out. Paying buys no default privacy protection. |

The Court Ruling That Changes How You Should Think About AI Conversations

In February 2026, United States v. Heppner established that conversations with AI assistants are not protected by attorney-client privilege and do not constitute work product. The court's reasoning: provider policies allow potential disclosure to governmental authorities and may permit use for model improvement. The court found no reasonable expectation of confidentiality. This applies across all providers — any AI conversation on a consumer plan is potentially discoverable in legal proceedings. Lawyers, accountants, therapists, and anyone else whose conversations with clients carry legal privilege have been explicitly warned: do not use consumer AI tiers for any professional communication that involves privileged information.

What You're Actually Risking When You Don't Opt Out

  • Work secrets in training data: If you paste confidential company information, client data, or strategic plans into ChatGPT Plus or Claude Pro without opting out, that information enters training pipelines. Once embedded in a model, it cannot be fully removed — even deleting the conversation leaves the model's parameters shaped by the content. A 2025 security incident at McDonald's exposed 64 million job applicants' data through an AI recruitment chatbot using the default password '123456.' Enterprise deployment security and consumer AI account security are different universes.
  • Medical and mental health disclosure: The 12% of US teenagers who use AI for emotional support, and the 1.2 million ChatGPT users who discuss suicide weekly (disclosed by OpenAI in October 2025), are sharing deeply sensitive health information on consumer plans with no clinical data protection. HIPAA does not apply to AI platforms. Data shared about your mental health, medical conditions, or treatment on consumer AI is unprotected by health privacy law.
  • Legal privilege destruction: If you are involved in a legal matter and use consumer AI to think through the case, draft communications, or seek information — the resulting conversations are not privileged. They can be subpoenaed. Law firms that allow attorneys to use ChatGPT Plus or Claude Pro for client work are exposing client privilege without realizing it.
  • Children's data: Anthropic says it does not allow users under 18 to create accounts but does not require age verification. OpenAI and Microsoft both acknowledge collecting data from minors in certain circumstances. The Stanford study found developers' practices 'vary' on removing children's input from model training — and that children cannot legally consent to this collection.
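The first risk above, confidential text entering a training pipeline, can be partly mitigated before a prompt ever reaches a chat window. Here is a minimal, purely illustrative pre-send redaction sketch in Python. The `redact` helper and its patterns are hypothetical examples, not any platform's API, and a real data-loss-prevention tool covers far more cases:

```python
import re

# Illustrative pre-send filter: redacts a few obvious sensitive patterns
# (emails, US SSNs, credit-card-like numbers) before text is pasted into
# a consumer AI chat. Not a complete DLP solution.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each matched pattern with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Client John Doe (john.doe@acme.com, SSN 123-45-6789) owes $4,200."
print(redact(prompt))
# -> Client John Doe ([EMAIL], SSN [SSN]) owes $4,200.
```

A filter like this is no substitute for opting out of training; it only reduces what leaks if you forget. The contractual protections of Team/Enterprise tiers discussed below remain the only real safeguard for client data.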

The Three Settings to Change Right Now (Under 5 Minutes)

The good news is that an opt-out exists on all three major platforms, and it takes less than five minutes to configure. The bad news: the settings are not easy to find, and the defaults were designed to maximize data collection, not to protect user privacy. Here is exactly where to go on each platform.

  • ChatGPT: Click your profile icon (bottom left) > Settings > Data Controls > 'Improve the model for everyone' > Toggle off. This prevents future conversations from being used in training. It does not delete data already collected. If you want to delete conversation history: same menu > Delete all chats. OpenAI takes up to 30 days to fully process deletion.
  • Claude: Click your avatar (top right on claude.ai) > Privacy Settings > 'Help Improve Claude' > Toggle off. For sensitive individual conversations without changing your global setting: use Incognito mode (the ghost icon in the top right of a new chat window). Incognito conversations are never used for training regardless of your global setting, and have a shorter retention window.
  • Gemini: In the Gemini app or gemini.google.com > Settings > Gemini Apps Activity > Turn off. Note: turning off activity means you lose conversation history. If you need history and privacy, Google advises using the enterprise Gemini Workspace tier, which has contractual data protection guarantees that the consumer tier does not.

If You Use AI at Work: The Setting That Actually Matters Most

If your employer reimburses your ChatGPT Plus or Claude Pro subscription for use with client work, they are unknowingly funding data exposure. Consumer 'Pro' accounts provide no contractual data protection. The only plans that genuinely protect business data are ChatGPT Team ($30/user/month) or Claude Team ($25/user/month) — both of which prohibit training on customer content by contract, not just by a toggle you might forget to flip. Organizations that have deployed consumer AI accounts at scale — even well-intentioned ones — are operating with a significant hidden risk that most legal and IT departments have not fully accounted for.

📚 Read Next

Understanding which AI handles your data most carefully requires testing them directly. At LumiChats, you can compare Claude Sonnet 4.6 and GPT-5.4 side by side and see exactly how each model responds — with privacy settings you control — before committing to either platform.


Ready to study smarter?

Try LumiChats for 82¢/day

40+ AI models including Claude, GPT-5.4, and Gemini. Smart Study Mode with source-cited answers. Pay only on days you use it.

