Claude now remembers you. Not in the vague 'context from this conversation' sense, but across conversations, across days and weeks, in a way that makes Claude progressively more useful the more you use it. Anthropic rolled out memory to all Claude users (free and paid tiers) in early March 2026. If you use Claude at claude.ai or through the Claude mobile app, memory is active by default right now. This guide explains exactly what Claude stores, what it does not store, how to see and manage everything in your memory, how it compares to ChatGPT's memory feature, and the prompts that make memory genuinely transform how you work and study.
What Claude Actually Stores in Memory
Claude's memory is not a transcript or conversation log. It is a set of derived facts and preferences that Claude extracts from your conversations and stores for future reference. Think of it less like a recording and more like the notes a good doctor takes between appointments — the key information that makes the next interaction more useful.
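Anthropic has not published the internal format of the memory store, but the "derived facts, not transcripts" idea is easy to picture as a small list of tagged notes. The sketch below is purely illustrative; the class names, categories, and fields are assumptions for the sake of the analogy, not Anthropic's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    category: str  # e.g. "preference", "project", "fact" (illustrative labels)
    content: str   # the derived fact itself, not the conversation it came from

@dataclass
class UserMemory:
    items: list[MemoryItem] = field(default_factory=list)

    def add(self, category: str, content: str) -> None:
        """Store one derived fact under a category."""
        self.items.append(MemoryItem(category, content))

    def summary(self) -> str:
        """Render the doctor's-notes view of everything stored."""
        return "\n".join(f"[{m.category}] {m.content}" for m in self.items)

mem = UserMemory()
mem.add("preference", "prefers concise answers with a TL;DR first")
mem.add("project", "building a SaaS product for restaurant inventory management")
print(mem.summary())
```

The point of the analogy: each entry is a short, self-contained note that survives the conversation that produced it, which is why the store stays small even after many sessions.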
- Personal preferences: Your preferred language (formal or casual), preferred response length (concise vs comprehensive), preferred formats (bullet points vs prose vs code blocks), and any explicit preferences you have expressed ('I prefer explanations without jargon', 'always give me the TL;DR first').
- Context about your work and study: Your programming languages, tech stack, frameworks you use, subjects you are studying, your year and major if you are a student, your job or field if you have shared it.
- Facts you have shared: Your name if you have used it, your location if relevant to your queries, specific ongoing projects you have mentioned ('I am building a SaaS product for restaurant inventory management').
- Working style preferences: Whether you want Claude to ask clarifying questions or just attempt the task, whether you want it to explain its reasoning or just give the output, whether you want it to point out potential problems in your approach or just execute the request.
- What Claude does NOT store: The content of individual conversations in detail. Sensitive information like passwords, bank details, or personal identification numbers (Claude is designed not to accept or store these). Information you shared in conversations before memory was enabled.
How to See Exactly What Claude Has Stored About You
- In Claude.ai (desktop/browser): Click your profile icon in the bottom-left corner of the interface. Select 'Memory' from the menu. You will see a list of all stored memories with the option to delete individual items or clear all memories at once.
- In the Claude mobile app (iOS/Android): Tap the three-line menu icon in the top-left. Go to Settings > Memory. The same list appears with individual delete and clear-all options.
- Ask Claude directly: Type 'What do you remember about me?' Claude will reply with everything currently stored in its memory. This is the fastest way to audit your memory without navigating menus.
- Edit via conversation: You can correct or update memories by telling Claude directly: 'I am no longer working on that restaurant project — I have switched to a healthcare app.' Claude will update its memory accordingly.
Claude Memory vs ChatGPT Memory: The Real Differences
| Feature | Claude Memory | ChatGPT Memory |
|---|---|---|
| Who gets it | All users — free and paid — as of March 2026 | All users — free and Plus — since 2024 |
| What it stores | Preferences, context, stated facts, working style | Preferences, context, stated facts, working style — similar scope |
| User control | View, delete individual items, or clear all from settings | View, edit, delete individual items, or clear all from settings |
| Privacy architecture | Memories stored server-side by Anthropic, not shared across users | Memories stored server-side by OpenAI, not shared across users |
| Memory during conversation | Claude explicitly tells you when it creates or uses a memory | ChatGPT is less explicit — memories are used silently |
| Opt-out | Can be disabled entirely in settings — all future conversations start fresh | Can be disabled entirely in settings |
| Temporary chat | Available — conversations in temporary mode do not create memories | Available — temporary chats bypass memory |
The Memory Prompts That Change How You Study and Work
The first conversation you have with Claude after enabling memory is the most valuable investment you can make. Use it to set up your memory intentionally rather than letting Claude infer your preferences piecemeal over many conversations.
- For students: 'I want you to remember the following about me as a student: I am in [year] studying [subject] at [type of institution]. My strongest areas are [X] and [Y]. I consistently struggle with [Z]. I learn best from concrete examples before abstract definitions. I am preparing for [specific exam or goal]. Please use this context for all future study sessions.' Claude will store all of this, and every future tutoring session will start from this baseline.
- For developers: 'Please store this about my work setup: I primarily use [language] with [framework] on [OS]. My current main project is [brief description]. I prefer code explanations that focus on the why, not just the what. When I ask for code, assume I want it production-ready with error handling unless I say otherwise.' This saves you from repeating the same context in every new coding conversation.
- For writers: 'Remember my writing voice: I write [formally/casually], I prefer [long/short] sentences, I avoid [jargon/passive voice/specific words I overuse]. My target reader is [audience]. When editing my work, always prioritise clarity over elegance.' Claude will apply this voice consistently across all future editing and writing assistance.
- Updating memory mid-project: 'I have finished the authentication module I mentioned before. The new focus is the payment integration using Razorpay for Indian users. Please update what you remember about my current project.' This keeps Claude's context current as your work evolves.
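All four prompts above share the same shape: a fixed skeleton with bracketed placeholders you fill in for your own situation. If you reuse them often, it can help to keep them as templates. A minimal sketch (the template text mirrors the developer prompt above; the helper function and its name are hypothetical, not part of any Claude tooling):

```python
# Hypothetical helper for filling the bracketed placeholders in a
# memory-priming prompt before pasting it into Claude.
DEVELOPER_TEMPLATE = (
    "Please store this about my work setup: I primarily use {language} "
    "with {framework} on {os}. My current main project is {project}. "
    "I prefer code explanations that focus on the why, not just the what."
)

def build_prompt(template: str, **fields: str) -> str:
    """Substitute placeholders; raises KeyError if one is left unfilled."""
    return template.format(**fields)

prompt = build_prompt(
    DEVELOPER_TEMPLATE,
    language="Python",
    framework="Django",
    os="Ubuntu",
    project="a SaaS product for restaurant inventory management",
)
print(prompt)
```

The `KeyError` on a missing field is a useful property here: a half-filled memory prompt teaches Claude the wrong baseline, so failing loudly is better than sending '[framework]' verbatim.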
Privacy: What Anthropic Does With Your Memory Data
Memory data is stored server-side by Anthropic and is subject to Anthropic's privacy policy. Key points:
- Your memories are private to your account and not shared with other users.
- Anthropic may use conversation data to improve Claude's models unless you opt out; memory content follows the same data usage policy as the rest of your conversations.
- You can delete all memories at any time, and deleting your Claude account deletes all associated memory data.
- For sensitive professional contexts (legal, medical, financial) the guidance is the same as for any AI tool: do not share genuinely confidential information, even in a memory-setting conversation.
Pro Tip: The most powerful memory use case that most users miss: project continuity. Before you end a working session on a complex project, tell Claude what to remember: 'Please store this: we are in the middle of building the recommendation engine. The current blocker is the cold-start problem for new users. We decided to use collaborative filtering with a content-based fallback. The next step is implementing the similarity matrix.' When you return tomorrow or next week, Claude picks up exactly where you left off — no re-explaining, no lost context.
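The handoff prompt above also has a repeatable structure: what you were working on, the blocker, the decision made, and the next step. A hypothetical end-of-session helper (the function and field names are illustrative, not an Anthropic API):

```python
# Illustrative helper that assembles the end-of-session "please store this"
# handoff from four structured fields.
def handoff_prompt(current_work: str, blocker: str, decision: str, next_step: str) -> str:
    return (
        f"Please store this: we are in the middle of {current_work}. "
        f"The current blocker is {blocker}. "
        f"We decided to {decision}. "
        f"The next step is {next_step}."
    )

print(handoff_prompt(
    current_work="building the recommendation engine",
    blocker="the cold-start problem for new users",
    decision="use collaborative filtering with a content-based fallback",
    next_step="implementing the similarity matrix",
))
```

Filling the same four fields at the end of every session is what makes the continuity reliable: the resulting memory always answers 'where were we, what is stuck, what did we decide, what comes next'.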