AI & Society · LumiChats Team · April 5, 2026 · 12 min read

MIT Found That 83% of ChatGPT Users Couldn't Remember What They'd Just Written. Then the Researcher Had to Clarify What the Study Did Not Show. Both Facts Matter.

In June 2025, MIT Media Lab researchers put EEG sensors on 54 people and found that those who used ChatGPT showed the weakest neural connectivity and the worst memory recall — 83% couldn't remember what they'd written. The headlines screamed 'AI is making us dumber.' Then lead researcher Nataliya Kosmyna publicly clarified that the study did not show that AI made participants 'stupid,' 'dumb,' or caused 'brain rot.' Both of those things are true. Here is what the research actually says, what it doesn't, and the specific way of using AI that protects your cognitive skills while still benefiting from everything AI offers.

⚡ The finding in one sentence: Passive AI use — letting AI do the thinking for you — reduces cognitive engagement and memory retention for the task at hand. Active AI use — using AI as a thinking partner while you remain cognitively engaged — appears to enhance output quality without the same cost. The study is real. The 'AI is rotting your brain' interpretation is wrong. The concern it raises is legitimate. Here is exactly what the difference means for how you work.

In June 2025, MIT Media Lab researcher Nataliya Kosmyna and her team published a pre-print titled 'Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.' They recruited 54 participants — mostly college students in Boston — and divided them into three groups: one wrote essays using ChatGPT, one used Google Search, and one wrote with no tools at all. Each participant wore a 32-channel EEG headset measuring brain activity, and each essay was scored by human teachers and an AI judge. The results: ChatGPT users showed the weakest neural connectivity, the lowest cognitive engagement, and the worst memory recall. When asked to reproduce what they had written, 83% of ChatGPT users could not. The brain-only group had the strongest neural activity and the best recall. The search engine group sat in the middle. The headlines wrote themselves: 'AI Is Making Us Dumber.' 'ChatGPT Is Rotting Your Brain.' 'MIT Proves AI Destroys Critical Thinking.' Then Kosmyna herself had to step in and clarify that the study showed nothing of the sort.

What the Study Actually Found — and What It Did Not

Kosmyna publicly stated that her study did not find that AI made participants 'stupid,' 'dumb,' or caused permanent damage to critical thinking or intelligence. What it found was more specific: when people outsource a cognitive task entirely to an AI, they engage less cognitively with it, and they retain less of what the AI produced. This is not surprising. If you hand a task to someone else entirely, you put less mental effort into it. What the study does not — and cannot — tell you is whether AI use in general reduces cognitive ability over time. That is a different question, and the study did not test it. AI researcher Ethan Mollick, one of the most respected voices on AI in education, described the mainstream coverage as a 'massive misinterpretation,' noting that 'It says something important about cheating with AI, but it doesn't tell us anything about LLM use making us dumber overall.'

The Studies That Surround It: A Fairer Picture

MIT 'Your Brain on ChatGPT' (June 2025)
What it found: ChatGPT users showed weaker neural connectivity and 83% couldn't recall what they wrote. When those same users switched to writing independently, their brain engagement dropped further — suggesting some dependency had formed.
What it actually means: Passive AI use (outsourcing the task) reduces engagement with that task. The concerning part: LLM-to-brain participants showed weaker performance than brain-only participants, suggesting the dependency effect is real for heavy passive users.

Harvard Study on AI and Motivation (2025)
What it found: Across 3,500+ participants, AI assistance made workers more productive but reduced intrinsic motivation. 'If employees consistently rely on AI for creative tasks, they risk losing the aspects of work that drive engagement, growth, and satisfaction.'
What it actually means: Productivity gains are real. Motivation losses are also real. The question is whether the productivity is worth the motivational cost for your specific role.

Microsoft + Carnegie Mellon on Critical Thinking (2025)
What it found: As trust in AI increases, critical evaluation decreases. Workers with higher self-confidence engage more critically with AI outputs.
What it actually means: The risk is not AI — it is uncritical acceptance of AI outputs. The solution is explicit verification habits, not AI avoidance.

AI Math Tutoring Study (2025)
What it found: Students using AI tutoring scored 48% better on practice problems — but 17% worse on actual exams when AI was unavailable.
What it actually means: This is the clearest real-world warning: AI-assisted practice without genuine cognitive engagement creates false mastery. You can get good scores while the underlying skill atrophies.

MIT Study — the finding most ignored in coverage
What it found: People who started by writing independently and then used ChatGPT showed HIGHER creativity and STRONGER arguments while retaining original thinking.
What it actually means: The sequence is the key variable: engage first, then use AI. The cognitive cost comes from starting with AI, not from using it at all.

The Three Gaps That Explain Everything

Marketing AI Institute's Mike Kaput articulated the framework that makes sense of all of this research. There are three gaps created by AI use that compound if you are not deliberate about addressing them:
  • The Verification Gap: AI generates output, but you must fact-check it. Most people skip verification unless they already suspect an error. The errors you do not suspect are the costly ones.
  • The Thinking Gap: Evaluating AI output critically requires cognitive capacity, and that capacity is not unlimited. Under time pressure, the temptation is to accept what the AI produced rather than engage with it.
  • The Confidence Gap: When you present AI-generated material without deeply engaging with it, you develop a false confidence that collapses the moment someone asks you to explain it without the AI.

These gaps do not mean AI is making you dumber. They mean passive AI use — without deliberate engagement — creates conditions for skill atrophy. The distinction is yours to make every time you open a prompt box. The tool does not make the choice. You do.

What Dario Amodei Does — And Why It Matters

Anthropic CEO Dario Amodei, who leads the company that makes Claude, described a personal rule that captures the entire debate in one observation. He still types out his own notes in every meeting rather than using an AI notetaker. 'If I just have the notetaker, there's less cognitive load,' he said. 'But that cognitive load is actually what embeds it in my memory networks.' The person who runs one of the world's most powerful AI companies understands that the cognitive friction of note-taking is not a bug — it is the mechanism of memory formation. He chooses to preserve that friction deliberately. That is the right frame for every knowledge worker making decisions about AI use.

The Practical Framework: How to Use AI Without Paying the Cognitive Price

  • Engage first, AI second — every time: The MIT study's most important buried finding is that people who wrote independently first and then used ChatGPT produced the best creative work — stronger arguments, higher originality, better recall. Spend 5-10 minutes engaging with any task before asking AI to assist. Write your initial thoughts, identify the questions you're trying to answer, form your first hypothesis. The AI then extends your thinking rather than replacing it.
  • Treat verification as non-negotiable, not optional: The Microsoft/Carnegie Mellon finding that high AI trust correlates with reduced critical evaluation is the most actionable piece of research in this entire space. Make a rule: for any AI-generated content that will be shared, published, or acted on, you verify the key claims, numbers, and logic before using it. Not because you distrust the AI — because verification is the cognitive engagement that keeps you from being fooled by a plausible-sounding error.
  • Know which tasks are worth outsourcing and which are not: There is a meaningful difference between outsourcing the formatting of a bibliography (low cognitive value, appropriate to outsource) and outsourcing the argument structure of an analysis (high cognitive value, the part you need to engage with). Map your work to these categories. Use AI heavily on the low-value-to-your-development tasks. Protect your cognitive engagement on the high-value ones.
  • The student test: The AI math tutoring study's 17% exam performance drop is the clearest warning for learners. If you use AI to complete assignments, you will score well on those assignments. You will perform worse when AI is unavailable — exams, job interviews, live client situations. For anything you are trying to learn, not just complete, the correct rule is: understand it yourself first, use AI to check and extend, never to replace the initial understanding.
  • Periodic AI-free zones: The researchers found that LLM-to-brain participants showed weakened performance even after switching back to independent work — suggesting dependency can develop. Build regular practices of doing cognitively demanding work without AI assistance. Writing, analysis, problem-solving, planning — not for the sake of output, but as maintenance of the cognitive capacity that gradually erodes if it is never exercised independently.

The Longer View: What History Actually Suggests

Calculator use became widespread in schools in the 1970s and 1980s. Arithmetic fluency declined. Computational speed — the ability to do mental multiplication quickly — atrophied in most people who went through calculator-enabled education. This is measurable and real. And it turned out that fast mental multiplication was less important to the overall trajectory of human intellectual capability than the educators who worried about it believed. The more interesting question about AI is the same one: what does the tool make possible that was not possible before, and is the gain worth the cognitive dependency it creates? The research so far is early, limited in scope, and frequently misrepresented. The MIT study is one specific finding about one specific task with 54 participants. What we can say accurately in April 2026 is this: the concern is legitimate, the 'AI is rotting your brain' headline is wrong, and deliberately cultivating active rather than passive AI use is both the prudent response to the concern and the correct way to get the most from the technology.

Pro Tip: The simplest rule that covers everything: before you ask an AI to generate, draft, or analyze something, spend 5 minutes on it yourself first. Write your initial take. Note your questions. Form a hypothesis. This 5-minute investment changes the entire nature of your interaction with the AI from passive outsourcing to active collaboration — and dramatically increases how much you retain and how critically you evaluate the output. It is not the only thing needed, but it is the single change that produces the biggest improvement in cognitive engagement for most AI users.


Ready to study smarter?

Try LumiChats for 82¢/day

40+ AI models including Claude, GPT-5.4, and Gemini. Smart Study Mode with source-cited answers. Pay only on days you use it.

Get Started — 82¢/day
