
MIT researchers measured your brain on AI. The results should change how you work.

Most people using AI are getting worse at thinking. A study from MIT’s Media Lab has the data to prove it. But one system, pairing Google’s Gemini with NotebookLM, is designed to do the opposite: amplify your intelligence rather than quietly erode it.

This is not another tool comparison article. It is a framework question: are you using AI as a crutch, or as a cognitive lever? The difference compounds over time, and the gap between the two groups is already measurable at the neural level.


The MIT Study You Cannot Ignore

Researchers at MIT’s Media Lab ran a four-month study measuring what actually happens to your brain when you rely on AI to write. Fifty-four participants were split into three groups: ChatGPT users, search engine users, and people using nothing but their own cognition.

The findings were unambiguous. ChatGPT users showed the lowest brain engagement across all 32 EEG-monitored brain regions, consistently underperformed on memory recall, and by the final sessions many were simply copy-pasting AI output. They produced clean, grammatically polished essays. They learned almost nothing from writing them. When those same participants were later asked to write without AI, the degradation persisted. The brain had already begun to disengage.

The researchers coined a specific term for what is happening: cognitive debt. The idea is precise. Every time you outsource a thinking task to an LLM rather than working through it yourself, you avoid the neural effort that would have built a capability. That debt accumulates silently. Your outputs remain high quality. Your capacity to produce them independently quietly atrophies.

Lead researcher Nataliya Kos’myna, who released the paper ahead of peer review out of a sense of urgency, put it plainly: “I am afraid there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ That would be absolutely bad and detrimental. Developing brains are at the highest risk.”

The study also surfaced something more nuanced. Participants who had spent three sessions doing brain-only work, then were switched to ChatGPT, used it significantly better. They asked sharper questions. They challenged outputs. They retained critical ownership. Strong foundational thinking made them better at using AI, not worse.

The implication is not “stop using AI.” It is: the order of operations matters enormously.


Two Modes of AI Use. Only One Builds You.

The problem is architectural, not motivational. Most AI workflows are designed around one question: how fast can I get an output? That framing guarantees passive consumption. You prompt, you receive, you move on. The thinking never happens.

The alternative architecture asks a different question: how do I use AI to extend and stress-test my own thinking? That framing forces engagement. You bring your own structure first. You use AI to interrogate it, fill gaps, surface contradictions, and generate material you then evaluate critically.

This is the distinction the Gemini and NotebookLM system is built around, and it is worth understanding precisely why these two tools, combined, create something neither achieves alone.


The System: What Gemini and NotebookLM Each Bring

Until recently, Google’s AI tools were powerful and weirdly siloed. NotebookLM was outstanding for source-grounded research but isolated from broader reasoning. Gemini was a capable general assistant but had no persistent project memory. In April 2025, Google’s Notebooks feature changed that: sources now sync bidirectionally between the two tools, and notebooks created in one appear automatically in the other.

Understanding the distinct character of each tool is essential to using the system well.

NotebookLM: Disciplined Source Intelligence

NotebookLM’s defining characteristic is that it will not hallucinate beyond your sources. That is not a limitation. It is the feature. When you upload documents, papers, transcripts, or notes and ask NotebookLM to analyse them, every response is grounded in exactly what you provided. It will tell you what your documents say, and stop there.

This discipline is genuinely hard to replicate in general-purpose chat. It eliminates one of the most dangerous failure modes in AI-assisted research: confident fabrication. NotebookLM generates audio overviews, study guides, FAQs, and briefing documents, all sourced directly from material you control. The Audio Overview feature, which produces a natural-sounding podcast-style dialogue from your sources, is particularly effective for high-density research comprehension.

Gemini with Notebooks: Dynamic Reasoning Over Persistent Context

Gemini adds what NotebookLM deliberately excludes: dynamic reasoning, live web access, and the ability to build outward from your existing material. Once a notebook is established, Gemini can draw on your saved chats, uploaded files, and prior work while also querying the open web. You can ask it to resume a project, extend an analysis, draft from existing notes, or cross-reference multiple notebooks simultaneously, something NotebookLM cannot do.

Gemini’s Canvas mode takes this further. It functions similarly to a document editor with AI capabilities embedded: quizzes, infographics, flashcards, structured drafts. Where NotebookLM’s Studio generates outputs and hands them to you, Canvas lets you interact with and reshape what is produced.

The Integration: One Knowledge Base, Two Reasoning Modes

The power of the combined system is that your research context no longer needs to be rebuilt for each tool. Add a source in Gemini and it surfaces in NotebookLM. Build a notebook in NotebookLM and Gemini can reference it for quick contextual questions without the overhead of opening a full research session. The same knowledge base supports both deep source-grounded analysis and dynamic creative synthesis.

You can combine multiple notebooks into a single Gemini chat session, enabling cross-project querying that neither tool supports independently. For anyone doing complex, multi-domain knowledge work, that is a non-trivial capability.


The Workflow That Actually Builds Intelligence

The system is only as valuable as how it is sequenced. Based on the principles the MIT study surfaces, and the capabilities each tool brings, the effective workflow looks like this:

Phase 1: Think First, Then Source

Before loading anything into NotebookLM, write down what you already know about the topic. Draft your current mental model. Note your uncertainties and the questions you need answered. This is the single step most practitioners skip, and it is the most cognitively important one. The MIT study’s most striking finding was that participants who did strong independent thinking first used AI significantly better when they finally engaged with it.

Your initial brain-only framing becomes the anchor that prevents passive drift into AI-generated output. It gives you something to test, challenge, and extend rather than something to receive and accept.

Phase 2: Build Your Source Base in NotebookLM

Load your primary sources: documents, research papers, technical specifications, transcripts, URLs. NotebookLM will stay grounded to exactly this material. Generate an Audio Overview to absorb the full research landscape in context, then use the FAQ and study guide features to surface the material’s structure. The goal at this stage is comprehension and challenge, not yet synthesis.

Ask NotebookLM to surface contradictions in your sources. Ask it to identify what your documents do not address. Use it to stress-test your Phase 1 assumptions against actual evidence.

Phase 3: Move to Gemini for Synthesis and Extension

Once your understanding of the source material is solid, shift to Gemini. This is where you build outward: draft documents, extend analysis with live web data, connect ideas across multiple notebooks, generate structured outputs. Because Gemini has access to your notebook context, it can reference your grounded research while also reasoning more expansively.

The critical discipline here is the same as in Phase 1: lead with your own reasoning. Tell Gemini what you think before asking what it thinks. Use prompts like “challenge my argument,” “identify what I’m missing,” and “what would a strong counterargument look like?” These force the AI into a role that sharpens your thinking rather than replacing it.
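The critique-first pattern described above can be made mechanical so you never skip it. The sketch below is illustrative only: `build_challenge_prompt` is a hypothetical helper, not part of any Google SDK, and the task wording is just one reasonable phrasing of the prompts mentioned in this section.

```python
# A minimal sketch of the "lead with your own reasoning" pattern.
# The helper front-loads your own position so the model is asked to
# critique your thinking rather than generate it for you.

CHALLENGE_TASKS = [
    "Challenge my argument point by point.",
    "Identify what I'm missing.",
    "What would a strong counterargument look like?",
]

def build_challenge_prompt(my_position: str, tasks=CHALLENGE_TASKS) -> str:
    """Compose a prompt that states your thinking first, then asks the
    model to stress-test it instead of replacing it."""
    numbered = "\n".join(f"{i}. {t}" for i, t in enumerate(tasks, 1))
    return (
        "Here is my current position, written before consulting you:\n\n"
        f"{my_position}\n\n"
        "Do not rewrite it. Instead:\n"
        f"{numbered}"
    )

prompt = build_challenge_prompt(
    "Source-grounded tools reduce hallucination risk more than "
    "better prompting does."
)
print(prompt)
```

Pasting the resulting prompt into Gemini (or any chat model) keeps you in the evaluator's seat: the model's first job is to attack your framing, not to supply one.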

Phase 4: Validate and Own the Output

The MIT study found that ChatGPT users often could not accurately quote their own work. That is a red flag worth taking seriously. Before finalising anything, test yourself: can you explain this from memory? Can you defend the key claims without referencing the AI output? If not, you have consumed rather than learned.

Ownership of the output, the kind that supports genuine expertise, requires the cognitive struggle the study participants who relied only on AI consistently avoided.


What This System Does Not Solve

This is not a frictionless productivity upgrade. It is worth being direct about the gaps.

The Gemini and NotebookLM integration is still two separate interfaces with separate navigation, separate feature sets, and some redundancy in how they surface capabilities. The sync is bidirectional but the tools do not feel unified yet. You will switch contexts manually.

The Notebooks feature in Gemini launched in April 2025 and is currently limited to paid Google One AI Premium subscribers. If you are already in the Google ecosystem for work, the cost-benefit case is straightforward. If you are not, there is setup overhead to consider.

NotebookLM’s source-grounded discipline is powerful, but it means the system is only as good as the sources you give it. Garbage in, garbage out still applies, and perhaps more acutely than with general-purpose chat, because the tool will not spontaneously correct for weak source selection.

The MIT study itself, while widely cited, is a preprint with a sample size of 54 and a narrow task (essay writing). The findings are directionally credible and EEG-grounded, but generalising them to all AI use cases or all professional contexts warrants caution. The researchers were explicit that the study’s scope is limited and that further research is needed, particularly in software engineering and coding workflows.


The Bigger Question: What Are You Optimising For?

The Gemini and NotebookLM system is compelling not primarily because of the technology, but because it forces a discipline question most AI users are not asking themselves explicitly.

If you are optimising for speed of output, there are faster approaches. Generic prompting in any major LLM will produce results faster than the phased workflow described above. But speed of output is a proxy metric. The actual question is: are you building expertise, or are you consuming it?

For senior practitioners, the cognitive debt problem is not hypothetical. It is already shaping how teams work. Engineers who have been in their discipline for ten or more years have deep pattern libraries built from years of hard problem solving. Routing all thinking through an LLM without engagement does not access that library. It bypasses it, and over time, it weakens it.

Practitioners who use these tools to amplify existing deep expertise will compound their advantage. Those who use them to avoid acquiring that expertise in the first place will find the ceiling arrives much sooner than expected.

The difference is workflow discipline. Not tool choice.


The Takeaway

The MIT study is a signal worth taking seriously, not as an argument against AI adoption, but as a structural argument for how adoption is designed. The Gemini and NotebookLM system, used with the sequencing described above, is one of the most practically well-considered AI knowledge workflows currently available.

The key steps:

  • Think first. Frame your knowledge and uncertainties before opening any AI tool.
  • Use NotebookLM for disciplined, source-grounded comprehension and analysis.
  • Use Gemini for synthesis, extension, and cross-project reasoning.
  • Always lead with your own position. Use AI to challenge it, not generate it.
  • Test your ownership of the output before you ship it.

The engineers who build the strongest practice with AI will not be those who use it most. They will be those who have thought hardest about the boundary between amplification and substitution, and built their workflow to stay on the right side of it.


If you found this useful, my book Vibe Coding: Build Cloud Infrastructure at the Speed of Thought goes deeper into AI-augmented workflows for platform engineers. Details at blog.ogunlana.net.

What does your current AI workflow look like? Are you prompting first, or thinking first? Drop your answer in the comments.
