Scientifically Speaking | AI is changing how we think, but the human mind isn’t static
AI can improve efficiency and might reduce mental effort in some areas. But that’s not the same as proving long-term cognitive decline.
If AI diminishes our thinking abilities, the consequences will be profound. After all, the last thing we need is to accidentally underthink ourselves out of our jobs.

Will artificial intelligence make us lose our ability to think critically? As millions of knowledge workers hand over routine tasks to AI chatbots, this question has moved from the realm of speculative fiction to reality. Some fear we're outsourcing our way to intellectual obsolescence, one ChatGPT prompt at a time. Recently, an article I read warned of "atrophied" minds and diminished cognitive abilities. How accurate are these concerns?
Critical thinking is fundamental to how we solve problems, make decisions, and innovate. And unlike routine tasks that can be automated, critical thinking represents a uniquely human capability that AI cannot fully replicate, at least not yet. Though some would argue that with the emergence of reasoning models like OpenAI’s o1 and o3 and DeepSeek’s R1, the line is beginning to blur. But are these models actually reasoning, or just simulating it more convincingly? It depends on who you ask (and how you define reasoning itself). It is still early days, and AI’s role in complex thinking remains an open debate.
The most recent panic seems to stem from a Microsoft Research and Carnegie Mellon study titled "The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers". But before we unplug our AI assistants in fear, let's look at what the research tells us. The researchers surveyed 319 knowledge workers across various professions, collecting 936 real-world examples of AI use. This study represents one of the first serious attempts to analyse whether AI is enhancing our cognitive abilities or merely replacing them.
Media coverage of this research has triggered alarm bells. One article, "Study Finds That People Who Entrust Tasks to AI Are Losing Critical Thinking Skills," warned that workers are becoming "atrophied and unprepared."
If you only read the headlines, you’d think we’re one ChatGPT conversation away from intellectual ruin. But the study doesn’t actually show that our critical thinking abilities are deteriorating. Instead, it reveals that AI is reshaping cognitive effort, particularly in routine or lower-stakes tasks.
The researchers identified three major changes in how we work with AI. People are moving away from gathering information themselves and instead focusing on verifying AI-generated content. Direct problem-solving is giving way to integrating AI-suggested solutions. And there’s a shift from doing the work to ensuring that the work is done right.
But rather than eliminating critical thinking, these shifts suggest that human cognition is evolving alongside AI. And this evolution isn’t automatic; it depends on whether workers actively engage with AI outputs or passively accept them.
Workers who placed high trust in AI tools tended to engage in less verification of AI outputs, which is no surprise. It’s like having a highly competent intern or human assistant: after a while, you stop double-checking their work. But workers more confident in their own abilities often engaged more deeply with AI’s suggestions, applying rigorous critical thinking to evaluate and refine the results. These workers treated AI more like a creative partner than an infallible oracle.
Yes, AI can improve efficiency, and yes, it might reduce mental effort in some areas. But that’s not the same as proving long-term cognitive decline.
The study’s design (a survey capturing self-reported experiences) gives us valuable insights into how workers perceive their interactions with AI but also has limitations. It captures a snapshot of current behaviours rather than measuring changes over time and relies on workers' own assessment of their cognitive engagement. That means we cannot conclude from this study alone whether AI use leads to a lasting decline in critical thinking.
Consider a legal researcher using AI to analyse case law. Previously, they might have spent hours reading through cases to find relevant precedents. Now, AI can handle the initial search, but the lawyer’s critical thinking is still essential in evaluating AI's selections, identifying nuanced distinctions between cases, and constructing novel legal arguments. The cognitive work hasn’t diminished; it has shifted to higher-order analysis.
A similar shift can be seen in scientific research where AI helps process vast datasets, but human researchers are still required to interpret patterns, validate hypotheses, and ensure ethical considerations are met. This shift is also visible in medicine, where AI can flag potential diagnoses in medical imaging, but doctors must still exercise clinical judgment, especially in ambiguous cases where patient history and symptoms matter.
There are historical parallels that can help us navigate the current situation. When calculators became widespread, many feared the end of mathematical thinking. (Some of us still remember the teachers’ warning, "You won’t always have a calculator in your pocket!" Spoiler alert: we do, except they are now smartphones.)
Despite initial concerns, calculators didn’t destroy cognitive ability but instead freed up mental resources for tackling other complex challenges. The same happened with spell-checking, GPS navigation, and countless other cognitive tools. Time is finite, but the number of problems the human brain can tackle is not. Giving up a task doesn’t mean we’re giving up on thinking.
The relationship between human cognition and AI tools isn’t zero-sum either. Just as writing and mathematics augmented human memory and reasoning, AI has the potential to enhance rather than replace human cognitive capabilities. But AI lacks true understanding, intuition, and contextual judgment. It can reinforce biases present in training data. It generates plausible-sounding errors that require human oversight. And it struggles with complex ethical reasoning and moral decision-making.
AI is also shaping new forms of thinking. Workers are learning how to evaluate AI-generated outputs against human expertise, recognise when AI might introduce bias or miss crucial context, and identify the limits of AI’s reasoning. AI is pushing us to develop a new kind of vigilance, one in which we act as informed curators rather than passive recipients of information.
To truly understand AI’s impact on human cognition, we need more research: long-term studies tracking cognitive abilities over time, investigations into different approaches to human-AI collaboration, and comparative analyses across professions to see how different industries are adapting.
So, is AI making us stupid? The question is too simplistic. It depends on how we choose to use AI. AI is changing how we think, work, and solve problems, but human intelligence isn’t static either. As we continue to study and shape AI’s role in knowledge work, we will better understand how artificial and human intelligence evolve together.
Anirban Mahapatra is a scientist and author, most recently of the popular science book, When the Drugs Don’t Work: The Hidden Pandemic That Could End Medicine. The views expressed are personal.
