I recently attended an interesting online talk by Ilya Gogin (Pearson’s Product Management Director). At the beginning of his talk he discussed cognitive offloading, a term I had not really paid attention to before. Gogin used the example of somebody driving with a GPS navigation system, repeating the same route over and over again, yet still dependent on the device and unable to form a mental map of those all too familiar roads. This is a perfect example of offloading cognitive tasks and of the negative repercussions that come with it. So, let’s define “cognitive offloading” as the act of delegating mental tasks to tools or devices. Humans have long used external memory aids such as notebooks, calendars, and calculators, so the practice is not entirely new. In fact, using Google to look up research papers for this article was itself a form of cognitive offloading.

John Sweller developed cognitive load theory, which holds that the human cognitive system has limited capacity and that reducing cognitive load can enhance learning and performance. This idea makes cognitive outsourcing an attractive way of freeing up cognitive resources for “more interesting” things: we delegate the tasks that strain our mental capacity. But it is a trade-off, because not engaging in those tasks carries a cost. It can make us, quite literally, worse at thinking. It is like taking the elevator every day to maximize your time at your desk, only to find that your leg muscles have wasted away.

So what will AI do to the human brain if we use it the wrong way? The Center for Strategic Corporate Foresight and Sustainability at the Swiss Business School (SBS) looked into this question in detail, carrying out surveys with 666 participants and in-depth interviews with 50 individuals. The goal was to explore the relationship between how often people use AI tools and their critical thinking abilities. The key findings revealed “a significant negative correlation between frequent AI tool usage and critical thinking abilities,” and that “younger participants exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants.” The study raises concerns about a workforce that is skilled at using AI to solve tasks but less equipped to handle novel situations that require original thought. It is as if AI becomes the analyst and we merely become the implementers.

The study’s data reveal a troubling trend: the more people use AI tools, the lower their critical thinking scores tend to be. The author defines critical thinking “as the ability to analyze, evaluate, and synthesize information to make reasoned decisions,” and points out that these are fundamental cognitive skills essential for academic and professional success. Without regular engagement in cognitive processes such as reflective thinking, decision-making, and problem-solving, we risk losing skills that are vital for the modern workplace.

In another survey of knowledge workers, carried out by Carnegie Mellon University and Microsoft Research, researchers also looked at how generative AI tools affect critical thinking. They recruited 319 knowledge workers who self-reported using GenAI tools at work at least once per week. Participants shared a total of 936 real-world examples of tasks for which they used GenAI and described how critical thinking played a role in those tasks. The study found that while generative AI tools reduce the perceived effort of critical thinking, most often when users have high confidence in the AI, this can lead to less critical engagement with tasks and potential over-reliance. The research revealed a shift in cognitive effort: from information gathering to verification, from problem-solving to AI response integration, and from task execution to task stewardship.

Examples of cognitive offloading abound in our fast-changing digital world. AI translation tools such as Google Translate may eliminate the need to learn words and phrases in a foreign language; this wonderful convenience may not only put translators out of business, but also result in us all learning fewer foreign languages. AI-assisted search engines and virtual assistants provide instant access to facts and answers: instead of recalling information or performing mental arithmetic, we can simply ask ChatGPT, shifting the work of remembering and problem-solving onto the ever-present digital systems in our phones and computers. We also rely constantly on recommendation systems to process data and suggest optimal choices. Spotify, Netflix, and company feed us endless streams of content based on our user data, all designed to save us the mental effort of searching for it ourselves. In workplaces, professionals use AI decision-support systems to analyze complex data or recommend actions. Automated assistants can process information far more quickly than the human brain, helping to make decisions in many complex areas of human endeavour. While this enhances efficiency and consistency, it also means that humans engage less in critical analysis themselves. These are all examples of our offloading cognitive work to machines.

The advantages of using AI are obvious. It handles mundane tasks so you can dedicate your cognitive effort to more meaningful, creative, or strategic work, making you more efficient and effective in the moment and letting you rapidly scale up your contributions. But these advantages come with caveats. Over-reliance on external storage of information can lead to forgetting and a loss of critical thought. The SBS study mentioned earlier showed that heavy users of AI, especially younger users, scored lower on tests of argument evaluation and problem-solving, suggesting that habitually offloading thinking to AI leaves people less practiced in those skills. The danger seems greater for younger generations growing up with this technology. Some psychologists describe a tendency toward cognitive laziness when we become too comfortable offloading every challenge to technology: we stop questioning information and begin to trust AI output without critical evaluation, a phenomenon known as automation bias. This has serious implications for a society increasingly dominated by misinformation and fake news.

But what happens when the technology is absent or fails? What happens to our AI superpowers then? How can we mitigate this risk? I think AI should augment human cognition, not replace it. By being aware of cognitive offloading, we can make conscious choices about when to rely on AI and when to use our own mental muscles. We need to employ the strengths of AI without letting it replace human reasoning and creativity.

Image by Wolfgang Gerth from Pixabay

Gogin, Ilya. “Help, I’m competing against AI: Human vs artificial intelligence in language teaching.” Online talk, Pearson Languages. (https://www.youtube.com/watch?v=NtlurJgS86s&t=2531s&ab_channel=PearsonLanguages)

Gerlich, Michael. “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking.” Societies 15, no. 1 (2025). (https://www.mdpi.com/2075-4698/15/1/6/pdf?version=1735907439)

Lee, Hao-Ping (Hank), Advait Sarkar, Lev Tankelevitch, Ian Drosos, Sean Rintel, Richard Banks, and Nicholas Wilson. “The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers.” In CHI Conference on Human Factors in Computing Systems (CHI ’25), Yokohama, Japan, April 26–May 1, 2025. New York: ACM. (https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf)
