My Goldilocks Rule on AI in Intellectual Work

Posted by Verarius
13-10-2025

A couple of weeks ago we talked about an AI Kit for PhD students. That post was a lengthy answer to the question “How much does AI actually help you with the workload?” In the blink of an eye, this question sparked a follow-up: “And how much of your thesis will be written by ChatGPT?” So let's look closer at this: where is AI your closest confidant, and where should you refuse its help and run in the opposite direction, no matter how nicely and persistently it tries?

There are a few tasks where I find AI tools like ChatGPT or Claude indispensable. AI can be very helpful in finding the Swiss-cheese (Tilsiter) holes in my logic and argumentation. My favourite new sport (and form of procrastination) is to take the feedback from one tool and feed it to another, asking what the first one missed or got wrong. For me, this is a first level of feedback that often makes sense and saves time for both me and my supervisor. Some of the AI feedback will be spot on; some will make sense only from a different perspective, showing me that I didn’t make myself clear enough. And sometimes it forces me to think in a different direction and pushes me toward a new idea.

Then there is the topic of text polishing. MS Word can check my spelling and spot obvious mistakes, but when it comes to more complex grammar issues, I ask AI for help. So, summing up: I use AI as first-level support for correction and sparring, as well as for tasks that are neither taxing nor intellectually defining.

Now let’s flip the coin: what is a no-go, and why? For me, the red line is where it’s no longer about feedback and challenging my thinking, but about taking the creative work away from me. Some of the reasons are obvious, and the first one is ethics. Another lies in the very nature of intellectual work: I find it intrinsically motivating, and I like that shot of (expensive) dopamine when I manage to formulate an idea, when I have my little private a-ha! moment. Overcoming this type of challenge is incredibly rewarding in itself. Even if you find that debatable or call me a masochist, I think we can agree that brain work makes us cleverer, sharper, and intellectually stronger. All in all, it provides a gym for that little hamster in our heads, keeping its fur sleek, its muscle mass intact, and its life long – because it’s the very activity that keeps it going.

Even if certain things are done to achieve tangible goals – be it the next promotion or a pay raise – there’s an expectation that by gaining experience we’re becoming better thinkers and problem solvers. And of course, we hope that even if Alzheimer’s does pay us a visit one day, we can at least postpone that gruesome moment by staying intellectually fit. Hence, it would be a darn bad idea to give this experience and this ammo away. For here comes the crux: by offloading the challenge, we might be solving a short-term problem – or, more precisely, jumping the queue (submitting a concept on time) – but we’re creating a much bigger problem for our future selves, the ones who might not even remember our own names in 20 or 30 years. That, and making Idiocracy less of a dystopian satire and more of a reality. This is why I always ask the AI not to do a complete rewrite for me, but to give me feedback.

To stress the point: there is a captivating (albeit still nascent) research field studying how AI tools affect memory, cognition, and learning, often framed through the lens of my favourite new term – cognitive offloading. One recent study with a telling title, “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task,” illustrates this well. As the name suggests, participants were asked to write an essay. They were divided into three groups:

  • LLM / AI group – wrote essays with the assistance of ChatGPT or a similar model
  • Search engine / web group – used Internet search for support
  • Brain-only group – wrote essays without any external aid, relying solely on their own memory and knowledge

Unsurprisingly, the groups’ ability to recall their own output varied dramatically. The authors describe a kind of “cognitive debt” that accumulates when one offloads too much to AI – over time, reliance may weaken internal cognitive processes.

So for me, when interacting with ChatGPT, Claude or the like, it boils down to a simple question: will this prompt make me think harder and deeper, or will it take my thinking away?
