Our LLMs Can't Reason but That Doesn't Mean They Can't Simulate It

Posted on April 12, 2025 by Yµn ^…^ ƒ(x) aka. Yunus Emre Vurgun
Last updated: April 12, 2025

The moment I realized that a modern LLM or AI agent marketed as a "reasoning model" can not really reason, even if the title says so, my whole perspective and methodology changed. I now use them much more effectively, and they are actually REALLY useful if you know what you are doing.


I know AI companies claim to have "reasoning models" without giving us further context. This is marketing: you can't expect them to explain that it is a giant algorithmic process, because as a general user you would get bored quickly and easily confused. But just know that under the hood, this "reasoning" means the model is fed extra system prompts and generates prompts for itself internally, producing a chain-of-thought mechanism that keeps talking to itself again and again until it is satisfied with the outcome of the conversation it is having with itself.
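If you want a feel for what that self-talk loop looks like, here is a minimal Python sketch. The `complete()` function is a hypothetical stand-in for whatever LLM API you actually call, and the critique prompt and stopping test are my own illustration, not any vendor's real implementation:

```python
# Minimal sketch of a chain-of-thought loop, the kind of thing
# "reasoning" hides under the hood. complete() is a hypothetical
# placeholder for your real LLM API call.

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API call here")

def simulate_reasoning(task: str, max_rounds: int = 5) -> str:
    draft = complete(f"Task: {task}\nThink step by step, then answer.")
    for _ in range(max_rounds):
        # The model critiques its own previous output...
        critique = complete(
            f"Task: {task}\nDraft answer:\n{draft}\n"
            "List any mistakes or gaps. Reply DONE if there are none."
        )
        if "DONE" in critique:
            break  # ...and stops once it is satisfied with itself
        draft = complete(
            f"Task: {task}\nDraft:\n{draft}\nCritique:\n{critique}\n"
            "Rewrite the draft, fixing every point in the critique."
        )
    return draft
```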


This doesn't mean it is useless or unimpressive, or that you should belittle the bot. You really should be IMPRESSED! What they are doing is a brilliant idea. It just means it works very differently from how you would expect. If, as a general user, you can understand this difference, you will gain a tremendous amount of productivity from these huge LLM tools and may even be shocked at how "smart" they can act when you guide them correctly.


This is important for a general user for a couple of reasons you MUST know:


1. Don't expect the bot to understand you just because you told it the details some time ago in the same chat.


2. Don't assume it knows your intention because it seemed to understand your goal 5 minutes ago. 


3. Always remind the bot of the task every 3-4 prompts, as it will re-generate its "reasoning" context with more focus on the main goal (see the sketch after the example prompt below).


4. Keep your instructions persistent and following a pattern, such as: "Our goal is still XYZ. You are doing a good job except for ABC, and you should focus on that, since we are doing XYZ."


5. The chat session can get derailed without notice, and you can find yourself getting lectured on microbiology even though you are an automation engineer talking about production lines, or worse, even though you were talking about making the ultimate Italian pasta.


6. Always follow the flow of the session and try your best to act like a robot. This means that instead of typing something like "Hey, I told you, c'mon!! Why don't you understand me????",


you should prompt: 


"Hi [LLM], we’ve hit a wall, but you were on track earlier. Let’s refocus on our goal of [XYZ]. 

You correctly handled [AB], but [C] is incomplete because [specific issue].

 Please think step-by-step to fix C before moving to [D]. 

First, outline your plan to address C and achieve XYZ. Here’s the structure: 

# Fix [specific issue in C]. 

# Verify C aligns with [AB]. 

# Propose next steps toward [XYZ]. 

# Stop after finishing [D]. 

Now [LLM], share your plan with me, and I’ll provide feedback before we proceed."
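To make point 3 concrete, here is a rough sketch of how you could automate the reminder if you drive the model through an API. `chat()` is a hypothetical placeholder, and `GOAL` and `REMIND_EVERY` are illustrative values of my own, not magic numbers:

```python
# A sketch of point 3: re-inject the goal every few turns so the
# model's context stays anchored. chat() is hypothetical; swap in
# your own API call.

GOAL = "Our goal is still XYZ. Stay focused on it."
REMIND_EVERY = 3  # re-state the goal every 3 user prompts

def chat(messages: list[dict]) -> str:
    raise NotImplementedError("plug in your LLM API call here")

def run_session(user_prompts: list[str]) -> None:
    messages = [{"role": "system", "content": GOAL}]
    for i, prompt in enumerate(user_prompts, start=1):
        if i % REMIND_EVERY == 0:
            # Periodic reminder, so the "reasoning" context regenerates
            # around the main goal instead of drifting.
            prompt = f"{GOAL}\n\n{prompt}"
        messages.append({"role": "user", "content": prompt})
        reply = chat(messages)
        messages.append({"role": "assistant", "content": reply})
        print(reply)
```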


It is worth noting that not all LLMs work the same, and not all of them will process your inputs the way another would. Some LLMs understand better with structured inputs, such as using symbols to distinguish sections.


One example I can give from experience is something like this: 


“Hey [LLM], I am giving you a structured step-by-step guide to follow. Please respond carefully in the requested format:

 <instruction> 

Your goal is to respond to me with a short text in Japanese.

<context>

 You will write about the landscape of Tokyo.

 <context_detail>

 Focus on why the sunset in Tokyo is the way it is regarding the landscape.

</context_detail>

 </context> 

</instruction> 

The above instruction is important. Now respond to me.”
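If you build prompts like this often, a tiny helper saves typing. This is a sketch assuming nothing beyond standard Python; the tag names mirror the example above and are just my own convention, not something LLMs require:

```python
# A small helper for building tag-structured prompts like the
# example above. Nesting calls to tag() produces nested sections.

def tag(name: str, *children: str) -> str:
    """Wrap children in <name>...</name>, one per line."""
    body = "\n".join(children)
    return f"<{name}>\n{body}\n</{name}>"

prompt = (
    "Hey [LLM], I am giving you a structured step-by-step guide "
    "to follow. Please respond carefully in the requested format:\n"
    + tag(
        "instruction",
        "Your goal is to respond to me with a short text in Japanese.",
        tag(
            "context",
            "You will write about the landscape of Tokyo.",
            tag(
                "context_detail",
                "Focus on why the sunset in Tokyo is the way it is "
                "regarding the landscape.",
            ),
        ),
    )
    + "\nThe above instruction is important. Now respond to me."
)
print(prompt)
```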


One can say this is too much detail and that a smart LLM already understands Tokyo and the task’s inner workings. I argue against that, as most of the time a task has more depth than a simple landscape prompt.


Your XYZ goal can be something as scary as building a database structure proposal that must not violate certain global security and engineering standards while mimicking a custom structure that is hard to explain and can only be conveyed step by step over a long session.


When you look at it this way, a structured approach to LLM prompting becomes a smart move, not over-engineering.