My theory on AI (and I'm taking a stand)
After 2 years of thinking about AI, this is my theory about my future as a professional in the knowledge sector.
Most current AIs are based on large language models (LLMs) that have been trained to estimate a conditional probability, p(token | context).
During inference, the token with the maximum probability is not always chosen; normally the model samples from the highest-probability region of the distribution, using top-p (nucleus sampling) and/or a temperature setting.
This controlled randomness allows for diversity and coherence and avoids repetition. And while the model is not obliged to return THE most probable token, it DOES return a sufficiently probable token, according to the decoding parameters and the alignment policy.
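To make those mechanics concrete, here is a minimal sketch of temperature plus top-p (nucleus) sampling in Python. The vocabulary and logit values are invented for illustration; real decoders do the same thing over tens of thousands of tokens.

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, top_p=0.9):
    rng = np.random.default_rng()
    # Temperature reshapes the distribution: lower = sharper, higher = flatter.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Top-p keeps the smallest set of tokens whose cumulative mass reaches top_p.
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    nucleus = order[: np.searchsorted(cumulative, top_p) + 1]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()
    # Sample a "sufficiently probable" token from that nucleus.
    return rng.choice(nucleus, p=nucleus_probs)

vocab = ["good", "fine", "solid", "brilliant", "awful"]   # toy vocabulary
logits = np.array([3.0, 2.5, 2.0, 0.2, 0.1])              # invented scores
print(vocab[sample_next_token(logits)])                    # prints one of the first three
```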

So far there's nothing new... but we've already skipped past the most relevant part: what it delivers is "sufficiently probable".
And this is the KEY!
The distribution over an LLM's vocabulary is discrete and heavily skewed, meaning that the further we move away from the probability peak, the less frequent the sequences we obtain become.
And this is where I make the most important reflection: in this area where AI "can do all my work", do I want the most probable result, or the excellent one?
[Technical aside you can skip if you're not interested: you might say that AI can get out of that most-probable region by adjusting sampling parameters like temperature or top-p, and that's true. But raising the temperature doesn't mean the model leans towards the "excellent" side; it leans towards both tails, which again makes it difficult to evaluate excellence if you don't have that level yourself.]
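A tiny numeric illustration of that aside, with invented logits for three hypothetical continuations labelled by quality: raising the temperature flattens the distribution and lifts both tails by the same amount, because the model only knows probability, not quality.

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    return probs / probs.sum()

labels = ["sloppy", "average", "excellent"]   # hypothetical quality labels, not real tokens
logits = np.array([1.0, 3.0, 1.0])            # the "average" continuation dominates

for t in (0.5, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(f"T={t}: " + ", ".join(f"{l}={p:.2f}" for l, p in zip(labels, probs)))

# T=0.5: sloppy=0.02, average=0.96, excellent=0.02
# T=2.0: sloppy=0.21, average=0.58, excellent=0.21  -> both tails grow equally
```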
And of course we're blown away when AI shows us or does things in areas where we ourselves are below that average (I've experienced this with programming). We're blown away when AI executes average actions (I've experienced this with CRO work).
And mind you, I'm not saying it can't reach the upper extreme... but I see an intrinsic problem in how the model is built: how do you evaluate whether something delivers its maximum value when it's designed to deliver something sufficiently probable?
And even worse: How do you make excellence a consistent response?
So my point is as follows: Use AI when:
- The most probable output is sufficient (e.g. writing an email without spelling mistakes) (small models)
- AI can support you in searching and exploring options (reasoning models)
Don't use it when:
- Details are important: read the complete article yourself if the details (and learning them) matter to you (that's what puts you above average)
- The message has an extremely well-defined tone (yours): write every word in your own hand
- Architecture and design, if you already know how you want to do it
I was very afraid during the last 2 and a half years... but I'm increasingly optimistic about the human position.
And I won't say "AI won't replace you, someone who uses AI will".
I'll simply say: Stay on the human side.