
LLMs will hallucinate forever – here is what that means for your AI strategy

Published September 12, 2025 in Artificial Intelligence • 6 min read

OpenAI’s recently published paper marks the end of the ‘Oracle’ AI myth, but a 94-year-old mathematical paradox can teach businesses why this is good – and how to build an AI strategy for 2026 and beyond, argues José Parra Moyano.

Have you ever trusted a spreadsheet only to find a critical error buried in a formula? Now imagine that error could invent sources, fabricate data, and present them with unshakable confidence. This is the specter of AI “hallucination,” and a pivotal OpenAI paper confirms it is not a bug to be fixed, but a fundamental feature of how large language models operate.

“Why Language Models Hallucinate” does an excellent job explaining the technical how: these models are probabilistic storytellers, not databases; they predict the next most plausible word, not the absolute truth. But to build a truly resilient AI strategy, leaders must look beyond the engineering and understand the why. The answer lies not in computer science, but in a mathematical paradox that places a hard ceiling on what any intelligence, human or artificial, can ever achieve.
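To make “probabilistic storyteller” concrete, here is a minimal sketch of next-token prediction (our illustration, with invented toy probabilities – not OpenAI’s code or real model output): the model weighs candidate continuations by plausibility and samples one, and no step anywhere checks the chosen words against reality.

```python
import random

# Toy next-token prediction (probabilities are invented for illustration).
# Given the prompt "The company's 2024 revenue was", a language model
# assigns a plausibility score to every candidate continuation.
next_token_probs = {
    "$4.2 billion": 0.34,   # plausible-looking figure
    "$3.8 billion": 0.27,   # an equally plausible-looking figure
    "$5.1 billion": 0.21,
    "not disclosed": 0.18,  # "I don't know" is just another continuation
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick a continuation in proportion to its plausibility.

    Nothing here consults a database or verifies a fact: the model
    optimizes for 'sounds right given the data', not 'is right'.
    """
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The company's 2024 revenue was", sample_next_token(next_token_probs))
```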

That paradox comes from Gödel’s incompleteness theorems, and it has direct implications for the dream of a perfect, all-knowing AI.


What business leaders can learn from Gödel’s paradox about AI

In 1931, the mathematician Kurt Gödel dropped a logic bomb on the world of mathematics. In simple terms, he proved that in any sufficiently complex system of rules – like a set of axioms for arithmetic, or the entire internet used to train an AI – there will always be true statements that cannot be proven from within that system. The system can be consistent, meaning free of contradictions, or it can be complete, meaning able to prove every true statement, but it can never be both.
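For readers who want the formal version, a standard textbook rendering of Gödel’s first incompleteness theorem is sketched below (this formalization is our addition, not part of the original argument):

```latex
% Gödel's first incompleteness theorem (standard modern statement)
\begin{theorem}[G\"odel, 1931]
Let $F$ be a consistent, effectively axiomatizable formal system
capable of expressing elementary arithmetic. Then there exists a
sentence $G_F$ in the language of $F$ such that
\[
  F \nvdash G_F
  \quad\text{and}\quad
  F \nvdash \neg G_F,
\]
i.e., $G_F$ is true in the standard model of arithmetic, yet it can
be neither proved nor refuted inside $F$.
\end{theorem}
```

In the rulebook analogy that follows, $F$ is the rulebook and $G_F$ is the real-world scenario it can never settle from within.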

Think of it like a company’s rulebook. You can have a perfectly consistent rulebook with no contradictions. But there will always be real-world scenarios, such as a disruptive new technology or an unprecedented market event, that the rulebook does not cover and cannot adjudicate. To handle it, you must go outside the rules, using human judgment, creativity, or ethics. The rulebook is consistent but incomplete.

Alternatively, you could write a rulebook so exhaustive that it tries to dictate a response for every conceivable situation. But in doing so, you would inevitably create contradictions where one rule conflicts with another. This rulebook would be complete but inconsistent.

Gödel proved that you cannot win. This is not a limitation of our current knowledge; it is a law of any logical system rich enough to refer to itself.


The AI’s inescapable rulebook

Now let’s apply this to your AI. Its rulebook is the vast dataset it was trained on. It has ingested a significant portion of human knowledge, but that knowledge is itself a finite, inconsistent, and incomplete system. It contains contradictions, falsehoods, and, most importantly, gaps.

An AI, operating purely within its training data, is like a manager who refuses to think outside the company manual. When faced with a query that falls into one of Gödel’s gaps – a question where the answer is true but not provable from its data – the AI does not have the human capacity to say, “I do not know,” or to seek entirely new information. Its core programming is to respond. So, it does what the OpenAI paper describes: it auto-completes, or hallucinates. It creates a plausible-sounding reality based on the patterns in its data.

The AI invents a financial figure because the pattern suggests a number should be there. It cites a non-existent regulatory case because the pattern of legal language is persuasive. It designs a product feature that is physically impossible because the training data contains both engineering truths and science fiction.

The AI’s hallucination is not simply a technical failure; it is a Gödelian inevitability. It is the system’s attempt to be complete, which forces it to become inconsistent – unless it says, “I don’t know,” in which case it remains consistent but incomplete. Interestingly, OpenAI’s latest model has a feature billed as an improvement: its “abstention rate” (the rate at which the model admits that it cannot provide an answer) has gone from about 1% in previous models to over 50% in GPT-5.
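The incentive logic behind rewarding abstention can be sketched in a few lines (our back-of-the-envelope illustration with made-up scoring weights, not OpenAI’s evaluation code): if a wrong answer is penalized while “I don’t know” scores zero, answering only pays off above a confidence threshold.

```python
# Why scoring rules shape hallucination (illustrative weights, not any
# benchmark's real scheme). Wrong answers are penalized here, so
# abstaining becomes the rational move below a confidence threshold.

CORRECT, WRONG, ABSTAIN = 1.0, -1.0, 0.0

def expected_score_if_answering(p_correct: float) -> float:
    """Expected score of committing to an answer held with confidence p."""
    return p_correct * CORRECT + (1 - p_correct) * WRONG

def should_abstain(p_correct: float) -> bool:
    """Abstain whenever answering scores worse, in expectation, than silence."""
    return expected_score_if_answering(p_correct) < ABSTAIN

for p in (0.9, 0.6, 0.4, 0.2):
    action = "abstain" if should_abstain(p) else "answer"
    print(f"confidence {p:.0%}: {action} "
          f"(expected score of answering = {expected_score_if_answering(p):+.2f})")
```

Set WRONG to 0 – the usual binary right-or-wrong grading – and should_abstain never returns True: guessing never hurts, which is precisely how, the paper argues, today’s benchmarks train models to guess rather than admit ignorance.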


Implications for business leaders: Strategy in an age of incomplete intelligence

This does not mean that businesses should abandon AI, but rather that they must deploy it with pragmatism and a clear-eyed understanding of its limits. To view AI as either a savior or a pariah is to miss the point entirely. The conversation must pivot from adoption to orchestration. The inherent nature of these systems, as underscored by both OpenAI’s research and Gödel’s cold logic, is not one of eventual infallibility but of constitutional weakness. Hallucination isn’t a bug; it’s in the nature of language models (the ones used by humans, and the ones used by machines). This truth forces a shift in strategy: the ambition is no longer to build an oracle, but to engineer a world-class provocateur.

The executive’s role is to navigate the dissonance and to feel comfortable with the uncertainty of the outcomes that LLMs suggest. Stop seeing the AI as the junior analyst; instead, see it as a designated dissenter, a system engineered to challenge human complacency by generating alternative strategies, probing logical vulnerabilities, and simulating competitor moves with unnerving speed. Instead of seeing hallucinations as the price we pay for the genius of AI, we should build on them to generate better ideas and become aware of our blind spots.

The most robust organizations will be those that institutionalize contradiction, building cultures where every AI-generated insight must survive a gauntlet of human skepticism. The future belongs not to those with the most powerful AI, but to those with the most sophisticated human-AI symbiosis. They will succeed not because their technology is perfect, but because their people are well-equipped to leverage the imperfections of the LLMs. They won’t be consulting an oracle; they will be sparring with a probabilistic machine – and that is something far more powerful.

Authors

José Parra Moyano

Professor of Digital Strategy

José Parra Moyano is Professor of Digital Strategy. He focuses on the management and economics of data and privacy and how firms can create sustainable value in the digital economy. An award-winning teacher, he also founded his own successful startup, was appointed to the World Economic Forum’s Global Shapers Community of young people driving change, and was named on the Forbes ‘30 under 30’ list of outstanding young entrepreneurs in Switzerland. At IMD, he teaches in a variety of programs, such as the MBA and Strategic Finance programs, on the topics of AI, strategy, and innovation.
