Historicism: Why AI Cannot Predict the Future

Executives are using AI to predict market trends. Karl Popper's theory of "Historicism" explains why relying on historical data to forecast human behavior is a fatal strategic trap.
Every decade, the corporate world finds a new oracle. In the 1990s, it was the management consultant with a proprietary matrix. In the 2010s, it was the "Big Data" algorithm. Today, it is Generative AI.

In boardrooms around the world, executives are feeding massive datasets into AI models and asking them the ultimate question: "What is going to happen next?" They want the AI to predict market crashes, consumer trends, and the inevitable outcome of their 5-year strategic plans.

They believe that because the AI has ingested all of human history, it must know the future.

Karl Popper warned us about precisely this illusion in 1957. He called it Historicism: the belief that history develops inexorably according to discoverable laws, and that with enough data we can predict the destiny of human society.

For the Chief Wise Officer, relying on AI as a historicist oracle is not just lazy leadership; it is a fatal strategic error.

The Trap of Historicism

Karl Popper wrote The Poverty of Historicism to dismantle the totalitarian ideologies of his time, like Marxism and Fascism, which claimed to know the "inevitable" direction of history.

Popper proved that predicting the future of human society is logically impossible. His proof was remarkably simple:

  1. The course of human history is strongly influenced by the growth of human knowledge.
  2. We cannot predict, by rational or scientific methods, the future growth of our scientific knowledge. (If you could predict a future discovery today, it would be a present discovery, not a future one).
  3. Therefore, we cannot predict the future course of human history.

When an executive asks an AI to predict the state of the market in three years, they are committing the sin of Historicism. The AI only knows the past. It is a sophisticated autocomplete engine trained on historical data. It cannot predict the fundamentally new ideas, Black Swan events, or sudden shifts in human knowledge that will actually dictate the market in three years.

The Mirror of Yesterday

Why do AI predictions so often look convincing?

Because the algorithm is a master of pattern recognition. If you ask it what the future of e-commerce looks like, it will synthesize the last ten years of e-commerce data and extrapolate a straight line forward. It will write a strategic plan that sounds perfectly logical, using confident executive jargon.

But extrapolation is not prediction. Extrapolation assumes that the rules of the game will not change.

If you had fed all the data in the world into a supercomputer in the year 1990 and asked it to predict the future of human communication, it would have predicted faster fax machines and cheaper long-distance phone calls. It could not have predicted the iPhone, because the iPhone required a leap in human knowledge that did not yet exist in the dataset.

AI is holding up a highly polished mirror to yesterday and calling it tomorrow.

The CWO Strategy: AI as a Synthesizer, Not a Seer

The Chief Wise Officer does not ban AI from the boardroom. Instead, they strictly demarcate its use. You must strip the AI of its historicist power and utilize it for its actual strength: synthesizing the present.

1. Ban Predictive Strategy Prompts

Never ask an AI, "What will our customers want in 2028?" It will hallucinate a highly plausible, dangerous lie. Instead, ask it to synthesize massive amounts of current, unstructured data: "Summarize the core complaints from our last 10,000 customer service transcripts and identify the three most common structural failures." Use AI to see the present clearly, not to guess the future.
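The distinction between a banned predictive prompt and a permitted synthesis prompt can be enforced in code rather than left to habit. A minimal sketch in Python; the `build_synthesis_prompt` helper and the sample transcripts are illustrative assumptions, not part of any real tooling:

```python
def build_synthesis_prompt(transcripts: list[str], top_n: int = 3) -> str:
    """Frame the model's task around the present (synthesis),
    never the future (prediction)."""
    joined = "\n---\n".join(transcripts)
    return (
        f"Summarize the core complaints in the customer service "
        f"transcripts below and identify the {top_n} most common "
        f"structural failures. Do not speculate about future trends; "
        f"report only patterns present in the data.\n\n{joined}"
    )

# Example usage with two toy transcripts (hypothetical data):
prompt = build_synthesis_prompt([
    "The checkout page timed out twice before my order went through.",
    "Support took four days to answer a billing question.",
])
print(prompt)
```

Baking the "no speculation" instruction into the template means no individual analyst can quietly drift the prompt back toward fortune-telling.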

2. Use AI to Hunt for Falsification

Since humans suffer from Confirmation Bias (we look only for data that proves our strategy right), use the AI as a mechanized Murder Board. Feed your proposed strategy into the LLM and prompt it: "Act as a hostile competitor. Ignore all confirming data. Give me five logically sound reasons why this strategy will bankrupt my company in 12 months." You are using the AI's immense processing power to stress-test your ideas rather than validate them.
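The Murder Board prompt can likewise be templated so the adversarial framing is never accidentally softened. A hypothetical sketch; `build_murder_board_prompt` is an illustrative name, not an established API:

```python
def build_murder_board_prompt(strategy: str, horizon_months: int = 12,
                              num_objections: int = 5) -> str:
    """Wrap a proposed strategy in an adversarial,
    falsification-seeking frame."""
    return (
        f"Act as a hostile competitor. Ignore all confirming data. "
        f"Give me {num_objections} logically sound reasons why the "
        f"following strategy will bankrupt the company in "
        f"{horizon_months} months:\n\n{strategy}"
    )

# Example usage with a hypothetical strategy statement:
murder_prompt = build_murder_board_prompt(
    "Shift all retail operations to a subscription-only model by Q3."
)
print(murder_prompt)
```

The key design choice is that the frame ("hostile competitor", "ignore all confirming data") is fixed in the template, while only the strategy text varies, so every proposal receives the same level of attack.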

3. Protect the Human "Unknown"

The most valuable asset in your company is not the historical data you have collected; it is the unpredictable, irrational, creative leaps your employees will make tomorrow. Historicism assumes humans are just variables in a mathematical equation. Strategy, by contrast, is the act of inventing a future that no historical data could have predicted.

Conclusion: The Future is Unwritten

If the future could be perfectly predicted by an algorithm analyzing the past, business would just be a math equation, and leadership would be obsolete.

Do not surrender your strategic agency to a server farm. AI can tell you exactly where you have been with breathtaking clarity. But the future is inherently unknowable. It is not something to be predicted; it is something you must actively create.
