AI Is Not Egalitarian
Revolutionary technologies do not share rewards equally. Avoid becoming part of the permanent underclass.
AI will change the world in more ways than we can imagine, and it will certainly raise the standard of living for all humans. However, the benefits will not be distributed evenly. In our generation especially, we will see a widening discrepancy in wealth, life expectancy, and quality of living between those who can leverage this technology and those who cannot. The reasons for this are threefold: (1) AI is a lever, not a magic box, (2) state-of-the-art models are expensive to run, and (3) AI can never know a fact.
AI as a lever.
“Give me a lever long enough and a fulcrum on which to place it, and I shall move the world” – Archimedes
New technologies are better thought of as levers than as magic boxes that yield fortunate outcomes on their own. A lever amplifies whatever force is applied to it: the skilled become excellent, the ambitious become remarkable, and the unprepared slide into the underclass.
But a lever requires a fulcrum to rest on. In the digital age, that fulcrum is infrastructure – reliable power, high-speed internet, and hardware access. If you lack the fulcrum, the lever is useless. Without the requisite human capital (literacy, training, knowledge) and the necessary infrastructure, you will be left behind. Anything multiplied by zero is zero.
This has been true for every technological revolution, and it will continue to be so. The boom of the internet, for example, minted a level of wealth never seen before, and the wealthy few were those in the right place, at the right time, to leverage the technology. Gates, Musk, Jobs, and Ellison are all examples of individuals well-positioned to pull the lever on the world of bits. The level of wealth in San Francisco is now so absurd that, for an increasing few, the only logical thing to do is literally go to space – an other-worldly level of exorbitance. In essence, if you missed that first wave, you are now part of the excluded underclass.
I think you have roughly five years to avoid being in the permanent underclass.
AI is expensive
State-of-the-art models are expensive to use productively. The flagship subscriptions of the major labs all cost around $200 per month. Nearly half of the world does not earn this in a month[1]. This is currently the largest barrier to entry for the promised productivity gains.
While there are cheaper subscription tiers (typically $20/month), these offer access to weaker models under strict usage limits. We are essentially encoding the stratification into the pricing. The dynamic resembles the divide between those who could afford the first MacBook and those who could not – think about the productivity delta there. The models will get cheaper to run as time goes on, but these price tiers will persist for years. While those with the excess disposable income reap the benefits, the rest of the world is essentially standing still.
It’s important to note that even if the underlying models become commoditized (i.e., open-source models converging on the performance of closed ones), inference will remain expensive at the upper end of performance. A good case in point is the recent release of Gemini 3, which showed that pre-training scaling still delivers gains. Models will continue to get larger, and even as returns diminish, you will need to run the largest models to stay competitive.
AI can never know a fact
A fact is a worldly truth – something derived from nature. We (humans) learn our facts from experience and science. Large Language Models (this generation of AI) do not learn facts in the same way.
This is best illustrated by an example. If an LLM had been trained in 1900, it would have no conception of Special Relativity and would think that physics was solely Newtonian. Einstein published Special Relativity in 1905, and LLMs do not derive their worldview from truth but from data consensus – i.e., fitting their worldview to all data points available[2]. The training data – the corpus of human knowledge at the moment of an LLM’s training – is therefore incredibly important. If something is not in the training data, it is not in the world model.
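A toy sketch makes the point concrete. The model below is a bare unigram frequency counter, nothing like a real transformer, and the 1900-era “corpus” is a hypothetical stand-in; but the failure mode is the same one described above: whatever never appears in the training data receives exactly zero probability mass.

```python
from collections import Counter

# Hypothetical training corpus frozen in 1900: Newtonian physics only.
corpus_1900 = "force equals mass times acceleration newton gravity mechanics".split()

counts = Counter(corpus_1900)
total = sum(counts.values())

def probability(word: str) -> float:
    # The "worldview" is pure data consensus: relative frequency in the corpus.
    # Counter returns 0 for unseen words, so anything outside the corpus
    # simply does not exist in this world model.
    return counts[word] / total

print(probability("newton"))      # appears in the corpus -> nonzero
print(probability("relativity"))  # published in 1905 -> exactly zero
```

Real models smooth probabilities rather than assigning hard zeros, but the principle holds: the corpus bounds the worldview.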
Now, consider that the vast majority of training data currently comes from the internet or is synthetic, which means the knowledge the models encode skews Western/American. RLHF (Reinforcement Learning from Human Feedback) also plays a vital role in the models of today, and this training is done by large American corporations to suit their own use cases.
We are going to get what are essentially “client states” – nations that cannot develop, train, or infer their own models. They will use US or Chinese LLMs.
If you thought the cultural phenomenon of Hollywood was powerful, wait for the impact of LLM homogenized thought.
There really could be a sectioning of the world into US or Chinese spheres of influence in everything – spirit, culture, the lot.
Conclusion: The Divergence
AI requires three things: the skills to pull the lever, a stable fulcrum of infrastructure to act upon, and the capital to pay the admission fee. Without all three, you are not in the game.
You can think of this dynamic as a centrifugal force pushing the ‘haves’ and ‘have-nots’ further apart. We are heading toward a bifurcated world of “Augmented Super-Humans” and “Depreciated Labor.” The window to bridge this gap is closing.
Don’t find yourself on the wrong side.
[1] https://www.worldbank.org/en/news/press-release/2024/10/15/ending-poverty-for-half-the-world-could-take-more-than-a-century
[2] In this example, hold everything else constant: compute power, training set size, etc.

