When Sam Altman was forced out of OpenAI for that now-famous weekend in November 2023, employees revolted. Nearly the entire staff threatened to quit unless he returned. Altman went to X and posted his rallying cry, “OpenAI is nothing without its people.” It was not just a PR line. It was a fact, and the company’s survival depended on it.
Two years later, that statement is still true. It is also at the root of a much bigger problem that is unfolding in plain sight.
The Talent Arms Race Is Out of Control
Mark Zuckerberg’s Meta has reportedly been offering hundreds of millions, sometimes billions, in multi-year packages to poach AI researchers from OpenAI and other rivals. Google (Gemini), Microsoft (Copilot), and Anthropic (Claude) are all doing the same. Offers are bigger, equity packages are richer, and the ability to cash out those equity stakes quickly has become the most effective recruiting tool.
Liquidity, not just salary, is now the main battlefield. The companies that allow employees to realize their equity gains while valuations remain sky-high are winning the race for talent. In this market, cashing out early means locking in life-changing wealth before the bubble bursts.
Commoditization Is Already Happening
Here is the reality few in these bidding wars want to say out loud. Large language models are becoming commodities. Every major player has a capable model, and the differences are shrinking. The speed of iteration and access to resources still matter, but the initial “wow factor” that ChatGPT delivered in late 2022 has faded.
That means two things. First, talent is less about building the one groundbreaking model and more about extracting small, incremental gains from something competitors already have. Second, real differentiation will come from vertical integration, niche specialization, or proprietary data, not from simply hiring more PhDs to do the same work.
The Profit Problem and the Bubble
None of these models are profitable for the companies running them. Operating them is expensive, and the infrastructure costs are staggering. Yet valuations and compensation packages act as if AI companies were printing money. This is classic bubble behavior: massive spending, zero profit, and an assumption that dominance today will pay off tomorrow.
The risk comes when tomorrow arrives. If large language models continue to converge in quality, the moat will shrink, margins will tighten, and the giants will have to explain why they spent billions chasing talent in a sector where the core product has become interchangeable.
OpenAI’s Million-Dollar Band-Aid
OpenAI’s recent multi-million-dollar retention bonuses for top staff illustrate how this cycle works. These payouts, combined with equity that can be converted to cash sooner, are meant to keep employees from leaving. In the short term, the strategy works. In the long term, it fuels the inflationary cycle that is making the bubble bigger.
The Next 12 Months
If Altman’s original statement was true, that OpenAI is nothing without its people, then the reverse is also true: the people know it. In this environment, loyalty lasts only until a better offer comes along.
When the market corrects, we will see how much of today’s AI dominance is built on real competitive advantage and how much is the result of overpaid talent chasing liquidity.
Until then, the companies with the deepest pockets and the fastest path to equity cash-outs will keep winning the arms race. The bubble will keep inflating, while the LLM companies that cannot keep up will either fail, be absorbed by rivals that can continue the race, or retreat into a focused niche such as search, code generation, or another specialized vertical.
Frequently Asked Questions
Q: What did Sam Altman post on X during his removal from OpenAI?
A: During the November 2023 weekend when he was removed, Altman posted, “OpenAI is nothing without its people.” The line became the rallying cry for hundreds of employees who threatened to resign unless he was reinstated.
Q: Why is liquidity so important in the AI talent war?
A: Liquidity allows employees to cash out their stock options quickly, turning paper wealth into real money. In a high-valuation, high-risk environment, securing those gains now is often more attractive than waiting out long vesting schedules.
Q: Are large language models really becoming commodities?
A: Yes. The gap in capability between models from OpenAI, Google, Meta, Anthropic, and others has narrowed. As performance converges, differentiation is shifting toward specialized applications and proprietary datasets rather than purely model quality.
Q: Why are none of the major LLM companies profitable?
A: The costs of training and running large models are extremely high, especially when serving millions of users. Infrastructure, compute, and energy expenses often exceed revenue, which is why even market leaders rely heavily on investor funding.
Q: What could happen if the AI bubble bursts?
A: Companies unable to maintain the capital and talent needed to compete may fail outright, be acquired by larger rivals, or pivot to narrow AI specialties like search, code generation, or healthcare-focused solutions.