Holding two ideas in tension

We are living in a world of deep uncertainty and unprecedented technological change: new startups, new breakthroughs, new job displacements, and new standards challenging what was once believed to be possible.

One of the hardest things I have tried to wrap my head around is how to think about this AI moment and the assumptions implied by the forecasts and statements of leading entrepreneurs and investors. I believe a lot in technology, but I also believe in physics, gravity, and learning from history.

I have heard a version of the argument that, in power-law domains, the game is won in the right tail of the distribution. The mindset is not to focus on what can go wrong, but on what happens if it goes right.

And yet, if there is one thing my finance background and my MBA have trained me to do, it is to interrogate assumptions and reason like a Bayesian: start from an initial set of beliefs, then update them toward posterior expectations as new information arrives. The goal is not to be cynical. The goal is to hold beliefs lightly and keep them open to revision.

The uncomfortable base-rate problem

In the excellent “Bayes and Base Rates” piece, Michael Mauboussin frames the challenge through two marquee examples: OpenAI and Oracle.

“In the fall of 2025, OpenAI projected revenue of $145 billion in 2029. The company’s sales in 2024 were $3.7 billion. That reflects a 5-year compound annual growth rate of 108 percent. Based on a sample of nearly 18,900 firm-period observations for U.S. public companies from 1950 to 2024, no public company has grown this fast for five years in the last three-quarters of a century. The results include all industries. The average compound annual growth rate is 7.0 percent, and the standard deviation is 10.6 percent. The forecast implies a roughly 9.5 standard deviation outcome for OpenAI under a normal approximation, which is extraordinarily unlikely. The math of Bayes’ Theorem does not work if the initial belief is based on an outcome with a probability of zero.”
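The arithmetic in the quote is easy to verify. A few lines of Python reproduce both the implied growth rate and the z-score under the normal approximation (all inputs are the quoted figures; nothing here is my estimate):

```python
# Reproduce the quoted arithmetic: implied 5-year CAGR and z-score
# under a normal approximation of historical growth rates.
rev_2024 = 3.7    # OpenAI 2024 sales, $B (quoted)
rev_2029 = 145.0  # projected 2029 revenue, $B (quoted)
years = 5

cagr = (rev_2029 / rev_2024) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~108%

mean, sd = 0.070, 0.106  # historical mean and standard deviation (quoted)
z = (cagr - mean) / sd
print(f"Z-score: {z:.1f}")  # ~9.6, i.e. the "roughly 9.5" in the quote
```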

This is the part that matters. If we are serious about uncertainty, we do not get to start from “this will happen” just because the story is compelling. The report forces a brutal question: if the implied growth is literally unobserved in the historical reference class, what probability should we assign before we see evidence? And how do we avoid the cognitive trap of treating a probability-zero event as either impossible or inevitable?
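The trap is easiest to see in the odds form of Bayes' Theorem, where posterior odds equal prior odds times the likelihood ratio of the evidence. A literal zero prior is absorbing: no evidence, however strong, can move it. A minimal sketch (the likelihood ratio of 100 is arbitrary, chosen only to stand in for very strong evidence):

```python
def bayes_update(prior_prob, likelihood_ratio):
    """Odds-form Bayes update: posterior odds = prior odds * LR."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

print(bayes_update(0.0, 100))    # 0.0   -- a zero prior never updates
print(bayes_update(0.001, 100))  # ~0.09 -- a small nonzero prior can
```

This is why the sensible starting point is "extraordinarily unlikely," not "impossible."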

A separate but related tension sits underneath this: even if AI is a true regime shift, it does not automatically follow that one firm will capture the majority of the economics on a compressed timeline. A regime shift can be real and still produce outcomes that are widely distributed, heavily competed away, or bottlenecked by constraints.

This is not “software scaling”

It is tempting to think of OpenAI as a pure software company: viral adoption, collapsing marginal costs, compounding distribution. But the reality is messier.

AI scale requires massive investment in GPUs and AI data centers, which combine specialized hardware, power infrastructure, and cooling. This changes the nature of the growth problem. You are no longer just scaling code. You are scaling physical capacity under constraints, with timelines, execution risk, and supply bottlenecks.

OpenAI might not own all that infrastructure directly. It can rent compute, partner, and stay “asset-light” on its own balance sheet. But the system still has to be built somewhere. Asset-light at the corporate level can still be system-heavy at the ecosystem level. That is where base rates matter most.

Updating beliefs with evidence (and staying conservative)

If we take the base-rate starting point seriously, the next move is not to dismiss the forecast, but to ask: what evidence would justify shifting our beliefs upward, and what evidence should push them down?

A reasonable update framework looks like this.

Positive evidence

Adoption and diffusion. ChatGPT reached 100M users in “just 2 months,” faster than major historical analogs (TikTok, Instagram, Facebook, the internet, the telephone). A conservative likelihood ratio range for revenue growth might be 1.5x to 3.0x. It is a strong demand signal, but conversion to paid revenue at scale is not guaranteed.

Near-term revenue trajectory. OpenAI “expects” about $13B in sales in 2025, roughly 250% growth. A conservative likelihood ratio range might be 1.2x to 2.0x. It is evidence of early scaling, but still far from the 2029 level, and subject to deceleration.

Negative evidence

Financing feasibility. The report states that OpenAI’s free cash flow is -$9B in 2025 and is expected to be -$17B in 2026, implying heavy external financing needs. It also cites very high stock-based compensation (SBC), estimated above 45% of sales. A conservative likelihood ratio range might be 0.25x to 0.60x. Capital access is necessary and inherently fragile.

Infrastructure bottlenecks. Large-project base rates are poor, and AI buildouts face constraints from power, hardware, supply chains, and delivery timelines. A conservative likelihood ratio range might be 0.40x to 0.80x.

These likelihood ratios are not measured. They are deliberately conservative placeholders to force explicitness and avoid narrative drift. None of this “proves” anything. It just enforces discipline. You update, you do not declare.
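Here is a minimal sketch of what that discipline looks like in the odds form of Bayes' Theorem, using the placeholder ranges above. The 1% prior is my own illustrative assumption, not a measured base rate:

```python
# Multiply prior odds by each likelihood ratio, then convert back to
# a probability. The prior and the LRs are illustrative placeholders.
def posterior(prior_prob, likelihood_ratios):
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

prior = 0.01  # assumed prior that the 2029 forecast is roughly met

# Order: adoption, near-term revenue, financing, infrastructure.
optimistic = [3.0, 2.0, 0.60, 0.80]   # favorable end of each range
pessimistic = [1.5, 1.2, 0.25, 0.40]  # unfavorable end of each range

print(f"Optimistic posterior:  {posterior(prior, optimistic):.2%}")   # ~2.8%
print(f"Pessimistic posterior: {posterior(prior, pessimistic):.2%}")  # ~0.18%
```

Even with every likelihood ratio at its favorable end, a 1% prior moves to only about 3%. The exact numbers do not matter; what matters is that the update stays anchored to an explicit prior instead of drifting with the narrative.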

Fragility: what must be true for the forecast not to break

Even granting the demand, the forecast breaks if any of the following fail:

Compute economics. If the marginal cost of inference does not fall fast enough, or pricing compresses, revenue must grow even faster in volume terms (see the sketch after this list).

Capacity buildout. Power availability and data-center delivery timelines become binding constraints. This is big-project risk, not just execution in a spreadsheet.

Capital markets. Continued willingness of investors to fund large cash burn and absorb dilution via SBC is not a law of nature.

Competition. If incumbents and other labs commoditize models or capture enterprise distribution, the market-share requirement becomes infeasible.

Market structure and value capture. At that scale the question is not “is AI big?” but “who captures the value, and how much accrues to one firm versus being competed away or commoditized in lower layers?”
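On the compute-economics point, the arithmetic is simple but stark: revenue equals price times volume, so price compression forces volume to grow even faster than revenue. A sketch with an assumed 30% annual price decline (purely illustrative, not a reported figure):

```python
# Revenue = price x volume, so if price per unit of inference falls,
# volume must grow faster than revenue. The ~108% revenue growth is
# the implied forecast; the 30% price decline is an assumption.
revenue_growth = 1.08  # implied annual revenue growth (~108% CAGR)
price_decline = 0.30   # assumed annual price compression

volume_growth = (1 + revenue_growth) / (1 - price_decline) - 1
print(f"Required annual volume growth: {volume_growth:.0%}")  # ~197%
```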

This is the core tension: these forecasts often treat unit economics and adoption as if they were unconstrained, while the binding constraints are inputs the companies do not directly control, including capex, infrastructure delivery, and regulation. In AI and cloud infrastructure, the crucial constraint is the big-project base rate. Even if demand is huge, execution becomes a probabilistic bottleneck: power access, specialized hardware, supply chains, cooling, timelines.

The “priors are broken” argument, and why it is still hard

I could be completely wrong. Two mechanisms could justify meaningfully different base rates:

Radical cost declines. If inference costs fall faster than price compression, the capex constraint loosens materially.

Distribution and monetization innovations. If adoption reliably converts into paid usage at scale, and retention holds, the revenue ramp can look historically anomalous.

But even these are hard to defend cleanly. In the latest Dwarkesh podcast episode with Dario Amodei, CEO of Anthropic, Amodei made the point that compute has to be bought five years in advance, and that the magnitude of these numbers forces a different planning posture. He also explained why he does not buy the idea of “$1 trillion of compute in 2030.”

Even if we grant the broader point that AI could become large enough to support companies at massive scale, the jump from today’s run rates to the implied end-states is still an arithmetic problem, a physical buildout problem, and a financing problem. The gap is not just a forecast error waiting to happen. It is a compounding feasibility problem that requires multiple conditions to hold simultaneously.
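The arithmetic is easy to make explicit. Even if the roughly $13B expected for 2025 materializes, reaching the projected $145B by 2029 still requires about 83% compound growth for four more years:

```python
# Required growth from the expected 2025 run rate to the 2029 projection.
rev_2025 = 13.0   # $B, expected 2025 sales (cited above)
rev_2029 = 145.0  # $B, projected 2029 revenue (from the Mauboussin quote)
years = 4

required_cagr = (rev_2029 / rev_2025) ** (1 / years) - 1
print(f"Required 2025-2029 CAGR: {required_cagr:.0%}")  # ~83%
```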

Where I land, for now

The more I think about it, the more questions I have. The story may be directionally right, but the implied trajectory asks for a sequence of miracles: cost curves, capacity delivery, capital availability, and competitive containment, all lining up under time pressure.

Maybe this is the rare regime shift that breaks priors. Or maybe it is a case study in base-rate neglect dressed up as inevitability. The only honest posture is to keep updating, while refusing to confuse a compelling narrative with a high probability.

At minimum, it is worth acknowledging that forecasts of this kind do not just predict the future. They can also shape it, by changing capital allocation, competitor behavior, and the willingness of the ecosystem to build toward a shared expectation. If the system collectively believes the upside case, it can become self-fulfilling through investment and coordination. If something goes badly wrong, expectations can flip, capital can retreat, and the same dynamics can become self-negating.

“Sometimes these visions can be self-fulfilling; at other times they can be self-negating. Self-fulfilling could be positive or negative. If everybody is open, the system stays open and free trade flows. Then we’re expecting that to keep going and it becomes self-fulfilling. But if something goes badly wrong, people then are expecting bad times, and then things get closed and shut down and that can become self-fulfilling. You can switch between those two regimes.” — W. Brian Arthur, “Placing Bets in a World of Uncertainty”