Matching capital spending with revenue


Several trillion dollars will be spent on data centers over the next few years, and Nvidia is the primary beneficiary of that spending. This has been apparent for about two years. What's changed over the last few months is the mix of what those data centers will do.

Skipping ahead to the conclusion: 

The shift is from mostly building large language models (LLMs) to mostly using large language models. This has much clearer revenue implications for Nvidia’s customers and will justify this capital spending with more predictable returns on invested capital.

Since the release of ChatGPT in late 2022, Nvidia GPUs (Graphics Processing Units) have seen insatiable demand as several companies have built large “frontier models.” A significant amount of the computation done by Nvidia hardware is for training these models—basically, getting the models to learn from very large amounts of text. 

Hundreds of billions of dollars have been spent building the infrastructure for training LLMs. But Wall Street has been skeptical of this spending because it has become clear that these models are essentially commodities without much inherent value. Building and training a model doesn't directly generate revenue, so how could it earn an adequate return on capital?

This concern is valid, but it misses a larger point. Like the movie Field of Dreams, we think "if you build it, they will come" is how this must play out. In other words, a company needs to build a model first; the revenue then comes from getting customers to use the model. The technical term for using a model is inference: getting the model to generate an output. Inference is where companies can generate revenue.

Recently, newer models have begun doing multistep "reasoning" instead of giving a "one-shot" answer. Think of it this way: before reasoning, these models would essentially answer like they were taking a pop quiz, with no studying, just the first answer that comes to mind. And just like a student taking a pop quiz, that can be hit or miss.

When teachers instead give an "open book" test, each student gets to read the source material, organize their thoughts, and arrive at a better, more complete answer. Preparing and thinking for longer produces better answers.

In the context of LLMs, "thinking" means much more inference, sometimes 10x or 100x as much computation, which means more revenue for the companies hosting these models. Wall Street has gotten this wrong over the last few months: shifting toward more inference means much more demand for computing, not less.

With the advent of DeepSeek R1 and other reasoning models, we have begun a step change in the demand for AI computing. The largest and most efficient computing systems are more valuable than they were just a few months ago. The leader in building these AI computing factories, along with its many large customers, will see the benefits in profitable revenue growth and returns on invested capital.

 

Best regards,

Evan McGoff

 

Disclosure: Dock Street Asset Management, Inc. and/or our clients may own Nvidia (NVDA). This article is not intended to be used as investment advice.

Dock Street Asset Management, Inc. is an investment adviser registered with the U.S. Securities and Exchange Commission. You should not assume that any discussion or information contained in this letter serves as the receipt of, or as a substitute for, personalized investment advice from Dock Street Asset Management, Inc.

It is published solely for informational purposes and is not to be construed as a solicitation nor does it constitute advice, investment or otherwise.

To the extent that a reader has questions regarding the applicability of any specific issue discussed above to their individual situation, they are encouraged to consult with the professional advisor of their choosing.

A copy of our Form ADV Part II regarding our advisory services and fees is available upon request.

Our comments are an expression of opinion. While we believe our statements to be true, they depend on the reliability of our sources, which we believe to be credible. Past performance is no guarantee of future returns.