How Nvidia created the chip powering the generative AI boom (Financial Times)
Some snippets from the FT piece on Nvidia's software approach, cost structure, and user base.
Nvidia originally focused on software to support its gaming GPUs; that early investment in making its chips programmable proved prescient.
Nvidia now has more software engineers than hardware engineers, enabling it to support the many different AI frameworks that have emerged in the years since and to make its chips more efficient at the statistical computation needed to train AI models.
Hopper was the first architecture optimised for “transformers”, the approach to AI that underpins OpenAI’s “generative pre-trained transformer” chatbot. Nvidia’s close work with AI researchers allowed it to spot the emergence of the transformer in 2017 and start tuning its software accordingly.
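To make concrete what "tuning for transformers" targets: the transformer's core operation is scaled dot-product attention, which boils down to large matrix multiplications, exactly the workload GPUs are built for. A minimal NumPy sketch (shapes and values are illustrative, not from the article):

```python
# Scaled dot-product attention: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V.
# This is the core transformer computation that Hopper-class GPUs are tuned to accelerate.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)  # pairwise token similarities
    return softmax(scores) @ V                       # attention-weighted sum of values

# Toy example: a sequence of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8)
```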
“Nvidia arguably saw the future before everyone else with their pivot into making GPUs programmable,” said Nathan Benaich, general partner at Air Street Capital, an investor in AI start-ups. “It spotted an opportunity and bet big and consistently outpaced its competitors.”
Costs are high, and so long as the competitive advantage persists, that means strong revenue for Nvidia.
Huang’s confidence in continued gains stems in part from being able to work with chip manufacturer TSMC to scale up H100 production to satisfy exploding demand from cloud providers such as Microsoft, Amazon and Google, internet groups such as Meta, and corporate customers.
“This is among the most scarce engineering resources on the planet,” said Brannin McBee, chief strategy officer and founder of CoreWeave, an AI-focused cloud infrastructure start-up that was one of the first to receive H100 shipments earlier this year.
Some customers have waited up to six months to get hold of the thousands of H100 chips they want to train their vast AI models. AI start-ups had expressed concerns that H100s would be in short supply at just the moment demand was taking off.
Elon Musk, who has bought thousands of Nvidia chips for his new AI start-up X.ai, said at a Wall Street Journal event this week that at present the GPUs (graphics processing units) “are considerably harder to get than drugs”, joking that was “not really a high bar in San Francisco”.
“The cost of compute has gotten astronomical,” added Musk. “The minimum ante has got to be $250mn of server hardware to build generative AI systems.”
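To give that figure scale, a back-of-envelope sketch; the per-GPU price is a hypothetical assumption for illustration only (the article quotes just the $250mn total):

```python
# Rough sense of what a $250mn "minimum ante" buys in GPUs.
budget = 250_000_000       # Musk's stated minimum server-hardware spend
price_per_gpu = 30_000     # hypothetical H100 unit price, illustration only
print(f"~{budget // price_per_gpu:,} GPUs")  # ~8,333 GPUs, before networking and hosting
```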
Big Tech and AI start-ups using Nvidia's H100 chip
The H100 is proving particularly popular with Big Tech companies such as Microsoft and Amazon, which are building entire data centres around AI workloads, and with generative-AI start-ups such as OpenAI, Anthropic, Stability AI and Inflection AI, because it promises higher performance that can accelerate product launches or reduce training costs over time.
Competition
- Nvidia
- TSMC
- ASML
- Advantest
- Tokyo Electron
Tags #AI #nvidia #chip-manufacturing-model #H100-chip #competitive-differentiator