Nvidia’s upcoming generation of graphics processors for artificial intelligence, codenamed Blackwell, is set to be priced between $30,000 and $40,000 per unit, as revealed by CEO Jensen Huang during an interview with CNBC’s Jim Cramer on “Squawk on the Street.”
Huang said Nvidia had to invent new technologies to make Blackwell possible, holding up a Blackwell chip during the interview and noting that the company spent roughly $10 billion on research and development.
The pricing suggests that Blackwell, expected to be in high demand for training and deploying AI software such as ChatGPT, will land in a similar range to its predecessor, the H100, built on Nvidia’s Hopper architecture, which analysts estimated at $25,000 to $40,000 per chip. The Hopper generation, introduced in 2022, marked a significant price increase over Nvidia’s previous generations of AI chips.
During a subsequent conversation with CNBC’s Kristina Partsinevelos, Huang emphasized that the cost encompasses not just the chip itself but also factors like data center design and integration into other companies’ data centers.
Nvidia typically releases a new generation of AI chips roughly every two years, each faster and more energy efficient than the last, and uses the buzz around a new generation to secure orders for its latest GPUs. Blackwell combines two chips into a single package and is physically larger than its predecessor.
Demand for Nvidia’s AI chips has driven a threefold increase in the company’s quarterly sales since the AI boom began in late 2022 with the launch of OpenAI’s ChatGPT. Most leading AI companies and developers have trained their AI models on the H100 over the past year, with Meta alone announcing plans to buy hundreds of thousands of the GPUs.
While Nvidia doesn’t disclose the list prices for its chips, they are available in various configurations, and the actual price paid by end consumers such as Meta or Microsoft depends on factors like the chip volume purchased and whether the purchase is made directly from Nvidia or through vendors like Dell, HP, or Supermicro, which build AI servers. Some servers are equipped with up to eight AI GPUs.
Recently, Nvidia unveiled at least three versions of the Blackwell AI accelerator—B100, B200, and GB200, the latter pairing two Blackwell GPUs with an Arm-based CPU. These versions feature slight differences in memory configurations and are expected to be released later this year.