On Monday, Nvidia (NVDA) CEO Jensen Huang unveiled the company's highly anticipated Blackwell graphics processing unit (GPU) at its annual GTC conference in San Jose, California.
Blackwell is the successor to Nvidia's already coveted H100 and H200 GPUs, which the company says are the world's most powerful chips. The H100 and H200 have become the go-to GPUs for AI applications, and over the past few quarters they have helped Nvidia's data center revenue soar.
In its most recent quarter alone, the company reported data center revenue of $18.4 billion. To put that segment's growth in perspective, Nvidia's total annual revenue for all of 2022 was $27 billion.
“For 30 years, we have pursued accelerated computing with the goal of enabling transformative breakthroughs such as deep learning and AI,” Huang said in a statement.
“Generative AI is the defining technology of our time. Blackwell GPUs are the engine powering this new industrial revolution. By working with the world’s most dynamic companies, we will realize the promise of AI for every industry.”
Like the prior-generation Hopper GPUs, Blackwell GPUs are available as standalone chips, or two Blackwell GPUs can be combined with Nvidia's Grace central processing unit (CPU) to create what's called the GB200 superchip.
According to the company, this setup delivers up to 30x performance improvements and up to 25x lower power consumption compared to Nvidia H100 GPUs for large language model inference workloads. Energy conservation is an important part of this story.
Nvidia customers such as Microsoft (MSFT), Amazon (AMZN), Google (GOOG, GOOGL), Meta (META), and Tesla (TSLA) are currently using or actively developing their own AI chips as replacements for Nvidia's products. One reason is to avoid paying the tens of thousands of dollars that Nvidia's chips are estimated to cost. Another is that Nvidia's chips are particularly power-hungry.
By highlighting the energy savings of its Grace Blackwell superchip, Nvidia is responding directly to those customer concerns.
Nvidia says Amazon, Google, Microsoft and Oracle (ORCL) will be the first companies to start offering access to Blackwell chips through their cloud platforms.
In addition to the Blackwell and Grace Blackwell chips, Nvidia also debuted its DGX SuperPOD supercomputer system. The DGX SuperPOD consists of eight or more DGX GB200 systems, each containing 36 Grace Blackwell 200 (GB200) superchips, that operate together as a single computer. Nvidia says customers can scale a SuperPOD up to tens of thousands of GB200 superchips depending on their needs.
The DGX SuperPOD also features a new liquid-cooled rack-scale architecture, meaning the system is cooled by fluid circulating through a series of pipes and radiators rather than by direct fan-based air cooling, which is less efficient and more energy-intensive.
The Blackwell GPU and GB200 superchip will sit at the top of Nvidia's lineup for AI training and inference, so they are likely to be in high demand as soon as they hit the market.
But competitors AMD (AMD) and Intel (INTC) aren't sitting idly by either. Both are developing their own chips in hopes of eventually catching up with Nvidia.
Daniel Howley is the technology editor at Yahoo Finance. He has covered the technology industry since 2011. You can follow him on Twitter @DanielHowley.