Nvidia (NVDA) is the king of AI. The company's share of the global AI chip market is estimated to be between 70% and 90%. High-performance graphics processors, perfect for training and working with AI models, are in such high demand that they can be hard to come by.
In June, when the AI boom was in full swing, Nvidia's market capitalization crossed the $1 trillion mark. And on Friday, the company's stock hit an all-time high of $549.91.
It's not just hardware that's helping Nvidia stay ahead of its rivals. The company's CUDA software, which developers use to build AI applications on its chips, is equally important to Nvidia's staying power.
“Software continues to be Nvidia's strategic moat,” explains Gartner Vice President Analyst Chirag Dekate. “These … turnkey experiences put Nvidia at the forefront of mindshare and adoption.”
Nvidia's lead didn't happen overnight. The company has been working on AI products for years, even as investors have questioned the move.
“To NVIDIA's credit, about 15 years ago they started working with universities to find new things that GPUs could do beyond gaming and visualization,” explained Moor Insights & Strategy CEO Patrick Moorhead.
“What NVIDIA is doing is helping create a market, which puts competitors in a very tough spot, because by the time they catch up, NVIDIA has moved on to the next new thing,” he added.
But threats to Nvidia's dominance are growing. Rivals Intel (INTC) and AMD (AMD) are gunning for their own slice of the AI pie. AMD debuted its MI300 accelerator in December, designed to go head-to-head with Nvidia's data center accelerators. Meanwhile, Intel is building its Gaudi 3 AI accelerator, which will also compete with Nvidia's products.
And it's not just AMD and Intel. Hyperscalers, including cloud service providers Microsoft (MSFT), Google (GOOG, GOOGL), Amazon (AMZN), and Meta (META), are developing their own ASICs, or application-specific integrated circuits.
Think of AI graphics accelerators from Nvidia, AMD, and Intel as jacks-of-all-trades. They can be used for a wide variety of AI-related tasks, handling whatever businesses throw at them.

ASICs, on the other hand, are masters of one trade. They are built specifically for a company's unique AI needs and are often more efficient at those tasks than graphics processors from Nvidia, AMD, and Intel.
That's a problem for Nvidia, because hyperscalers currently spend heavily on its AI GPUs. As they shift workloads onto their own ASICs, their need for Nvidia's chips may fade.
Overall, though, Nvidia's technology remains well ahead of its competitors'.
“They have a long-term research pipeline to continue to drive the future of GPU leadership,” Dekate explained.
Another thing to keep in mind about AI chips is how they are used. The first use is building AI models, a process called training. The second is running those models so that people can generate the specific output they want, whether that's text, images, or another format entirely. This is called inference.
OpenAI's ChatGPT and Microsoft's Copilot both rely on inference: every time you send a request to either program, AI accelerators are used to generate the text or images you asked for.
Over time, inference is likely to become the primary use case for AI chips as more companies look to leverage different AI models.
Still, the AI explosion is only just beginning, and the vast majority of companies that stand to benefit from AI have yet to adopt it. So even if Nvidia's market share takes a hit, its revenue should keep climbing as the AI field booms.
Daniel Howley is the technology editor at Yahoo Finance. He has been covering the technology industry since 2011. You can follow him on Twitter @DanielHowley.