Nvidia’s rise to the top should surprise no-one
Aiden Heke
Chief Executive Officer, Decision Inc. Australia
AI will heavily drive demand for its chips.
Graphics Processing Units (GPUs) have undergone a remarkable transformation from hiding in the bowels of gamer PCs to becoming the brain behind generative artificial intelligence (AI) and subsequently a Wall Street darling.
For Nvidia, the primary GPU protagonist, this meteoric rise seems to have come from nowhere. It has left the company holding a market position that is the envy of every organisation on the planet, akin to being the only organisation that can create oxygen.
But dig deeper into the company's story and you can see this overnight success has been years in the making. And for Australian organisations looking to dive into the AI profit pool, this market dominance, combined with an emerging chip scarcity, risks leaving us lapped.
Nvidia built an enviable reputation for market-leading products with incredible parallel processing power, and that is no accident.
CEO Jensen Huang’s foresight in steering Nvidia towards AI was grounded in the belief that GPUs were ideal for AI’s computational demands.
This wasn’t just about creating a new product but about envisioning a future where Nvidia’s GPUs would be at the heart of AI.
So they went about making it happen.
Nvidia pushed its chips (no pun intended) into the middle of the table when it developed CUDA, a comprehensive platform that extended GPU capabilities beyond gaming to support complex computations needed for AI and deep learning.
A key cog in Nvidia’s game plan was ensuring CUDA was closely tied to its hardware, which meant that any advancements in AI and deep learning fuelled by CUDA drove concurrent demand for Nvidia’s GPUs.
This closed, integrated ecosystem approach, similar to Apple’s product strategy, created a deep competitive moat and revenue stream.
Beyond just hardware, Nvidia also invested in creating an entire ecosystem around CUDA, including libraries, SDKs, and tools that made it easier for developers to build AI applications.
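For readers who haven't seen what CUDA code actually looks like, here is a minimal, illustrative sketch (not drawn from the original article) of a CUDA C++ kernel that adds two vectors: each of thousands of GPU threads handles a single element, the same parallel pattern that deep-learning frameworks exploit at a vastly larger scale.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Each GPU thread adds one pair of elements; thousands of threads run in parallel.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                 // one million elements
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified memory is accessible from both the CPU and the GPU.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);

    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough thread blocks to cover all n elements.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(a, b, c, n);
    cudaDeviceSynchronize();               // wait for the GPU to finish

    printf("c[0] = %.1f\n", c[0]);         // expect 3.0

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The point of the ecosystem is that developers rarely write kernels like this by hand; CUDA's libraries and SDKs wrap this parallelism up for them, which is exactly what tied AI workloads to Nvidia hardware.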
The significance of Nvidia’s shift became starkly apparent with the success of AlexNet in 2012, an AI model that dramatically outperformed existing technologies in image recognition.
This success, powered by CUDA on Nvidia GPUs, marked a “big bang” moment for AI, validating Nvidia’s bet on CUDA and setting the stage for the company’s explosive growth in the AI field.
This brought pivotal industry figures to the fore, including Alex Krizhevsky, the PhD student who led the winning AlexNet team; his collaborator Ilya Sutskever (whom you may recognise from OpenAI); and Geoff Hinton, who has ironically become one of the main proponents of slowing down AI development while global guard rails are codified.
Nvidia’s vision extended beyond individual GPUs to an integrated data centre architecture optimised for AI.
By offering a comprehensive stack of home-grown and acquired hardware and software, Nvidia enables organisations to leverage AI at scale, facilitating advances in cloud computing, autonomous technology, and more.
But is their position insurmountable?
Who else is emerging in this critical race – some would say a race for humanity – to innovate fastest? A growing field of challengers is pushing AI hardware’s potential to its limits, seeking to bring to market solutions that address the ever-evolving requirements of AI applications.
As recently as last week, Sam Altman of OpenAI was reported to be seeking in the vicinity of $7 trillion (Yes, TRILLION – amazingly, that’s not a misprint) to design and manufacture AI chips and reduce the reliance on Nvidia.
In reality, according to The Information, what he has told people privately is that the quoted figure represents the sum total of investments that investors would need to make over time, from real estate and power for data centres to the manufacturing of the chips.
But this rise of potential competitors has not dented Nvidia’s stock; in fact, its valuation surpassed that of the entire Chinese stock market this week. Much to my sadness, I considered investing in the company in April last year and ultimately didn’t.
Already, we see a lag in AI chip availability in some quarters of the Australian industry, and this chip scarcity threatens Australia’s ability to benefit economically from AI development.
Now, Nvidia has mega competitors such as Meta, Google and Amazon, plus smaller, newer entrants, nipping at its heels.
And Databricks CEO Ali Ghodsi has predicted that prices for these chips will plummet over the next year.
But this is a far more complex equation than trying to create the fastest, most efficient chips available.
Nvidia has built an industry platform, akin to Apple’s formidable competitive moat.
More entrants are needed, and more chips need to become available. Most importantly, they need to match Nvidia’s chips for speed and efficiency; otherwise, Australia could find itself ranked among the have-nots, competing with the haves.
We need to watch this space carefully and pay attention to the new entrants and their end-to-end propositions, or Australia’s AI lights may go out before they can fully shine.
Original article source: Information Age