
AI’s chip wars are just getting started

New Delhi
Mar 23, 2024 06:04 AM IST

Tech companies are racing to dominate hardware for generative AI, and Nvidia leads the pack with its Blackwell superchip. Competitors including AMD, Microsoft, Amazon, Meta, and Google are also in the game, aiming to reduce reliance on traditional chip makers. The hardware focus matters all the more as AI processing moves from the cloud to on-device, with Intel, AMD, and Qualcomm integrating AI-specific chips into PCs and smartphones.

As tech companies pursue supremacy in generative artificial intelligence (AI), a related contest is heating up: the battle to be the hardware leader. Dominating this physical layer of computing will cement a company as a crucial cog as the world adopts novel, at-scale AI technologies. Tech companies are building chips not just for AI firms to train new generative models on, but also for the consumer devices that will run applications and tools built on those models.

NVIDIA's founder and CEO Jensen Huang (AFP)

Training AI models needs a lot of processing power. A couple of years ago, Nvidia began a gradual pivot from gaming and graphics chips towards hardware for AI. Few could have foreseen then that Nvidia would touch a trillion-dollar valuation, or that its H100 AI graphics chip would be in such demand that the company could not make them quickly enough. Jensen Huang, CEO of Nvidia, recently said that the new $30,000 (around ₹25,00,600) successor, the GB200 Grace Blackwell superchip, is an “engine to power this new industrial revolution.”

Nvidia’s Blackwell is a significant step forward, unlocking the power to build, train, and run real-time generative AI on trillion-parameter large language models. Here’s an illustration: training a 1.8 trillion-parameter model would have required as many as 8,000 previous-generation Hopper GPUs consuming about 15 megawatts, whereas the same task can be achieved with just 2,000 Blackwell GPUs consuming 4 megawatts.
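A quick back-of-the-envelope check, using only the figures quoted above, shows where the claimed gain actually comes from (a rough sketch, not an Nvidia benchmark):

```python
# Rough arithmetic from Nvidia's quoted figures for training a
# 1.8 trillion-parameter model (illustrative only).
hopper_gpus, hopper_mw = 8_000, 15
blackwell_gpus, blackwell_mw = 2_000, 4

gpu_reduction = hopper_gpus / blackwell_gpus              # 4.0x fewer GPUs
power_reduction = hopper_mw / blackwell_mw                # 3.75x less total power
kw_per_hopper = hopper_mw * 1_000 / hopper_gpus           # ~1.9 kW per GPU
kw_per_blackwell = blackwell_mw * 1_000 / blackwell_gpus  # 2.0 kW per GPU

print(f"{gpu_reduction:.2f}x fewer GPUs, {power_reduction:.2f}x less power")
print(f"Per GPU: Hopper ~{kw_per_hopper:.2f} kW vs Blackwell ~{kw_per_blackwell:.2f} kW")
```

Notably, per-GPU power draw barely changes; the efficiency win comes from needing a quarter as many chips for the same job.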

“Generative AI is the defining technology of our time. Working with the most dynamic companies in the world, we will realise the promise of AI for every industry,” said Huang, calling for greater AI collaboration while announcing the new chips this week.

Nvidia needed to reinforce that advantage, as competitors are hard at work building their own AI chips. AMD’s MI300 chips, which will be available to enterprises, start-ups, and cloud service providers this year, offer more memory capacity and lower power consumption for training and running large language models (LLMs). One of the chips, the MI300A, will power the El Capitan supercomputer built by Hewlett Packard Enterprise at the Lawrence Livermore National Laboratory.

Alongside, tech companies envision unique use-cases that would work better with custom chips. Microsoft’s Azure Maia 100 and Cobalt 100 chips, Amazon’s second-generation Trainium, Meta’s MTIA, and Google’s Tensor Processing Units are examples of growing industry-wide momentum to reduce reliance on traditional chip makers, including AMD and Intel, which often face production and inventory challenges and can demand a premium on the price tag.

The industry’s hardware pivot draws on successful examples from the consumer space: Apple’s M-series silicon for Macs and iPads delivers much better performance and efficiency than Intel’s off-the-shelf chips, Google has reaped rewards from AI integration with its Tensor chips in Pixel phones, and Samsung worked with Qualcomm on custom Snapdragon chips for its latest flagship phones.

Microsoft’s Azure Maia and Cobalt chips could power much of its cloud services arsenal, including Copilot and Microsoft 365; the company presently uses chips from Intel, AMD, and Nvidia for cloud and AI. The template is similar for the likes of Meta. “MTIA provides greater compute power and efficiency than CPUs, and it is customized for our internal workloads,” noted Santosh Janardhan, Meta’s VP & Head of Infrastructure, when detailing the chips last year.

A pursuit of collaborative simplification led the Open Compute Project (OCP) late last year to adopt a standard for the next generation of data formats for training AI models. Microsoft, AMD, Nvidia, Intel, Arm, Meta, and Qualcomm are part of the OCP. Simplified formats allow AI silicon to execute calculations more efficiently, speeding up model training.
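As a rough illustration of why narrower, standardized number formats speed things up, consider the memory saved by halving weight precision (FP16 stands in here for the newer formats the standard covers, which this article does not name):

```python
import numpy as np

# Illustrative only: casting model weights from 32-bit to 16-bit floats
# halves memory and bandwidth needs at the cost of some precision.
# Standardized narrower formats push the same trade-off further.
weights_fp32 = np.random.randn(1_000_000).astype(np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

print(f"FP32: {weights_fp32.nbytes / 1e6:.1f} MB")  # 4.0 MB
print(f"FP16: {weights_fp16.nbytes / 1e6:.1f} MB")  # 2.0 MB
print(f"Max rounding error: {np.abs(weights_fp32 - weights_fp16).max():.6f}")
```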

“As an industry, we have a unique opportunity to collaborate and realise the benefits of AI technology, which will enable new use cases. This requires a commitment to standardization for AI training and inference so that developers can focus on innovating,” says Ian Bratt, fellow and senior director of Technology at Arm.

For now, Blackwell’s customers are lining up, with confirmation that Amazon Web Services, Dell, Google, Meta, Microsoft, OpenAI, Oracle, Tesla, and xAI will adopt Nvidia’s new chip at some point this year, even as their own silicon efforts continue in parallel.

Pursuing AI phones and PCs

As incredibly powerful hardware works behind the scenes to train the AI models that find a place on our phones and computers, wheels are in motion to move much of this AI processing from the cloud to ‘on-device’. That will require the chips powering everyday devices to meet new performance demands. The approach is also more attuned to data privacy, since AI interactions would not then be transmitted online. “The renaissance of the personal computer,” as Huang called it when speaking at the HP Amplify conference this month.

Intel’s latest PC chips integrate a neural processing unit, or NPU, alongside the CPU and GPU (graphics processing unit). Its sole task is to run AI workloads on-device. For now, PC makers are using it for functionality such as noise cancellation and webcam motion tracking in video calls, but there is a longer-term vision for the NPU. Generative AI assistants such as Google Gemini and OpenAI’s ChatGPT could gain the option of processing conversations locally. Microsoft’s Copilot in Windows is a prime candidate to lead this transition.
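As a minimal sketch of what targeting an NPU can look like in practice, here is how an application using ONNX Runtime (one common inference library) might prefer an NPU-backed execution provider and fall back to the CPU; the provider names are vendor-specific and the model file is hypothetical:

```python
import onnxruntime as ort

# Prefer an accelerator-backed execution provider when present, else CPU.
# Provider names vary by vendor; "QNNExecutionProvider" targets Qualcomm
# NPUs and "DmlExecutionProvider" targets DirectML on Windows, for example.
available = ort.get_available_providers()
preferred = [p for p in ("QNNExecutionProvider", "DmlExecutionProvider") if p in available]
providers = preferred + ["CPUExecutionProvider"]

# "assistant_model.onnx" is a hypothetical local model file.
session = ort.InferenceSession("assistant_model.onnx", providers=providers)
print("Running on:", session.get_providers()[0])
```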

AMD is set to respond with its Ryzen 8040 series, expected in new PCs in the coming months. It too integrates an NPU, with AI processing claimed to be 1.6 times faster than on previous chips.

Though Samsung may have been the most vocal about ‘AI phones’ with its latest flagship devices, it is Qualcomm’s chips that provide the foundation. The company’s new Snapdragon 8s Gen 3 and Snapdragon 7+ Gen 3 mobile platforms further extend AI capabilities; the former can handle on-device generative AI models of up to 10 billion parameters. These platforms now support LLMs including Baichuan-7B, Llama 2, Gemini Nano, and Zhipu ChatGLM, a step forward for smartphones.
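A rough memory estimate shows why 10 billion parameters is a meaningful ceiling for a phone (assumed precision levels, weights only, illustrative):

```python
# Approximate weight memory for a 10-billion-parameter model at
# different precisions (activations and KV cache add more on top).
params = 10_000_000_000

for name, bytes_per_weight in [("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    gb = params * bytes_per_weight / 1e9
    print(f"{name}: ~{gb:.0f} GB")  # FP16 ~20 GB, INT8 ~10 GB, INT4 ~5 GB
```

Even at 4-bit precision the weights alone approach the memory budget of a flagship phone, which is why aggressive quantization underpins on-device LLMs.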
