
GOOGL, MSFT, AMZN & META Pose Threat to NVDA With In-House Chips


As the artificial intelligence (AI) race heats up, driven primarily by the boom in generative AI, demand for cutting-edge chips that can handle massive AI workloads keeps rising. These chips pack the computing power needed to train large language models seamlessly.

Against this backdrop, the strong push by the cloud trio — Amazon (AMZN - Free Report), Microsoft (MSFT - Free Report) and Alphabet (GOOGL - Free Report) — as well as social media giant Meta Platforms (META - Free Report), to bring their own AI-specific custom chips to the table is noteworthy.

The effort underscores a drive toward self-sufficiency in the intensifying AI battle: building chips in-house should accelerate the development of AI models and, in turn, speed up these companies' broader AI advancements.

Will NVIDIA Face Challenges?

NVIDIA (NVDA - Free Report), which currently sports a Zacks Rank #1 (Strong Buy), is riding on solid demand for its next-generation chips for AI models. Reportedly, it holds more than 90% of the AI chip market. You can see the complete list of today’s Zacks #1 Rank stocks here.

However, these tech giants' strategy of building chips in-house to reduce dependency on outside chip providers does not bode well for NVIDIA, as it will stiffen competition in the chip market.

Moreover, NVDA risks losing market share if its major customers start making their own chips, which is a significant concern.

Notably, the waiting time for NVIDIA's flagship AI chip stretches for months, contributing to the scarcity. NVIDIA can produce only a limited number of chips at a time because it depends on Taiwan Semiconductor to manufacture its chip designs.

This supply constraint is a primary reason why the AI behemoths are now developing their own chips.

Even if the chips made by Google, Microsoft, Meta and Amazon are not as powerful as NVIDIA's, they can be customized to each company's workloads, saving both cost and time.

YTD Price Performance

Image Source: Zacks Investment Research

How Do GOOGL, META, MSFT & AMZN Fare in the AI Chip Race?

Alphabet’s Google recently heated up the AI battle with the introduction of Axion, a central processing unit (CPU) built to support its AI work in data centers. Axion is the company's first custom Arm-based CPU, designed to deliver robust performance and energy efficiency. Google says Axion-based instances offer up to 30% better performance than the fastest general-purpose Arm-based instances currently available.

Alphabet, which carries a Zacks Rank #3 (Hold) at present, intends to scale Google services like Bigtable, Spanner, BigQuery, Blobstore, Pub/Sub, Google Earth Engine and the YouTube Ads platform on Axion soon.

Google will make Axion available to its cloud customers later this year. Customers will be able to use Axion in cloud services like Google Compute Engine, Google Kubernetes Engine, Dataproc, Dataflow and Cloud Batch.

Following Google’s move, Meta announced advanced AI infrastructure to push deeper into the AI chip race. It introduced the next generation of the Meta Training and Inference Accelerator (MTIA), a family of custom chips designed for Meta’s AI workloads. The new MTIA will support generative AI products and services, recommendation systems, and advanced AI research. It builds on MTIA v1, the Zacks Rank #2 (Buy) company's first-generation AI inference accelerator, which was designed in-house around Meta’s AI workloads.

Notably, the new MTIA more than doubles compute and memory bandwidth, and can seamlessly manage the ranking and recommendation models that serve high-quality recommendations to users. The accelerator features an 8x8 grid of processing elements, delivering increased dense compute performance (3.5x over MTIA v1) along with improved sparse compute performance.

Meanwhile, Microsoft unveiled two custom chips, Maia 100 and Cobalt 100, late last year. While the Arm-based Cobalt 100 is built for general computing tasks, Maia 100 is designed for AI workloads.

Notably, the Maia 100 AI accelerator can run cloud AI workloads such as large language model (LLM) training and inference. Built on TSMC's 5-nanometer process, it packs 105 billion transistors. The Zacks Rank #3 company collaborated with OpenAI on the design and testing of Maia, and the chip is currently being tested on Bing's AI chatbot Copilot, the GitHub Copilot coding assistant and GPT-3.5-Turbo.

Amazon, which carries a Zacks Rank #3, unveiled its AWS Trainium2 chips, made for training and running AI models. The chips are designed to power the highest-performance compute on AWS, training foundation models faster, at lower cost and with less energy consumption. They can deliver up to four times better performance and two times better energy efficiency than the first-generation Trainium, introduced in 2020.

Notably, a cluster of 100,000 Trainium2 chips can train a 300-billion-parameter large language model in weeks versus months. Trainium2 chips will be available in Amazon EC2 Trn2 instances, each containing 16 Trainium2 chips.
