
Unveiling the WSE-3: Cerebras Reveals the World’s Fastest AI Chip with Nearly One Million Cores Targeting Nvidia

Cerebras Systems has introduced the WSE-3, which it bills as the world's fastest AI chip. The WSE-3, the powerhouse behind the Cerebras CS-3 AI supercomputer, is said to deliver twice the performance of its predecessor, the WSE-2, at the same power consumption and price.

The new chip can train AI models with up to 24 trillion parameters, a significant step up from previous generations. Built on TSMC's 5nm process, it packs four trillion transistors, 900,000 AI-optimized compute cores, and 44GB of on-chip SRAM, delivering a peak AI performance of 125 petaflops – roughly equivalent to about 62 Nvidia H100 GPUs.
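As a rough sanity check on that comparison (assuming roughly 2 petaflops of sparse FP16 throughput per H100, a per-GPU figure that is not stated in the article), the arithmetic lands in the same ballpark:

    # Back-of-envelope check of the "about 62 H100s" comparison.
    # Assumption (not from the article): one H100 delivers roughly
    # 2 petaflops of FP16 throughput with sparsity enabled.
    wse3_peak_pflops = 125        # claimed WSE-3 peak AI performance
    h100_pflops_assumed = 2.0     # assumed per-GPU peak, FP16 with sparsity

    equivalent_gpus = wse3_peak_pflops / h100_pflops_assumed
    print(f"~{equivalent_gpus:.1f} H100-class GPUs")   # ~62.5, i.e. about 62 GPUs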

The CS-3 supercomputer, powered by the WSE-3, is purpose-built to train next-generation AI models up to ten times larger than GPT-4 and Gemini. With up to 1.2 petabytes of memory, it can hold 24-trillion-parameter models in a single logical memory space, simplifying training and boosting developer productivity.
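To put that capacity in perspective, a hedged back-of-envelope estimate (assuming roughly 16 bytes per parameter for mixed-precision weights, gradients, and Adam optimizer state, a figure Cerebras does not give) shows why 1.2 petabytes leaves headroom for a 24-trillion-parameter model:

    # Rough memory estimate for a 24-trillion-parameter model.
    # Assumption (not from the article): ~16 bytes per parameter covering
    # FP16 weights, FP16 gradients, and FP32 Adam optimizer state.
    params = 24e12                   # 24 trillion parameters
    bytes_per_param_assumed = 16     # assumed training footprint per parameter

    total_pb = params * bytes_per_param_assumed / 1e15
    print(f"~{total_pb:.2f} PB of parameter-related state")   # ~0.38 PB

Even before counting activations and working buffers, that sits comfortably inside the 1.2-petabyte envelope Cerebras quotes.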

CS-3 supercomputer

Cerebras says the CS-3 is designed to serve both enterprise and hyperscale needs, with superior power efficiency and software simplicity. According to the company, it requires 97% less code than GPUs to train large language models (LLMs).

Andrew Feldman, CEO and co-founder of Cerebras, said: "WSE-3 is the fastest AI chip in the world, purpose-built for the latest cutting-edge AI work, from mixture of experts to 24 trillion parameter models. We are excited to bring WSE-3 and CS-3 to market to tackle today's biggest AI challenges."

Cerebras has already received a substantial number of orders for the CS-3 from enterprise, government, and international cloud customers. The CS-3 will also play a central role in the company's partnership with G42, which has so far delivered 8 exaFLOPs of AI supercomputing via Condor Galaxy 1 and 2. A third installation, Condor Galaxy 3, is currently under construction and will comprise 64 CS-3 systems, delivering a further 8 exaFLOPs of AI compute.

