Cerebras Systems is unveiling Andromeda, a 13.5 million-core artificial intelligence (AI) supercomputer that can operate at more than an exaflop for AI applications.
The system is built from servers containing wafer-scale “chips,” each with hundreds of thousands of cores, yet it takes up far less space and delivers far more compute than ordinary servers built around standard central processing units (CPUs).
Sunnyvale, California-based Cerebras has a radically different way of building chips. Most chips are built on a 12-inch silicon wafer, which is processed with chemicals to embed circuit designs on rectangular sections of the wafer. Those wafers are then sliced into individual chips. Cerebras instead uses a huge rectangular section of a wafer to create a single massive chip with 850,000 processing cores, said Andrew Feldman, CEO of Cerebras, in an interview with VentureBeat.
Andromeda can do an exaflop in AI computing. “It’s one of the largest AI supercomputers ever built,” Feldman said. “It has an exaflop of AI compute, 120 petaflops of dense compute. It’s 16 CS-2s with 13.5 million cores. Just to give you an idea, the largest computer on earth, Frontier, has 8.7 million cores.”
By contrast, Advanced Micro Devices’ high-end 4th Gen Epyc server processor has one chip (and six memory chiplets) with just 96 cores. All told, the Andromeda supercomputer reaches its 13.5 million cores by combining 16 Cerebras CS-2 wafer-based systems into a single cluster.
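The core counts quoted above can be sanity-checked with simple arithmetic; this is a back-of-the-envelope sketch using only the figures reported in the article (the comparison ratio is just illustrative):

```python
# Back-of-the-envelope check of the figures quoted in the article.
CORES_PER_CS2 = 850_000   # cores on one Cerebras wafer-scale chip
NUM_CS2 = 16              # CS-2 systems clustered in Andromeda

total_cores = CORES_PER_CS2 * NUM_CS2
print(total_cores)        # 13,600,000 -- reported as "13.5 million"

# For comparison, Frontier's core count as quoted by Feldman:
frontier_cores = 8_700_000
print(total_cores / frontier_cores)  # Andromeda has ~1.6x the cores
```

Note the small discrepancy: 16 × 850,000 is 13.6 million, which the company rounds down to 13.5 million in its marketing figures.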
“Customers are already training these large language models [LLMs] — the largest of the language models — from scratch, so we have customers doing training on unique and interesting datasets, which would have been prohibitively time-consuming and expensive on GPU clusters,” Feldman said.
Andromeda also uses Cerebras MemoryX and SwarmX technologies to achieve more than one exaflop of AI compute — that is, more than a billion-billion (a 1 followed by 18 zeroes) operations per second. It can also do 120 petaflops (a 1 followed by 15 zeroes) of dense computing at 16-bit half precision.
Andromeda, pictured with the doors closed, is a 13.5-million-core AI supercomputer.
The company unveiled the tech at the SC22 supercomputing show. While this supercomputer is very powerful, it doesn’t qualify for the Top500 list of supercomputers because it doesn’t support 64-bit double precision, said Feldman. Still, it is the only AI supercomputer to ever demonstrate near-perfect linear scaling on LLM workloads relying on simple data parallelism alone, he said.
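“Near-perfect linear scaling” under data parallelism means that splitting a training batch across N systems yields close to N times the throughput of one system. A toy model of that claim is sketched below; the rates and efficiency value are hypothetical, not Cerebras figures:

```python
# Toy model of "linear scaling" under simple data parallelism:
# each of n_systems processes 1/n of the global batch, so aggregate
# throughput grows roughly n times. All numbers here are hypothetical.

def data_parallel_throughput(single_system_rate, n_systems, efficiency=1.0):
    """Aggregate samples/sec when a batch is split across n_systems.

    efficiency < 1.0 models communication/synchronization overhead;
    "near-perfect linear scaling" means efficiency stays close to 1.0
    even as n_systems grows.
    """
    return single_system_rate * n_systems * efficiency

base = 100.0  # hypothetical samples/sec on a single system
for n in (1, 4, 16):
    print(n, data_parallel_throughput(base, n, efficiency=0.99))
```

In practice, GPU clusters often fall well short of efficiency = 1.0 at large scale because gradient synchronization traffic grows with the number of devices; the article’s claim is that Andromeda avoids that falloff.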
“What we’ve been telling people all year is that we want to build clusters to demonstrate linear scaling across clusters,” Feldman said. …