Cerebras Systems and G42, a UAE-based technology holding group, have jointly announced Condor Galaxy, a network of nine interconnected AI supercomputers.
The primary goal of the network is to transform AI computing by dramatically reducing AI model training time. The first supercomputer in the network, Condor Galaxy 1 (CG-1), delivers 4 exaFLOPs of AI compute and is equipped with 54 million cores.
Cerebras and G42 plan to deploy two further AI supercomputers, CG-2 and CG-3, in the United States in early 2024. Once all nine systems are online, the network’s planned total capacity of 36 exaFLOPs is expected to drive groundbreaking advances in AI on a global scale.
According to Talal Alkaissi, CEO of G42 Cloud, the subsidiary of G42: “Collaborating with Cerebras to rapidly deliver the world’s fastest AI training supercomputer and laying the foundation for interconnecting a constellation of these supercomputers across the world has been enormously exciting.
“This partnership brings together Cerebras’ extraordinary compute capabilities with G42’s multi-industry AI expertise. G42 and Cerebras’ shared vision is that Condor Galaxy will be used to address society’s most pressing challenges across healthcare, energy, climate action and more.”

Located in Santa Clara, California, CG-1 links 64 Cerebras CS-2 systems into a single, user-friendly AI supercomputer with 4 exaFLOPs of AI training capacity. Cerebras and G42 offer CG-1 as a cloud service, simplifying access to high-performance AI compute for customers without the need to manage or distribute models over physical systems.
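Those headline numbers imply a simple per-system breakdown (taking the published 850,000-core count of the CS-2’s Wafer-Scale Engine 2):

\[
64 \times 850{,}000 = 54.4\ \text{million cores} \approx 54\ \text{million}, \qquad \frac{4\ \text{exaFLOPs}}{64\ \text{systems}} = 62.5\ \text{petaFLOPs per CS-2}.
\]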
This is the first time Cerebras has both built and managed a dedicated AI supercomputer. CG-1 is designed to enable G42 and its cloud customers to train large, groundbreaking models efficiently, accelerating innovation in domains such as Arabic bilingual chat, healthcare, and climate studies.
“Delivering 4 exaFLOPs of AI compute at FP16, CG-1 dramatically reduces AI training timelines while eliminating the pain of distributed compute,” said Andrew Feldman, CEO of Cerebras Systems.
“Many cloud companies have announced massive GPU clusters that cost billions of dollars to build, but that are extremely difficult to use. Distributing a single model over thousands of tiny GPUs takes months of time from dozens of people with rare expertise. CG-1 eliminates this challenge.”
“Setting up a generative AI model takes minutes, not months, and can be done by a single person. CG-1 is the first of three 4 exaFLOP AI supercomputers to be deployed across the US. Over the next year, together with G42, we plan to expand this deployment and stand up a staggering 36 exaFLOPs of efficient, purpose-built AI compute,” he added.

G42’s work with diverse datasets across healthcare, energy, and climate studies aims to facilitate the training of cutting-edge foundation models.
CG-1, optimised for large language models and generative AI, offers native support for training with long sequence lengths and does not require complex distributed programming languages. It operates at 4 exaFLOPs of 16-bit AI compute and supports models of up to 600 billion parameters, extendable to 100 trillion parameters.
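For a sense of scale, the weights of a 600-billion-parameter model alone, stored at 16-bit precision (two bytes per parameter), occupy

\[
600 \times 10^{9} \times 2\ \text{bytes} = 1.2\ \text{TB},
\]

before any optimiser state or activations are counted.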
The Condor Galaxy network’s expansion plan includes two more AI supercomputers, CG-2 and CG-3, to be deployed in the US. These three supercomputers will form a distributed AI supercomputer with a total compute power of 12 exaFLOPs and 162 million cores.
The project aims to bring online six additional Condor Galaxy supercomputers in 2024, reaching a staggering total compute power of 36 exaFLOPs.
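The arithmetic behind those totals is straightforward:

\[
3 \times 4\ \text{exaFLOPs} = 12\ \text{exaFLOPs}, \qquad 3 \times 54\ \text{million cores} = 162\ \text{million cores}, \qquad 9 \times 4\ \text{exaFLOPs} = 36\ \text{exaFLOPs}.
\]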
Condor Galaxy takes its name from NGC 6872, a galaxy also known as the Condor Galaxy. Spanning an astounding 522,000 light-years, it is approximately five times larger than the Milky Way and lies in the constellation Pavo, some 212 million light-years from Earth.