Cerebras and Core42 start phase 2 for world’s largest AI supercomputers

Key Takeaways:

– Cerebras Systems and Core42 are constructing Condor Galaxy 2, the second machine in the world’s largest network of interconnected AI supercomputers.
– The second phase of the Condor Galaxy network aims to achieve 36 exaFLOPs AI compute capacity.
– The partnership between Cerebras and Core42 provides huge amounts of compute, vast datasets, and specialized AI expertise.
– The supercomputers use entire silicon wafers to print cores, resulting in a total of 54 million cores in a single supercomputer.
– Cerebras and Core42 have doubled compute capacity and achieved 50% enhancement in training performance through software updates.
– The Condor Galaxy constellation is used for cutting-edge research in healthcare, energy, climate change, and AI-based studies.
– Plans are in place to deploy seven more supercomputers in 2024, reaching a total compute power of 36 exaFLOPs.
– Cerebras and Core42 are revolutionizing AI advancements and accelerating the pace of innovation.
– Export controls are in place for the powerful machines, requiring collaboration with the U.S. Department of Commerce.

VentureBeat:


Cerebras Systems and Core42 have started phase two of the world’s largest network of interconnected AI supercomputers with the construction of Condor Galaxy 2.

The Cerebras supercomputers for accelerating generative AI will hit up to 36 exaFLOPs through the partnership with Core42, a subsidiary of the Abu Dhabi-based technology holding group G42.

The partners have announced the initiation of the second phase of the Condor Galaxy network. This ambitious network, comprising nine interconnected supercomputers, aims to reach a staggering milestone of 36 exaFLOPs AI compute capacity.

The completion of Condor Galaxy 1 has paved the way for the initiation of Condor Galaxy 2 (CG-2). This second phase, projected to achieve four exaFLOPs and incorporate 54 million AI-optimized compute cores, will expand the Condor Galaxy network to a total of eight exaFLOPs and 108 million cores upon completion. This advancement signifies the commitment of Cerebras and Core42 to constructing a constellation of AI supercomputers with unprecedented compute power.
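The arithmetic behind these totals is straightforward; a quick sketch using only the figures quoted in the article (the assumption that CG-1 matches CG-2’s core count follows from the stated 108-million-core total):

```python
# Sanity check on the Condor Galaxy figures quoted above.
cg1_exaflops = 4          # Condor Galaxy 1, completed
cg2_exaflops = 4          # Condor Galaxy 2, projected
cg2_cores = 54_000_000    # AI-optimized compute cores in CG-2

# CG-1 and CG-2 together
assert cg1_exaflops + cg2_exaflops == 8               # eight exaFLOPs
assert 2 * cg2_cores == 108_000_000                    # 108 million cores

# The full nine-machine constellation
machines = 9
assert machines * cg2_exaflops == 36                   # the 36-exaFLOP target
```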


Rather than making individual chips for its central processing units (CPUs), Cerebras takes entire silicon wafers, which are the size of a pizza, and prints its cores directly on them. Each wafer holds the equivalent of hundreds of chips, with many cores apiece. And that’s how they get to 54 million cores in a single supercomputer.
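A rough back-of-the-envelope makes the 54-million-core figure plausible. Note the cores-per-wafer number below is not stated in this article; it is the roughly 850,000-core count Cerebras has publicly cited for its WSE-2 wafer-scale engine, the chip inside each CS-2 system:

```python
# Back-of-the-envelope for the ~54 million core figure.
# Assumption (not from this article): ~850,000 cores per Cerebras
# WSE-2 wafer, the figure publicly cited for the CS-2 system.
cores_per_wafer = 850_000
systems_per_machine = 576 // 9   # 576 CS-2s across nine machines

total_cores = systems_per_machine * cores_per_wafer
print(f"{total_cores:,}")        # 54,400,000 -- matching the ~54 million quoted
```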

“Our strategic partnership with Cerebras Systems is propelling us toward our collective vision of establishing the world’s largest and fastest AI supercomputers,” stated Talal Alkaissi, chief product and global partnerships officer at Core42, in a statement. “Core42 handles massive and diverse datasets across healthcare, energy, and climate studies that challenge even the largest existing AI supercomputing systems. Training on CG-1 while building out CG-2 allows the training of cutting-edge foundational models, advancing critical research across various domains.”

The partnership between G42 and Cerebras delivers on all three elements required for training large models: huge amounts of compute, vast datasets, and specialized AI expertise. They are democratizing AI, enabling simple and easy access to the industry’s leading AI compute, and G42’s work with diverse datasets across healthcare, energy, and climate studies will enable users of the systems to train new cutting-edge foundational models.

Andrew Feldman, CEO of Cerebras, said in an interview with VentureBeat that the collaboration between Cerebras and Core42 has not only led to the doubling of compute capacity in Condor Galaxy-1 but also a 50% enhancement in training performance through software updates.

“What we’re announcing is a continuation of the success of this extraordinary partnership. We announced the partnership with Core42. In July, we announced we built a supercomputer,” Feldman said. “We built it on time as scheduled. We announced that we would build nine of them, and we’ve started our second, and we’re training models that are moving the entire industry forward. We’re creating, for the 400 million Arabic speakers, a generative AI model in their own language.”

Andrew Jackson, chief AI officer at Core42, said in a statement that the exceptional efficiency and ease of use experienced while working with Cerebras CS-2s mark a substantial leap in the rate of innovation from concept to solution.

Andrew Feldman is CEO of Cerebras Systems.

The application of the Condor Galaxy constellation for cutting-edge research spans crucial sectors such as healthcare, energy, climate change, and AI-based studies. The introduction of Med42, a leading generative AI medical assistant, and Jais 30B, the premier Arabic language LLM, demonstrates the groundbreaking strides made in pioneering work on CG-1. Additionally, AI-based climate studies and advancements in high-performance computing have been central to the pioneering work facilitated by the Condor Galaxy.

While the supercomputing show takes place this week, Cerebras isn’t showing up on the Top500 list yet, as the standard used for that list is a 64-bit double-precision benchmark, which is not an AI test. But Feldman noted that the company’s machine is still one of the largest supercomputers in the world.

“In flops measured, we’re one of the largest in the world with just one and we’ve tied two together, and eventually all the way up to nine together,” Feldman said. “So the obvious question is what cool work can we do on this machine?”

The company is training AI models, and one of its models has more than a billion downloads on Hugging Face. It can be used as a medical assistant, and it is being used at a university in Saudi Arabia in a supercomputer that aims to set a world record for seismic processing.

“We used our equipment at Argonne National Labs to accelerate particle transport, and they published that we were 130 times faster than Nvidia GPUs. Then, at the end of August, we announced that we had built a 13-billion-parameter Arabic language model, and this was the state-of-the-art model,” Feldman said. “One of the things we’re announcing today is that we more than doubled the size of this model in less than eight weeks. And we’re putting into the open source community the 30-billion-parameter Arabic model. It’s head and shoulders above any other Arabic language model.”

Core42 is based in Abu Dhabi.

Plans to deploy seven more supercomputers—CG-3 through CG-9—in 2024 will contribute to achieving a total compute power of 36 exaFLOPs. This extensive constellation of supercomputers, involving 576 Cerebras CS-2 systems and over 654,000 AMD CPU cores, is set to revolutionize AI advancements on a global scale.
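The per-machine breakdown of that constellation follows from the figures above; a small sketch, dividing the quoted totals evenly across the nine machines:

```python
# Scale of the full Condor Galaxy constellation, per the quoted figures.
cs2_systems = 576        # total Cerebras CS-2 systems
amd_cpu_cores = 654_000  # supporting AMD CPU cores feeding the cluster
machines = 9             # CG-1 through CG-9

print(cs2_systems // machines)                 # 64 CS-2 systems per supercomputer
print(round(amd_cpu_cores / cs2_systems))      # ~1,135 AMD CPU cores per CS-2
```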

“With CG-1 now complete, we’re already seeing the impact and important contributions that this strategic partnership delivers,” said Feldman. “In partnership with Core42, we are changing the worldwide inventory of compute and using our combined expertise to advance AI work in a powerful way, to quickly and efficiently train leading LLMs.”

This is the first time Cerebras has partnered not only to build a dedicated AI supercomputer but also to manage and operate it. Condor Galaxy is designed to enable Core42 and its cloud customers to train large, ground-breaking models quickly and easily, thereby accelerating the pace of innovation.


Access to CG-1 is available now.

The first machine comprises 64 CS-2 systems delivering four exaFLOPs, as does the second, which will be completed in the first quarter. The company is in the planning phase for the third machine.

Asked about export controls, Feldman said that with such powerful machines the company has to work with the U.S. Department of Commerce. When the company ships equipment to the Middle East, it requires an export license. The company does not currently ship anything to China.



AI Eclipse TLDR:

Cerebras Systems and Core42 have announced the initiation of the second phase of the construction of Condor Galaxy 2, the world’s largest interconnected AI supercomputers. The network, comprising nine interconnected supercomputers, aims to achieve a milestone of 36 exaFLOPs AI compute capacity. The completion of Condor Galaxy 1 has paved the way for the second phase, projected to achieve four exaFLOPs and incorporate 54 million AI-optimized compute cores. The partnership between Cerebras and Core42 aims to democratize AI and provide simple access to leading AI compute. The companies are working on various projects in healthcare, energy, climate studies, and AI-based research. The collaboration has already led to a doubling of compute capacity and a 50% enhancement in training performance. The companies plan to deploy seven more supercomputers in 2024, bringing the total compute power of Condor Galaxy to 36 exaFLOPs.