Tom’s Hardware
Anton Shilov

China Builds Exascale Supercomputer with 19.2 Million Cores


After the U.S. government imposed crippling sanctions against select Chinese high-tech and supercomputer companies through 2019 and 2020, firms like Huawei had to halt chip development, as it is impossible to build competitive processors without access to leading-edge nodes. But Jiangnan Computing Lab, which develops Sunway processors, and the National Supercomputing Center in Wuxi kept building new supercomputers and recently even submitted results from their latest machine for the Association for Computing Machinery's Gordon Bell Prize.

The new Sunway supercomputer built by the National Supercomputing Center in Wuxi (an entity blacklisted in the U.S.) features approximately 19.2 million cores across 49,230 nodes, reports Supercomputing.org. To put that number into context, Frontier, the world's highest-performing supercomputer, uses 9,472 nodes and consumes 21 MW of power. Meanwhile, the National Supercomputing Center in Wuxi does not disclose the power consumption of its latest system.
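As a back-of-the-envelope sanity check on the scale comparison above (simple arithmetic using only the node counts quoted in this article, which are reported figures rather than independently verified ones):

```python
# Node counts as reported in the article; not independently verified.
sunway_nodes = 49_230    # new Sunway system
frontier_nodes = 9_472   # Frontier

# The new system uses roughly five times as many nodes as Frontier.
ratio = sunway_nodes / frontier_nodes
print(f"Node-count ratio: ~{ratio:.1f}x")  # ~5.2x
```

Node count alone says nothing about delivered performance, of course, since per-node capability and interconnect efficiency differ between the two machines.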

Interestingly, the new supercomputer appears to be based on the already-known 390-core Sunway processors that derive from the Sunway SW26010 CPUs and have been around since 2021. The new system therefore increases the number of processors, but not their architectural efficiency, so its power consumption is likely to be gargantuan. Meanwhile, the actual performance of the machine is unknown, since scaling out has its limits even in the supercomputer world.
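The reported figures are at least self-consistent with one 390-core processor per node; a quick arithmetic check (assuming only the core and node counts quoted above):

```python
# Figures as reported in the article.
total_cores = 19_200_000   # ~19.2 million cores
nodes = 49_230

# ~390 cores per node matches one 390-core Sunway CPU per node.
cores_per_node = total_cores / nodes
print(f"Cores per node: ~{cores_per_node:.0f}")  # ~390
```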

The National Supercomputing Center in Wuxi has not disclosed performance numbers for its new supercomputer, and it is hard to make any estimates at this point. We call it 'exascale' because its predecessor, the Sunway Oceanlite from 2021, was estimated to offer around 1 ExaFLOPS of compute performance.

Meanwhile, the engineers revealed the workload the machine was used for. Apparently, the group created a new code for large eddy simulations to address compressible flows in turbomachinery. They applied it to NASA's grand challenge problem using an advanced unstructured solver for a high-pressure turbine cascade with 1.69 billion mesh elements and 865 billion degrees of freedom (variables).
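For a sense of the problem size, dividing the quoted degrees of freedom by the quoted mesh-element count gives the average number of solution variables carried per element (simple arithmetic on the reported figures only):

```python
# Problem size as reported in the article.
mesh_elements = 1.69e9        # 1.69 billion mesh elements
degrees_of_freedom = 865e9    # 865 billion unknowns (variables)

# Average unknowns solved for per mesh element.
dof_per_element = degrees_of_freedom / mesh_elements
print(f"Average DOF per mesh element: ~{dof_per_element:.0f}")  # ~512
```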

Given how complex the simulation is, the machine is likely quite powerful indeed. However, there is no word on whether the simulation was conducted at FP64 precision, or whether precision was sacrificed for the sake of performance.
