The latest supercomputing announcement comes from the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab: a machine that combines deep learning with formidable simulation capabilities. Inside are AMD's top-of-the-range server CPUs, the 64-core EPYC 7763, while graphics processing has been entrusted to Nvidia's A100 GPUs. By combining these components, the researchers arrived at 180 "standard" PetaFLOPS and a peak of roughly 4 ExaFLOPS of AI performance, enough to place this supercomputer second among the fastest in the world, behind only Fugaku, the Japanese supercomputer.
NERSC director Sudip Dosanjh stated that "Perlmutter will support a greater range of applications than previous NERSC systems, and is the first NERSC supercomputer designed from the start to meet the needs of both simulation and data analysis." Perlmutter is based on HPE's heterogeneous Cray Shasta architecture and will be delivered in two separate phases.
In the first phase, 12 heterogeneous cabinets comprise 1,536 nodes: each node contains a 64-core AMD EPYC 7763 "Milan" CPU, 256 GB of DDR4 SDRAM, and four 40 GB Nvidia A100 GPUs connected via NVLink. A 35 PB all-flash storage system will deliver 5 TB/s of throughput. During this first phase, Perlmutter will be able to deliver 60 FP64 PetaFLOPS for simulations and roughly 3.8 FP16 ExaFLOPS for data analysis and deep learning.
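As a back-of-the-envelope check, these phase-one figures follow from Nvidia's published A100 datasheet peaks (9.7 TFLOPS for standard FP64; 624 TFLOPS for FP16 Tensor Core throughput with structured sparsity), assuming the FP16 figure counts the sparsity-enabled peak, which is what makes the numbers line up. The short Python sketch below uses the node counts from the article; it is an illustration, not official NERSC accounting.

```python
# Back-of-the-envelope check of the phase-one peaks using Nvidia's
# published A100 datasheet figures (per-GPU peaks in TFLOPS).
NODES = 1536
GPUS_PER_NODE = 4

A100_FP64_TFLOPS = 9.7    # standard FP64 pipeline
A100_FP16_TFLOPS = 624.0  # FP16 Tensor Core peak with structured sparsity

total_gpus = NODES * GPUS_PER_NODE                  # 6,144 GPUs
fp64_pflops = total_gpus * A100_FP64_TFLOPS / 1e3   # TFLOPS -> PFLOPS
fp16_eflops = total_gpus * A100_FP16_TFLOPS / 1e6   # TFLOPS -> EFLOPS

print(f"FP64 peak: {fp64_pflops:.1f} PFLOPS")  # ~59.6, i.e. the quoted ~60 PFLOPS
print(f"FP16 peak: {fp16_eflops:.2f} EFLOPS")  # ~3.83, matching the ~3.8 EFLOPS figure
```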
As if that were not enough, the system expands its performance further in the second phase: 3,072 CPU-only nodes will be added, each with 512 GB of memory, dedicated to simulations. FP64 performance in the second phase is expected to reach 120 PFLOPS, for a total of 180 PFLOPS when combined with the first phase. An impressive figure, but still far from the record held by Fugaku, which sits in first place with 442 PFLOPS.
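The headline 180 PFLOPS figure is then simply the sum of the two phases, as this trivial continuation of the sketch shows (figures taken from the article itself):

```python
# Combined FP64 peak across both delivery phases.
phase1_fp64_pflops = 60   # GPU-accelerated cabinets, phase one
phase2_fp64_pflops = 120  # CPU-only expansion, phase two (expected)

total = phase1_fp64_pflops + phase2_fp64_pflops
print(f"Perlmutter combined: {total} PFLOPS vs. Fugaku's 442 PFLOPS")  # 180 PFLOPS
```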