Intel Sapphire Rapids, new details on fourth generation scalable Xeons

Our colleagues at VideoCardz have published a slide from an alleged Intel roadmap, offering further details on the fourth generation scalable Xeon CPUs, known by the codename "Sapphire Rapids". The Santa Clara company has long positioned its Sapphire Rapids processors and the Eagle Stream platform as revolutionary products. Coupled with Intel Xe-HPC "Ponte Vecchio" GPUs, Sapphire Rapids CPUs will form the basis of Intel's first exascale supercomputer, built according to an AI + HPC paradigm.

Credit: VideoCardz

Sapphire Rapids CPUs will adopt a multi-chip module (or rather, multi-chiplet module) design, with four identical chips positioned next to each other and connected within a single package using Intel's EMIB technology. Each chip contains 14 Golden Cove cores, the same ones used in Alder Lake CPUs. However, these cores will feature numerous improvements over their desktop counterparts aimed at data centers and supercomputers. Specifically, Sapphire Rapids will support Advanced Matrix Extensions (AMX), AVX512_BF16 instructions for deep learning, Intel's Data Streaming Accelerator (DSA), architectural LBRs (last branch records) and HLAT (hypervisor-managed linear address translation). The maximum number of cores supported by Sapphire Rapids will be 56, but there will of course also be models with 44, 28 or even 24 cores.
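As a rough illustration of what AVX512_BF16 targets: bfloat16 keeps a float32's sign bit and full 8-bit exponent but only the top 7 mantissa bits, which is why it suits deep-learning workloads that value range over precision. A minimal pure-Python sketch of that bit layout (hardware instructions such as VCVTNE2PS2BF16 round to nearest-even rather than truncating, so this is illustrative only):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to bfloat16 by keeping its top 16 bits.

    Real AVX512_BF16 hardware rounds to nearest-even; plain truncation is
    shown here only to illustrate the bit layout.
    """
    bits32 = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits32 >> 16  # sign (1 bit), exponent (8 bits), top 7 mantissa bits

def bfloat16_bits_to_float32(bits16: int) -> float:
    """Widen a bfloat16 bit pattern back to float32 (exact, no rounding)."""
    return struct.unpack("<f", struct.pack("<I", bits16 << 16))[0]

# 3.140625 fits in 7 mantissa bits, so it survives the round trip exactly;
# 1.00390625 (1 + 2^-8) needs an 8th mantissa bit and collapses to 1.0.
exact = bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.140625))
lossy = bfloat16_bits_to_float32(float32_to_bfloat16_bits(1.00390625))
```

The precision loss in the second example is exactly the trade-off BF16 makes: the same dynamic range as float32, with roughly two to three decimal digits of precision.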

As for memory, Sapphire Rapids will support HBM2E, DDR5 and Optane Persistent Memory 300 series modules (codenamed Crow Pass). At least some Sapphire Rapids CPU models will support up to 64GB of HBM2E DRAM, offering 1TB/s of bandwidth per socket. The processors will also be equipped with eight channels of DDR5-4800 memory.

Intel Sapphire Rapids processors will be built using the company's 10nm Enhanced SuperFin technology. As for power consumption, their maximum TDP will reach 350W (up from Ice Lake-SP's 270W). Targeting a wide variety of workloads, the Eagle Stream platform will support one, two, four and eight LGA4677 sockets. The CPUs will use the UPI 2.0 interface, which will provide data transfer rates of up to 16GT/s, compared to the current 11.2GT/s. Each CPU will have up to four UPI 2.0 links.
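To put those UPI numbers in context, a quick back-of-envelope comparison. The 20-lane link width below is an assumption carried over from current UPI/QPI designs, not a figure from the slide:

```python
UPI_LANES = 20  # assumed lanes per link, per direction (not from the slide)

def upi_link_bw_gbs(rate_gt: float, lanes: int = UPI_LANES) -> float:
    """Raw per-direction link bandwidth in GB/s: GT/s x lanes / 8 bits per byte."""
    return rate_gt * lanes / 8

current_link = upi_link_bw_gbs(11.2)  # ~28 GB/s per direction today
upi2_link = upi_link_bw_gbs(16.0)     # ~40 GB/s per direction at 16GT/s
uplift_pct = (16.0 / 11.2 - 1) * 100  # ~43% faster signaling per lane
```

Whatever the real lane count turns out to be, the per-lane signaling rate alone is roughly a 43% uplift, which matters for multi-socket Eagle Stream configurations where cross-socket traffic rides these links.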

Credit: AdoredTV

As for the other enhancements, Intel Sapphire Rapids processors will support up to 80 PCIe 5.0 lanes (with x16, x8 and x4 bifurcation) at 32GT/s, and will remain backward compatible with PCIe 4.0 devices. In addition to PCIe Gen5, the CPUs will support the CXL 1.1 protocol to optimize CPU-to-device (for accelerators) and CPU-to-memory (for storage devices and memory expansions) communications.
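The per-lane arithmetic behind those PCIe 5.0 figures can be sketched as follows. PCIe 5.0, like 3.0 and 4.0, uses 128b/130b encoding, so usable bandwidth is slightly below the raw transfer rate:

```python
def pcie_lane_bw_gbs(rate_gt: float, enc_payload: int = 128, enc_frame: int = 130) -> float:
    """Usable per-lane bandwidth in GB/s: raw GT/s x encoding efficiency / 8 bits."""
    return rate_gt * enc_payload / enc_frame / 8

gen5_lane = pcie_lane_bw_gbs(32.0)  # ~3.94 GB/s per PCIe 5.0 lane
gen4_lane = pcie_lane_bw_gbs(16.0)  # ~1.97 GB/s per PCIe 4.0 lane
gen5_x16 = gen5_lane * 16           # ~63 GB/s for a full x16 slot
all_80_lanes = gen5_lane * 80       # ~315 GB/s aggregate I/O per socket
```

At 80 lanes, the aggregate I/O bandwidth of a single socket would exceed the eight-channel DDR5 bandwidth quoted earlier, which is part of why CXL-attached memory and accelerators become practical on this platform.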

Obviously, it is not possible to verify the authenticity of the slide published by VideoCardz, and we will probably have to wait some time for an official presentation of these CPUs, since they are not expected to reach the market for several months.







Sapphire Rapids CPU Leak: Up to 56 Cores, 64GB of Onboard HBM2


AMD has spent the last few years challenging Intel across the desktop, server, and mobile markets, but the gap between the two companies is arguably largest in server. At present, AMD ships up to 64 cores in a single socket, where Intel has only stepped up to shipping 40 cores this week with the launch of Ice Lake SP. Previous Intel Cascade Lake CPUs topped out at 28 cores. A new leak suggests Intel’s next-generation CPU platform, codenamed Sapphire Rapids, will finally seek to reduce some of the gaps between itself and AMD’s Epyc.


As always, take this leak with your daily dose of salt. This slide comes from VideoCardz and it builds on some data we’ve previously seen.


(Slide: Intel Xeon Sapphire Rapids specifications)


Sapphire Rapids, when it launches, will (supposedly) specify another TDP increase, up to 350W this time. AMD's current "Milan" CPUs top out at 280W, just like Rome. Memory support moves to DDR5, as expected, and the slide claims Sapphire Rapids offers 1TB/s of bandwidth on 64GB of HBM2E. We knew Sapphire Rapids was going to offer HBM2E as an option, but 64GB of on-package memory with 1TB/s of bandwidth is huge. It'd be really interesting to see how system performance scaling changes with this configuration compared with models without HBM2E.


A top-end Sapphire Rapids, if these rumors are accurate, would offer a small pool of ultra-high bandwidth memory, backed by a far larger pool of lower-bandwidth memory. An eight-channel DDR5 system using DDR5-4800 would offer 307.2GB/s of memory bandwidth to up to 4TB of RAM (assuming Intel retains existing Ice Lake SP limits).
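That 307.2GB/s figure is straightforward to reproduce. A minimal sketch, using the fact that DDR5-4800 means 4800 megatransfers per second over a 64-bit (8-byte) channel:

```python
def ddr_bandwidth_gbs(mt_per_s: int, channels: int, bus_bytes: int = 8) -> float:
    """Peak bandwidth in GB/s: transfers/s x bytes per transfer x channels.

    DDR5 splits each channel into two 32-bit subchannels, but the total
    data width per channel is still 64 bits (8 bytes).
    """
    return mt_per_s * bus_bytes * channels / 1000

eight_channel = ddr_bandwidth_gbs(4800, 8)  # 307.2 GB/s, matching the figure above
```

Against that, the rumored 1TB/s of HBM2E is more than triple the entire DDR5 subsystem's peak, albeit for only 64GB of capacity.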


Sapphire Rapids is said to feature up to 80 PCIe 5.0 lanes on some SKUs, with others limited to just 64 lanes. It’s a four-tile design. This meshes with what we’ve learned about Intel’s plan for tiles, which are roughly analogous to AMD’s chiplets, but with different strategies for I/O, package routing, and interconnects.


As for when these chips will be in-market, that’s a little hard to read right now. Intel has made noise about shipping Sapphire Rapids in 2021, but we’ve also heard that the chip wasn’t likely to launch before 2022. In the past, there used to be a large difference between TSMC and Intel when it came to the question of “volume production.” That difference is shrinking.


Intel would use the term only a few months before a chip went on sale, while TSMC might announce volume production as long as a year before chips became available to consumers. Intel claimed to be in volume production for Ice Lake SP in January 2021 and launched in April, but reports from Dell suggest servers with the CPU won’t be available until May, and that this is “in sync with Intel’s timelines.” A January volume announcement with May availability is a four-month delay. That’s longer than is typical for Intel.


As of this writing, we’re guessing Sapphire Rapids will sample in 2021 but not launch until 2022. It’ll compete in-market against a mixture of Milan and Genoa parts. Genoa is expected to be built on 5nm and to use AMD’s Zen 4 architecture. There are rumors of a further core count increase, up to 96 cores, but that may or may not be true.


With Zen 3, AMD focused on improving Infinity Fabric performance and clock speeds, but it wound up spending significantly more power on "uncore" activities than Rome did. The company could choose to focus on improving IF and CPU efficiency with Zen 4 and hold core counts equal, or it may opt to take advantage of 5nm's density improvements and push core counts once again. 96 cores with 12 memory channels and no HBM2E taking on 56 cores with HBM2E and eight memory channels? Sounds fascinating to us.
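A back-of-envelope look at what that matchup could mean for memory bandwidth per core, using only the rumored figures in this article (DDR5-4800 on both sides is our assumption, the 12-channel/96-core Genoa configuration is unconfirmed, and the ~1TB/s HBM2E number comes from the leaked slide, so all of this is speculative):

```python
def ddr_bw_gbs(mt_per_s: int, channels: int) -> float:
    """Peak DDR bandwidth in GB/s: transfers/s x 8 bytes x channels."""
    return mt_per_s * 8 * channels / 1000

genoa_bw = ddr_bw_gbs(4800, 12)        # rumored 12-channel DDR5: 460.8 GB/s
spr_bw = ddr_bw_gbs(4800, 8) + 1000.0  # 8-channel DDR5 + ~1TB/s HBM2E

genoa_per_core = genoa_bw / 96  # rumored Genoa core count
spr_per_core = spr_bw / 56      # top rumored Sapphire Rapids core count
```

On these (entirely speculative) numbers, an HBM2E-equipped Sapphire Rapids would have several times the peak bandwidth per core, while Genoa would counter with far more cores and far more total memory capacity.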


This slide also mentions third-generation Optane, aka Crow Pass, and claims bandwidth could be improved by up to 2.6x in mixed read/write scenarios. None of the news regarding Optane has been good lately, to the point that we’re watching to see if Crow Pass even comes to market. Assuming that it does, however, it looks like the memory standard will finally get a real performance kick. No word on whether Crow Pass supports PCIe 4.0 or PCIe 5.0, but Intel is clearly pushing to get Xeon back on a competitive footing. Ice Lake SP is a solid effort for Chipzilla, but it doesn’t entirely close the gap with AMD. Sapphire Rapids gives Intel another shot at doing so.

