As the months go by, the expected presentation and market debut of NVIDIA graphics cards based on the Ada Lovelace architecture, which will make up the RTX 40 series, draws closer. Over the last few weeks we have reported various rumors about this new generation, according to which the flagship, presumably called RTX 4090, will use the AD102 GPU with 144 Streaming Multiprocessors, for a total of 18,432 CUDA cores, coupled with 24GB of GDDR6X memory and 96MB of L2 cache. In addition, it appears that the card will need more power to operate, but thanks to the 12-pin PCI Express 5.0 connector it will be possible to deliver up to 600W.
Photo Credit: NVIDIA

Recently, the well-known leaker kopite7kimi published the potential memory configurations of the GeForce RTX 4090 and RTX 4070. Specifically, the RTX 4090 should be equipped, as previously mentioned, with 24GB of GDDR6X running at 21Gb/s and, assuming a 384-bit bus is used, this means it would have a maximum bandwidth of about 1TB/s, only 7.7% more than today's RTX 3090.
RTX 4090, PG137 / 139-SKU330, AD102-300, 21Gbps 24G GDDR6X, 600W
RTX 4070, PG141-SKU341, AD104-400, 18Gbps 12G GDDR6, 300W
- kopite7kimi (@kopite7kimi) May 10, 2022
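The bandwidth figure above follows directly from the rumored specs: bus width in bytes multiplied by the per-pin data rate. A minimal sketch of that arithmetic, assuming the leaked 21Gbps/384-bit configuration for the RTX 4090 and the RTX 3090's official 19.5Gbps/384-bit spec:

```python
def memory_bandwidth_gbs(bus_width_bits: int, speed_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width / 8 bits per byte) * data rate per pin."""
    return bus_width_bits / 8 * speed_gbps

# Rumored RTX 4090: 384-bit bus, 21 Gbps GDDR6X
rtx_4090 = memory_bandwidth_gbs(384, 21.0)   # 1008 GB/s, i.e. roughly 1 TB/s

# RTX 3090 (official spec): 384-bit bus, 19.5 Gbps GDDR6X
rtx_3090 = memory_bandwidth_gbs(384, 19.5)   # 936 GB/s

uplift = (rtx_4090 / rtx_3090 - 1) * 100     # ~7.7% more bandwidth

print(f"RTX 4090: {rtx_4090:.0f} GB/s")
print(f"RTX 3090: {rtx_3090:.0f} GB/s")
print(f"Uplift:   {uplift:.1f}%")
```

Which is where the "only 7.7% more" figure in the leak comes from: the rumored card gains bandwidth solely from the faster memory chips, not from a wider bus.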
For the moment, we just have to wait a few more weeks to receive further official details from NVIDIA.