NVIDIA GeForce RTX 4090 Benchmark Leak Shows A 60 Percent Gain Over RTX 3090 Ti

[Image: GeForce RTX 4090 slide]
It's the first of October, which means there are only 11 days left before you can buy your own GeForce RTX 4090—assuming your pockets are deep enough, anyway. If you've got the dosh but are on the fence regarding whether to dole it out for a dominant GPU, here's one more data point for you: a Geekbench 5 leak.

[Image: GeForce RTX 4090 Geekbench 5 Compute (CUDA) result]

Today's leak comes courtesy of the tearless retina of the Benchleaks bot on Twitter. Somebody with an ASRock X670E Taichi Carrera and a Ryzen 9 7950X (already quite an enviable setup) slapped a GeForce RTX 4090 into the machine and ran Geekbench's compute benchmark using CUDA. They did it at least twice, and the two results were very close, with a maximum performance of 424,332 points.

We don't normally test Geekbench Compute because we have other benchmarks for compute workloads. However, a quick perusal of the Geekbench database turns up the result below for a GeForce RTX 3090 Ti:

[Image: GeForce RTX 3090 Ti Geekbench 5 Compute (CUDA) result]

There are higher RTX 3090 Ti results in the database, but this one seems to sit in the range where most of the CUDA tests fall. Comparing it to the RTX 4090 gives us roughly a 60% uplift over the previous-generation part. That's an absolutely crazy uplift in performance, and good news for folks who use CUDA-based software for production work.
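For what it's worth, the math on that uplift is straightforward: divide the new score by the old one and subtract one. A quick sketch in Python (the RTX 3090 Ti score of 265,000 below is an assumed round figure consistent with the roughly-60% gap, not an exact number pulled from the database):

```python
# Percentage uplift = (new score / old score) - 1.
# NOTE: rtx_3090_ti_score is an assumed figure in line with the
# roughly-60% gap described above, not an exact database entry.
rtx_4090_score = 424_332   # best of the leaked Geekbench 5 CUDA runs
rtx_3090_ti_score = 265_000

uplift = rtx_4090_score / rtx_3090_ti_score - 1
print(f"Uplift: {uplift:.1%}")  # roughly 60%
```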

Naturally, a huge portion of the uplift comes from the fact that the RTX 4090 is a much wider GPU than the RTX 3090 Ti, with 16,384 CUDA cores to the RTX 3090 Ti's 10,752: slightly more than half-again the count. However, the RTX 4090 is also running at a much higher clock rate: 2.58 GHz, compared to the 1.92 GHz clock of the previous-generation part.

[Image: Ampere vs. Ada Lovelace specifications comparison]
Given that theoretical compute more than doubled, maybe this result isn't so great after all?
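That "more than doubled" figure falls out of the usual peak-FP32 formula: CUDA cores × 2 FLOPs per clock × frequency. Sketching it with the published core counts and the clocks reported in these Geekbench listings:

```python
def peak_fp32_tflops(cuda_cores: int, clock_ghz: float) -> float:
    """Theoretical peak FP32 throughput: each CUDA core can retire
    two FP32 operations (one fused multiply-add) per clock."""
    return cuda_cores * 2 * clock_ghz / 1000  # TFLOPS

rtx_4090 = peak_fp32_tflops(16_384, 2.58)     # ~84.5 TFLOPS
rtx_3090_ti = peak_fp32_tflops(10_752, 1.92)  # ~41.3 TFLOPS
print(f"Theoretical ratio: {rtx_4090 / rtx_3090_ti:.2f}x")  # ~2.05x
```

A ratio of about 2.05x in theoretical throughput against a measured 1.6x in Geekbench is why the result looks less flattering on second glance.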

Keep in mind that these are CUDA-based compute tests, and they have little to no correlation with graphics performance. Many of the innovations in the Ada Lovelace architecture, such as Shader Execution Reordering, won't benefit the new chip in this kind of test. Likewise for the tensor and RT core improvements.

How impressive these results actually are also depends somewhat on the power consumption of the parts, which we'll have to test for ourselves. Look for our review of the RTX 4090 in the coming weeks!