Valve Manages To Wring A Decent VR Experience Out Of A 4-Year-Old GeForce GTX 680 In GDC Demo

With the Oculus Rift, HTC's Vive and Sony's PlayStation VR soon to hit the market, we're about to enter the modern era of Virtual Reality. We've been talking a lot about the various headsets over the past couple of months, as well as other parts of the VR ecosystem. A consistent message we've heard is that to enjoy VR to its fullest on the PC, a powerful system is going to be needed, especially with respect to graphics. As unfortunate as it might be, VR workloads are taxing, and they generally require more expensive hardware under the hood, on top of the cost of the VR kit itself with its head-mounted display and cameras.

If you're sporting a fairly modern higher-end GPU, you're probably fine for VR. If you want to be sure, you can take advantage of the benchmark that Valve released a couple of weeks ago. If you want a "great" experience, though, you're going to need a modern high-end GPU, or even two if you're planning to go really high-end.

HTC Vive Kit

But let's back this virtual truck up for a minute, because Valve's Alex Vlachos held a talk at GDC last week that illustrated that modern VR doesn't have to require top-of-the-line computational horsepower.

If you don't mind getting your hands dirty, you can peruse Alex's full slide deck at the URL below - but be warned: it gets deep. Ultimately, what Alex's work has revolved around is getting VR to run on older, legacy GPUs. In this case, his target was the NVIDIA GeForce GTX 680, a card that came out almost exactly four years ago. How on Earth could an old GPU like that support VR? By adapting, of course.

Alex Vlachos Adaptive VR

The solution Alex came up with was one where a piece of VR software could adjust itself on-the-fly to better suit the hardware it's given, whether high-end or low-end. In the slide above, we can see that at the simplest level, this could mean decreasing anti-aliasing or lowering the render resolution in order to get VR to work on modest hardware. While that might seem simple, the process of figuring out what should be shown in the render is actually quite complicated (as Alex's presentation attests).
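To make that idea concrete, here's a minimal sketch of what a quality ladder along those lines could look like: each step pairs an anti-aliasing setting with a render-target scale that the system can move between. The structure, level count and values below are our own illustration, not settings pulled from Valve's slides.

```cpp
// Hypothetical quality ladder: each step trades MSAA samples and
// render-target scale for GPU time. Values are illustrative only.
#include <array>

struct QualityLevel {
    int   msaaSamples;       // MSAA sample count for the eye buffers
    float renderTargetScale; // multiplier on the headset's recommended resolution
};

// Ordered from cheapest (index 0) to most expensive.
constexpr std::array<QualityLevel, 5> kQualityLadder = {{
    {2, 0.7f},   // low: reduced resolution, light AA
    {2, 0.85f},
    {4, 1.0f},   // baseline: native recommended resolution
    {4, 1.2f},   // spare headroom: supersample above native
    {8, 1.4f},   // high-end: heavy AA plus supersampling
}};
```

An adaptive system would then step up or down a ladder like this depending on how much GPU headroom it measures each frame.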

There are two main goals with this adaptive approach: to reduce the chance of dropped frames, and to increase image quality when there are idle GPU cycles. Thanks to these mechanisms, if enough GPU power is available, the quality can adjust itself upwards; likewise, the inverse can also happen. Again, these adjustments are made on-the-fly, during gameplay. If you're using low-end hardware, for example, and are looking at an area that requires little GPU computation, the adaptive algorithms could automatically increase the detail level. In effect, regardless of where you are looking in game, Alex's algorithms figure out how to render the best possible combination of image quality and framerate.
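In rough terms, that feedback loop boils down to comparing the GPU's frame time against the roughly 11.1 ms budget a 90 Hz headset allows. The sketch below is our own simplified take on that idea; the function name and the 90%/70% hysteresis thresholds are assumptions, not numbers from the talk.

```cpp
// Simplified frame-time feedback loop in the spirit of adaptive quality.
// Thresholds and names here are assumptions, not Valve's actual implementation.
#include <algorithm>
#include <cstddef>

constexpr float kFrameBudgetMs = 1000.0f / 90.0f;  // ~11.1 ms per frame at 90 Hz

// Given the GPU time of the last frame, pick the next quality level.
std::size_t nextQualityLevel(float gpuFrameTimeMs, std::size_t level, std::size_t maxLevel) {
    if (gpuFrameTimeMs > kFrameBudgetMs * 0.9f) {
        // Close to missing the frame: back off immediately.
        return (level > 0) ? level - 1 : 0;
    }
    if (gpuFrameTimeMs < kFrameBudgetMs * 0.7f) {
        // Idle GPU cycles available: spend them on image quality.
        return std::min(level + 1, maxLevel);
    }
    return level;  // hold steady to avoid oscillating between levels
}
```

Each frame, an engine would feed something like this a GPU timer-query result and then apply the anti-aliasing and render-target scale settings that correspond to the chosen level.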

If you're thinking "if only this could be a real thing and not just a demo," it will be shortly. In a few weeks, Valve will be making its "adaptive quality" plugin available for Unity, so game developers can take advantage of it. Valve deserves some major kudos here, because it could have very well kept the tech exclusive to Source 2 or other Valve titles, and we probably would have understood that decision. It didn't have to give the plugin away for free, but it is. And it could be a huge help in offering better VR experiences on mainstream hardware.