GTC '14: NVIDIA Outs Pascal, Titan Z, Tegra Updates


GTX Titan Z, Iray VCA, Tegra Erista

GeForce GTX Titan Z:
Jen-Hsun also used his opening keynote to show off NVIDIA’s most powerful graphics card to date, the absolutely monstrous GeForce GTX Titan Z.

NVIDIA claims the Titan Z is designed for “next-generation 5K and multi-monitor gaming”. We haven’t seen any hard performance data just yet, but if the preliminary specifications are anything to go by, the Titan Z is going to have no trouble powering through the latest games, and resolutions beyond 4K should be no problem.

The upcoming GeForce GTX Titan Z is powered by a pair of GK110 GPUs, the same chips that power the GeForce GTX Titan Black and GTX 780 Ti. All told, the card features 5,760 CUDA cores (2,880 per GPU) and 12GB of frame buffer memory (6GB per GPU). NVIDIA also said that the Titan Z’s GPUs are tuned to run at the same clock speed, and feature dynamic power balancing so neither GPU creates a performance bottleneck. The company also claims the card runs cool and quiet, thanks in part to low-profile components and ducted baseplate channels that minimize turbulence and improve the acoustic qualities of the cooler.
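To CUDA software, a dual-GPU board like the Titan Z simply appears as two separate devices. As a quick illustration (a generic sketch, not NVIDIA sample code), the short CUDA program below uses the runtime API to enumerate every GPU it can see and print its SM count and memory; on a Titan Z it should report two GK110 devices with 6GB apiece:

    // devquery.cu: enumerate CUDA devices; a dual-GPU card like the
    // Titan Z shows up as two entries. Build with: nvcc devquery.cu
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("Device %d: %s, %d SMs, %.1f GB\n", i, prop.name,
                   prop.multiProcessorCount,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        }
        return 0;
    }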

Jen-Hsun said this of the GeForce GTX Titan Z, “If you’re in desperate need of a supercomputer that you need to fit under your desk, we have just the card for you!” And NVIDIA will only charge you $2,999 for one of these monsters. We’ll take two, thank you...


NVIDIA Iray VCA:
Also unveiled at GTC was the NVIDIA Iray VCA. According to NVIDIA, the Iray Visual Computing Appliance (VCA) “combines hardware and software to greatly accelerate the work of NVIDIA Iray -- a photorealistic renderer integrated into leading design tools like Dassault Systèmes' CATIA and Autodesk's 3ds Max.”

Because the appliance is scalable, multiple units can be linked together, speeding up the simulation of light bouncing off surfaces in the real world, i.e. ray tracing. Each Iray VCA features 8 GPUs (totaling roughly 23,000 cores), each paired with 12GB of memory. And the appliances themselves are linked via 10GbE and InfiniBand.
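At its heart, ray tracing is billions of independent ray-versus-geometry intersection tests, which is exactly the sort of embarrassingly parallel work that scales across GPU cores and, in the VCA’s case, across appliances. As a minimal, hypothetical sketch of one such test (Iray’s actual renderer is of course far more sophisticated), the CUDA kernel below assigns one ray per thread and intersects it with a unit sphere:

    #include <cuda_runtime.h>

    struct Vec3 { float x, y, z; };

    __device__ float dot(Vec3 a, Vec3 b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // One thread per ray: intersect ray (origin o, normalized direction d)
    // with a unit sphere at the origin. hit[i] receives the nearest hit
    // distance, or -1.0f if the ray misses the sphere.
    __global__ void intersectSphere(const Vec3 *o, const Vec3 *d,
                                    float *hit, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float b = 2.0f * dot(o[i], d[i]);   // quadratic t^2 + b*t + c = 0
        float c = dot(o[i], o[i]) - 1.0f;   // sphere radius = 1
        float disc = b * b - 4.0f * c;
        hit[i] = (disc < 0.0f) ? -1.0f : 0.5f * (-b - sqrtf(disc));
    }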

To demonstrate the capabilities of Iray VCA, Jen-Hsun brought out an engineer from Honda who showed off a highly detailed Honda Accord being rendered in real time, using 19 VCAs. "For our styling design requirements, we developed specialized tools that run alongside our RTT global standard platform," said Daisuke Ide, system engineer at Honda Research and Development. "Our TOPS tool, which uses NVIDIA Iray on our NVIDIA GPU cluster, enables us to evaluate our original design data as if it were real. This allows us to explore more designs so we can create better designs faster and more affordably."

Iray VCA systems will be available sometime this summer through certified system integrators, including CADnetwork, Fluidyna, IGI, and migenius. The appliance will be priced at $50,000 in North America, and will include an Iray license and the first year of maintenance and updates.

Machine Learning:
Another topic discussed at GTC was machine learning. Jen-Hsun talked about a number of companies, including Adobe, Baidu, Netflix, and Yandex, that use NVIDIA CUDA-based GPU accelerators to search and analyze huge datasets, providing things like intelligent image analysis and personalized movie recommendations.

Machine learning algorithms are used to train computers to essentially teach themselves by sifting through mountains of data and making intelligent comparisons. For example, a machine learning computer can learn to identify a fox by analyzing lots of images of dogs, ferrets, jackals, raccoons, and other animals, including foxes, in much the same way that humans learn. A demo featuring photos of random dogs tweeted to the presenter showed how a machine learning algorithm running on an array of NVIDIA Teslas could very quickly identify each dog's actual breed, not simply identify the animal as a dog.
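Under the hood, this sort of classification comes down to huge numbers of arithmetic comparisons between feature vectors, which is why it maps so well onto thousands of GPU cores. As a loose illustration only (the real systems Jen-Hsun described use deep neural networks rather than this simple scheme, and the names here are hypothetical), the CUDA kernel below scores a query image's feature vector against a set of per-breed prototype vectors; the breed with the smallest distance would be the prediction:

    #include <cuda_runtime.h>

    // One thread per candidate class: compute the squared Euclidean
    // distance between the query image's feature vector and that class's
    // prototype vector. The class with the smallest distance "wins."
    __global__ void classDistances(const float *query, const float *prototypes,
                                   float *dist, int numClasses, int dim) {
        int c = blockIdx.x * blockDim.x + threadIdx.x;
        if (c >= numClasses) return;
        float sum = 0.0f;
        for (int j = 0; j < dim; ++j) {
            float diff = prototypes[c * dim + j] - query[j];
            sum += diff * diff;
        }
        dist[c] = sum;
    }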

Jetson TK1 and Erista:
No GTC would be complete without discussing Tegra and its roadmap. NVIDIA’s CEO also used the opening keynote address to show off the Tegra K1-based Jetson TK1 devkit and announce Erista, the codename for a future Tegra-branded SoC. (Perhaps the Tegra M1?)

The $192 Jetson TK1 devkit features a Tegra K1 SoC and includes 2GB of memory and I/O connectors for USB 3.0, HDMI 1.4, Gigabit Ethernet, audio, SATA, miniPCIe, and an SD card slot. The Jetson TK1 Developer Kit also includes a full C/C++ toolkit that leverages NVIDIA CUDA technology, and it supports NVIDIA’s VisionWorks toolkit as well, which provides a rich set of computer vision and image processing algorithms. The Jetson TK1 is designed to bring CUDA’s capabilities to areas such as robotics, augmented reality, computational photography, human-computer interfaces, and advanced driver assistance systems (ADAS).
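Much of the vision work the TK1 targets starts with simple per-pixel kernels. As a taste of what a CUDA image-processing kernel on the Jetson might look like (a hypothetical example for flavor; VisionWorks ships its own optimized primitives), here is a basic RGB-to-grayscale conversion using the standard Rec. 601 luma weights:

    #include <cuda_runtime.h>

    // One thread per pixel: convert interleaved 8-bit RGB to 8-bit
    // grayscale with the Rec. 601 weights (0.299, 0.587, 0.114).
    __global__ void rgbToGray(const unsigned char *rgb, unsigned char *gray,
                              int width, int height) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;
        int i = y * width + x;
        gray[i] = (unsigned char)(0.299f * rgb[3 * i] +
                                  0.587f * rgb[3 * i + 1] +
                                  0.114f * rgb[3 * i + 2]);
    }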

There weren’t many details given on Erista, but Jen-Hsun did say that it was “right around the corner” and that it would feature a Maxwell-based GPU. Through enhancements to the CPU and GPU architectures, and presumably its manufacturing process, Erista should deliver higher performance and better energy efficiency than any previous Tegra SoC. Availability wasn’t specifically discussed, but the roadmap slide showed Erista arriving sometime before the end of 2014 or in early 2015.

We'll be out here covering GTC for the next few days, so stay tuned to HotHardware for more news from the conference as it breaks.