NVIDIA CEO Hands First Volta GPU Accelerators To 15 Top AI Researchers And Includes An Autograph

NVIDIA Tesla V100

Remember that feeling you would get when trick-or-treating at Halloween and scoring a full-size candy bar from a generous neighbor? It was like hitting the mother lode or finding gold at the end of the rainbow. As you get older, it takes more than a Snickers to get excited, but we imagine that same feeling emanated from a group of researchers who were surprised with a free Tesla V100 GPU accelerator from NVIDIA CEO Jensen Huang.

For several years now NVIDIA has been making a push into deep learning and artificial intelligence. The company's latest effort involved gathering elite deep learning researchers from 15 different companies at CVPR to unveil the Tesla V100, NVIDIA's latest GPU based on its Volta architecture. What the 15 attendees were not expecting was to walk home with one, but each of them did.

"AI is the most powerful technology force that we have ever known," said Jensen, who came dressed in what he calls his Aloha uniform—short sleeve dress shirt, white jeans, and Vans. "I’ve seen everything. I’ve seen the coming and going of the client-server revolution. I’ve seen the coming and going of the PC revolution. Absolutely nothing compares."

Autographed Tesla V100

Each Tesla V100 that was given out included his signature and an inscription on the box that read, "Do great AI!" It is a mighty accelerator featuring NVIDIA's Volta GV100 GPU, which itself packs an eye-popping 21.1 billion transistors on a die that measures 815mm². To put that into perspective, a full-fat Pascal GP100 GPU features 15.3 billion transistors on a 610mm² die. Each GV100 GPU is built on a 12nm FinFET manufacturing process by TSMC.

It also has 80 SMs with 40 TPCs for a total of 5,120 CUDA cores, compared to 3,584 CUDA cores on the GeForce GTX 1080 Ti. It can deliver 15 TFLOPS of FP32 compute performance and 7.5 TFLOPS of FP64 compute performance. All this is coupled with 16MB of cache and 16GB of HBM2 memory with 900GB/s of bandwidth on a 4,096-bit interface. And on top of it all, the Tesla V100 sports 640 Tensor cores that are specifically designed to speed up AI workloads. With those cores engaged, the Tesla V100 can push 120 TFLOPS of deep learning performance, which NVIDIA says is equivalent to 100 traditional CPUs. Compared to Pascal, the Tesla V100 offers a twelve-fold improvement in deep learning performance.
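For the curious, those headline numbers hang together arithmetically. Here is a quick back-of-the-envelope sketch in Python; the per-SM counts are from NVIDIA's Volta architecture details, while the boost clock is our assumption (roughly 1,455MHz), not a figure from this announcement:

```python
# Back-of-the-envelope check of the Tesla V100 figures quoted above.
sms = 80                   # streaming multiprocessors on the Tesla V100
cuda_cores_per_sm = 64     # Volta: 64 FP32 CUDA cores per SM
tensor_cores_per_sm = 8    # Volta: 8 Tensor cores per SM

cuda_cores = sms * cuda_cores_per_sm
tensor_cores = sms * tensor_cores_per_sm
print(cuda_cores)          # 5120 CUDA cores, as quoted
print(tensor_cores)        # 640 Tensor cores, as quoted

boost_clock_ghz = 1.455    # assumed boost clock, not stated in the article

# FP32 peak: each core can retire a fused multiply-add (2 ops) per cycle.
fp32_tflops = 2 * cuda_cores * boost_clock_ghz / 1000
print(round(fp32_tflops, 1))    # ~14.9, in line with the quoted 15 TFLOPS

# Each Tensor core performs a 4x4x4 matrix FMA per cycle:
# 64 multiplies + 64 adds = 128 ops.
tensor_tflops = 128 * tensor_cores * boost_clock_ghz / 1000
print(round(tensor_tflops, 1))  # ~119.2, in line with the quoted 120 TFLOPS
```

The same math explains the "twelve-fold" claim: 120 Tensor TFLOPS is roughly 12x the ~10.6 FP32 TFLOPS of a Pascal-based Tesla P100.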

NVIDIA Tesla V100 and AI Researchers

One of the researchers on hand at NVIDIA's event was Silvio Savarese, an associate professor of computer science at Stanford University and director of the school's SAIL-Toyota Center for AI Research. To him, receiving a signed box was like being gifted a bottle of fine wine.

"It's exciting, especially to get Jensen's signature," Savarese said. "My students will be even more excited."

This is good stuff from NVIDIA. Deep learning and AI are important fields that just about every major technology company is involved in. Companies are usually the ones to bask in the headlines when breakthroughs are made, but it is the researchers behind the scenes who make it happen. It is nice to see them acknowledged like this.