NVidia today announced what it calls “the world’s largest GPU”: the gold-painted, mirror-finished Quadro GV100, undoubtedly a nod to its ray-tracing target market. The Quadro GV100 combines two V100 GPUs via NVLink2, carrying 32GB of HBM2 per GPU and 10,240 CUDA cores in total. NVidia advertises 236 TFLOPS of Tensor Core performance on top of the compute afforded by those 10,240 CUDA cores.
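The dual-GPU figures above can be sanity-checked with simple arithmetic; the sketch below uses only the numbers stated in this article (two GPUs, 10,240 CUDA cores combined, 32GB of HBM2 per GPU) and derives the per-GPU and aggregate values from them:

```python
# Sketch: deriving per-GPU and combined specs for the dual-GPU Quadro GV100,
# using only the figures quoted in the article above.
GPU_COUNT = 2
CUDA_CORES_TOTAL = 10_240   # combined across both V100s
HBM2_PER_GPU_GB = 32        # per-GPU HBM2 capacity

cores_per_gpu = CUDA_CORES_TOTAL // GPU_COUNT   # 5,120 cores per V100
total_hbm2_gb = HBM2_PER_GPU_GB * GPU_COUNT     # 64GB HBM2 across the pair

print(cores_per_gpu)  # 5120
print(total_hbm2_gb)  # 64
```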

Additionally, NVidia has upgraded its Tesla V100 products to 32GB by adding HBM2 stacks on the interposer. The V100 is NVidia’s accelerator card, primarily meant for scientific and machine learning workloads, and later gave way to the Titan V(olta). The V100 was the first GPU to use NVidia’s Volta architecture, initially shipping at 16GB – just like the Titan V – but with more targeted use cases. NVidia’s first big announcement for GTC was this addition of 16GB of VRAM to the V100, alongside a new “NVSwitch” (no, not that one) to increase the coupling capabilities of Tesla V100 accelerators. The V100 can now be bridged with a 2-billion transistor switch offering 18 ports, scaling up the GPU count per system.

