Each of the 18 ports runs at 50GB/s, with NVIDIA citing just under 1TB/s of aggregate bandwidth per switch. The ports are fully connected, so every device attached to the switch can communicate with every other, enabling more efficient deep learning and machine learning data processing. NVIDIA noted that, in the instance of oil and gas industry signal processing, 1k x 1k x 1k FFTs are now about 50% faster, coupled with lower error rates. The company also noted easy virtualization and scaling for users who may not be able to fully engage all 18 ports on the switch.
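The headline figure follows directly from the port count: 18 NVLink ports at 50GB/s each works out to 900GB/s per switch, which is the "just under 1TB/s" NVIDIA cites. A quick sanity check:

```python
# NVSwitch aggregate bandwidth: 18 NVLink ports at 50GB/s each
ports = 18
gb_per_s_per_port = 50

aggregate_gb_per_s = ports * gb_per_s_per_port
print(aggregate_gb_per_s)  # 900 GB/s, i.e. just under 1TB/s
```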
Related to this, NVIDIA has announced that its DGX-1 has been upgraded with 32GB GPUs.
The V100 supports 300GB/s of bandwidth for training. Aside from the memory overhaul, the rest of the architecture and GPU remain the same; this is purely a 2x memory capacity upgrade, with the NVSwitch shipping as a separate, new product.
NVIDIA's combined GPU features 16 Tesla V100s with 32GB each, all connected by the NVSwitch, capable of leveraging 512GB of HBM2 and 14.4TB/s aggregate bandwidth across 81,920 CUDA cores, with 2,000 TFLOPS of combined Tensor core performance.
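Those aggregate figures all scale straight from the per-GPU V100 specs. A minimal sketch, assuming the published per-V100 numbers of 5,120 CUDA cores and 125 Tensor TFLOPS:

```python
# Aggregate specs for 16x Tesla V100 32GB connected over NVSwitch
gpus = 16
hbm2_per_gpu_gb = 32           # 32GB HBM2 per upgraded V100
cuda_cores_per_gpu = 5120      # CUDA cores per V100
tensor_tflops_per_gpu = 125    # Tensor core TFLOPS per V100

print(gpus * hbm2_per_gpu_gb)        # 512 GB HBM2 total
print(gpus * cuda_cores_per_gpu)     # 81,920 CUDA cores
print(gpus * tensor_tflops_per_gpu)  # 2,000 TFLOPS Tensor
```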
We should have more GTC coverage as the event goes on.
Editorial: Steve Burke