
Volta is the codename, but not the trademark, for a GPU microarchitecture developed by Nvidia, succeeding Pascal. It was first announced on a roadmap in March 2013, although the first product was not announced until May 2017. The architecture is named after the 18th–19th century Italian chemist and physicist Alessandro Volta. It was Nvidia's first chip to feature Tensor Cores, specially designed cores with superior deep-learning performance compared with regular CUDA cores. The architecture is produced on TSMC's 12 nm FinFET process. Volta was succeeded by the Ampere microarchitecture.

The first graphics card to use it was the datacenter Tesla V100, e.g. as part of the Nvidia DGX-1 system. It has also been used in the Quadro GV100 and Titan V. There were no mainstream GeForce graphics cards based on Volta.

After two USPTO proceedings, Nvidia's application to register Volta as a trademark in the field of artificial intelligence was refused on July 3, 2023. The Volta trademark remains with Volta Robots, a company specializing in AI and vision algorithms for robots and unmanned vehicles.

Details

Architectural improvements of the Volta architecture include the following:

  • CUDA Compute Capability 7.0, with concurrent execution of integer and floating-point operations
  • TSMC's 12 nm FinFET process, allowing 21.1 billion transistors
  • High Bandwidth Memory 2 (HBM2)
  • NVLink 2.0: a high-bandwidth bus between the CPU and GPU, and between multiple GPUs. It allows much higher transfer speeds than PCI Express, estimated at 25 Gbit/s per lane. (Disabled for the Titan V.)
  • Tensor cores: a tensor core is a unit that multiplies two 4×4 FP16 matrices and then adds a third FP16 or FP32 matrix to the result using fused multiply–add operations, obtaining an FP32 result that can optionally be demoted to an FP16 result. Tensor cores are intended to speed up the training of neural networks. Volta's tensor cores are first generation, while Ampere has third-generation tensor cores.
  • PureVideo Feature Set I hardware video decoding
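The tensor-core operation described above (D = A×B + C, with FP16 inputs and FP32 accumulation) can be sketched numerically. This is a minimal model of the arithmetic semantics only, not the hardware API; the matrix values are illustrative:

```python
import struct

def fp16(x):
    """Round a float to IEEE half precision (models FP16 storage)."""
    return struct.unpack('e', struct.pack('e', x))[0]

def tensor_core_mma(a, b, c):
    """Model one tensor-core fused multiply-add: D = A x B + C for 4x4 matrices.

    a and b hold FP16 values; products and sums accumulate at FP32 or wider
    (modeled here with Python floats, which do not round at this scale).
    """
    return [[sum(a[i][k] * b[k][j] for k in range(4)) + c[i][j]
             for j in range(4)] for i in range(4)]

a = [[fp16(0.5)] * 4 for _ in range(4)]                              # FP16 inputs
b = [[fp16(1.0 if i == j else 0.0) for j in range(4)] for i in range(4)]  # identity
c = [[1.0] * 4 for _ in range(4)]                                    # FP32 accumulator
d = tensor_core_mma(a, b, c)
print(d[0][0])  # 1.5 (0.5 x 1 + 1.0)
```

The key point the model captures is that the inputs are quantized to FP16 before the multiply, while accumulation happens at higher precision, which is what makes tensor cores usable for neural-network training.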

Comparison of Compute Capability: GP100 vs GV100 vs GA100

| GPU features | Nvidia Tesla P100 | Nvidia Tesla V100 | Nvidia A100 |
| --- | --- | --- | --- |
| GPU codename | GP100 | GV100 | GA100 |
| GPU architecture | Nvidia Pascal | Nvidia Volta | Nvidia Ampere |
| Compute capability | 6.0 | 7.0 | 8.0 |
| Threads / warp | 32 | 32 | 32 |
| Max warps / SM | 64 | 64 | 64 |
| Max threads / SM | 2048 | 2048 | 2048 |
| Max thread blocks / SM | 32 | 32 | 32 |
| Max 32-bit registers / SM | 65536 | 65536 | 65536 |
| Max registers / block | 65536 | 65536 | 65536 |
| Max registers / thread | 255 | 255 | 255 |
| Max thread block size | 1024 | 1024 | 1024 |
| FP32 cores / SM | 64 | 64 | 64 |
| Ratio of SM registers to FP32 cores | 1024 | 1024 | 1024 |
| Shared memory size / SM | 64 KB | Configurable up to 96 KB | Configurable up to 164 KB |
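The register limits in the table bound how many warps an SM can keep resident. A back-of-the-envelope occupancy calculation (the 32- and 64-registers-per-thread kernels are hypothetical examples, not figures from the source):

```python
# Per-SM limits shared by GP100, GV100, and GA100 (see table above).
REGS_PER_SM = 65536
MAX_WARPS_PER_SM = 64
WARP_SIZE = 32

def register_limited_occupancy(regs_per_thread):
    """Fraction of the SM's maximum resident warps that fit,
    given a kernel's per-thread register usage."""
    threads = REGS_PER_SM // regs_per_thread
    warps = min(threads // WARP_SIZE, MAX_WARPS_PER_SM)
    return warps / MAX_WARPS_PER_SM

print(register_limited_occupancy(32))  # 1.0 -> all 2048 threads fit
print(register_limited_occupancy(64))  # 0.5 -> only 1024 threads fit
```

This is why the "ratio of SM registers to FP32 cores" row matters: at 1024 registers per core, a kernel using more than 32 registers per thread cannot reach full occupancy on any of these chips.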

Comparison of Precision Support Matrix

Supported CUDA core precisions:

| Model | FP16 | FP32 | FP64 | INT1 | INT4 | INT8 | TF32 | BF16 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Nvidia Tesla P4 | No | Yes | Yes | No | No | Yes | No | No |
| Nvidia P100 | Yes | Yes | Yes | No | No | No | No | No |
| Nvidia Volta | Yes | Yes | Yes | No | No | Yes | No | No |
| Nvidia Turing | Yes | Yes | Yes | No | No | No | No | No |
| Nvidia A100 | Yes | Yes | Yes | No | No | Yes | No | Yes |

Supported Tensor core precisions:

| Model | FP16 | FP32 | FP64 | INT1 | INT4 | INT8 | TF32 | BF16 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Nvidia Tesla P4 | No | No | No | No | No | No | No | No |
| Nvidia P100 | No | No | No | No | No | No | No | No |
| Nvidia Volta | Yes | No | No | No | No | No | No | No |
| Nvidia Turing | Yes | No | No | Yes | Yes | Yes | No | No |
| Nvidia A100 | Yes | No | Yes | Yes | Yes | Yes | Yes | Yes |


Comparison of Decode Performance

| Concurrent streams | H.264 decode (1080p30) | H.265 (HEVC) decode (1080p30) | VP9 decode (1080p30) |
| --- | --- | --- | --- |
| V100 | 16 | 22 | 22 |
| A100 | 75 | 157 | 108 |

Products

Volta was also announced as the GPU microarchitecture within the Xavier generation of Tegra SoCs, which is aimed at self-driving cars.

At Nvidia's annual GPU Technology Conference keynote on May 10, 2017, Nvidia officially announced the Volta microarchitecture along with the Tesla V100. The Volta GV100 GPU is built on a 12 nm process size using HBM2 memory with 900 GB/s of bandwidth.
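The quoted 900 GB/s follows from the HBM2 configuration: a 4096-bit bus (four stacks of 1024 bits) at 1.75 Gbit/s per pin, divided by 8 bits per byte. A quick check:

```python
# Peak memory bandwidth = bus width x per-pin data rate / 8 bits per byte.
bus_width_bits = 4096    # four HBM2 stacks x 1024 bits each
pin_rate_gbit_s = 1.75   # effective data rate per pin
bandwidth_gb_s = bus_width_bits * pin_rate_gbit_s / 8
print(bandwidth_gb_s)  # 896.0, quoted as "900 GB/s"
```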

Nvidia officially announced the Nvidia TITAN V on December 7, 2017.

Nvidia officially announced the Quadro GV100 on March 27, 2018.

| Model | Nvidia Titan V | Nvidia Quadro GV100 | Nvidia Titan V CEO Edition |
| --- | --- | --- | --- |
| Launch | December 7, 2017 | March 27, 2018 | June 21, 2018 |
| Code name(s) | GV100-400-A1 | GV100 | GV100 |
| Fab | TSMC 12 nm | TSMC 12 nm | TSMC 12 nm |
| Transistors (billion) | 21.1 | 21.1 | 21.1 |
| Die size (mm²) | 815 | 815 | 815 |
| Bus interface | PCIe 3.0 ×16 | PCIe 3.0 ×16 | PCIe 3.0 ×16 |
| Core config | 5120:320:96 | 5120:320:128 | 5120:320:128 |
| Tensor cores | 640 | 640 | 640 |
| SM count | 80 | 80 | 80 |
| Graphics processing clusters | 6 | 6 | 6 |
| L2 cache (MiB) | 4.5 | 6 | 6 |
| Base core clock (MHz) | 1200 | 1132 | 1200 |
| Boost clock (MHz) | 1455 | 1628 | 1455 |
| Memory (MT/s) | 1700 | 1696 | 1700 |
| Pixel fillrate (GP/s) | 139.7 | 208.4 | 186.2 |
| Texture fillrate (GT/s) | 465.6 | 521 | 465.6 |
| Memory size (GiB) | 12 | 32 | 32 |
| Memory bandwidth (GB/s) | 652.8 | 868.4 | 870.4 |
| Memory bus type | HBM2 | HBM2 | HBM2 |
| Memory bus width (bit) | 3072 | 4096 | 4096 |
| Single precision, base (boost) GFLOPS | 12288 (14899) | 11592 (16671) | 12288 (14899) |
| Double precision, base (boost) GFLOPS | 6144 (7450) | 5796 (8335) | 6144 (7450) |
| Half precision, base (boost) GFLOPS | 24576 (29798) | 23183 (33341) | 24576 (29798) |
| TDP (Watts) | 250 | 250 | 250 |
| NVLink support | No | Yes | N/A |
| Launch price (MSRP, USD) | $2,999 | $8,999 | N/A |
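The processing-power figures above follow directly from core count and clock: each CUDA core can issue one fused multiply-add (2 FLOPs) per cycle, with FP64 at half rate and FP16 at double rate on GV100. For the Titan V at its 1455 MHz boost clock:

```python
def peak_gflops(cuda_cores, clock_mhz, flops_per_core_per_clock=2):
    # One fused multiply-add = 2 FLOPs per CUDA core per clock.
    return cuda_cores * flops_per_core_per_clock * clock_mhz / 1000

fp32 = peak_gflops(5120, 1455)  # FP32 at full rate
print(round(fp32))      # 14899 GFLOPS -> table's boost FP32 figure
print(round(fp32 / 2))  # 7450  GFLOPS -> FP64 (half rate)
print(round(fp32 * 2))  # 29798 GFLOPS -> FP16 (double rate)
```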

Application

Volta is also used in the Summit and Sierra supercomputers for GPGPU compute. There, the Volta GPUs connect to the POWER9 CPUs via NVLink 2.0, which supports cache coherency and therefore improves GPGPU performance.

V100 accelerator and DGX V100

Comparison of accelerators used in DGX:

| Model | P100 | V100 16GB | V100 32GB | A100 40GB | A100 80GB | H100 | H200 | B100 | B200 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Architecture | Pascal | Volta | Volta | Ampere | Ampere | Hopper | Hopper | Blackwell | Blackwell |
| Socket | SXM/SXM2 | SXM2 | SXM3 | SXM4 | SXM4 | SXM5 | SXM5 | SXM6 | SXM6 |
| FP32 CUDA cores | 3584 | 5120 | 5120 | 6912 | 6912 | 16896 | 16896 | N/A | N/A |
| FP64 cores (excl. tensor) | 1792 | 2560 | 2560 | 3456 | 3456 | 4608 | 4608 | N/A | N/A |
| Mixed INT32/FP32 cores | N/A | N/A | N/A | 6912 | 6912 | 16896 | 16896 | N/A | N/A |
| INT32 cores | N/A | 5120 | 5120 | N/A | N/A | N/A | N/A | N/A | N/A |
| Boost clock | 1480 MHz | 1530 MHz | 1530 MHz | 1410 MHz | 1410 MHz | 1980 MHz | 1980 MHz | N/A | N/A |
| Memory clock | 1.4 Gbit/s HBM2 | 1.75 Gbit/s HBM2 | 1.75 Gbit/s HBM2 | 2.4 Gbit/s HBM2 | 3.2 Gbit/s HBM2e | 5.2 Gbit/s HBM3 | 6.3 Gbit/s HBM3e | 8 Gbit/s HBM3e | 8 Gbit/s HBM3e |
| Memory bus width | 4096-bit | 4096-bit | 4096-bit | 5120-bit | 5120-bit | 5120-bit | 6144-bit | 8192-bit | 8192-bit |
| Memory bandwidth | 720 GB/sec | 900 GB/sec | 900 GB/sec | 1.52 TB/sec | 2.0 TB/sec | 3.35 TB/sec | 4.8 TB/sec | 8 TB/sec | 8 TB/sec |
| VRAM | 16 GB HBM2 | 16 GB HBM2 | 32 GB HBM2 | 40 GB HBM2 | 80 GB HBM2e | 80 GB HBM3 | 141 GB HBM3e | 192 GB HBM3e | 192 GB HBM3e |
| Single precision (FP32) | 10.6 TFLOPS | 15.7 TFLOPS | 15.7 TFLOPS | 19.5 TFLOPS | 19.5 TFLOPS | 67 TFLOPS | 67 TFLOPS | N/A | N/A |
| Double precision (FP64) | 5.3 TFLOPS | 7.8 TFLOPS | 7.8 TFLOPS | 9.7 TFLOPS | 9.7 TFLOPS | 34 TFLOPS | 34 TFLOPS | N/A | N/A |
| INT8 (non-tensor) | N/A | 62 TOPS | 62 TOPS | N/A | N/A | N/A | N/A | N/A | N/A |
| INT8 dense tensor | N/A | N/A | N/A | 624 TOPS | 624 TOPS | 1.98 POPS | 1.98 POPS | 3.5 POPS | 4.5 POPS |
| INT32 | N/A | 15.7 TOPS | 15.7 TOPS | 19.5 TOPS | 19.5 TOPS | N/A | N/A | N/A | N/A |
| FP4 dense tensor | N/A | N/A | N/A | N/A | N/A | N/A | N/A | 7 PFLOPS | 9 PFLOPS |
| FP16 | 21.2 TFLOPS | 31.4 TFLOPS | 31.4 TFLOPS | 78 TFLOPS | 78 TFLOPS | N/A | N/A | N/A | N/A |
| FP16 dense tensor | N/A | 125 TFLOPS | 125 TFLOPS | 312 TFLOPS | 312 TFLOPS | 990 TFLOPS | 990 TFLOPS | 1.98 PFLOPS | 2.25 PFLOPS |
| bfloat16 dense tensor | N/A | N/A | N/A | 312 TFLOPS | 312 TFLOPS | 990 TFLOPS | 990 TFLOPS | 1.98 PFLOPS | 2.25 PFLOPS |
| TensorFloat-32 (TF32) dense tensor | N/A | N/A | N/A | 156 TFLOPS | 156 TFLOPS | 495 TFLOPS | 495 TFLOPS | 989 TFLOPS | 1.2 PFLOPS |
| FP64 dense tensor | N/A | N/A | N/A | 19.5 TFLOPS | 19.5 TFLOPS | 67 TFLOPS | 67 TFLOPS | 30 TFLOPS | 40 TFLOPS |
| Interconnect (NVLink) | 160 GB/sec | 300 GB/sec | 300 GB/sec | 600 GB/sec | 600 GB/sec | 900 GB/sec | 900 GB/sec | 1.8 TB/sec | 1.8 TB/sec |
| GPU | GP100 | GV100 | GV100 | GA100 | GA100 | GH100 | GH100 | GB100 | GB100 |
| L1 cache | 1344 KB (24 KB × 56) | 10240 KB (128 KB × 80) | 10240 KB (128 KB × 80) | 20736 KB (192 KB × 108) | 20736 KB (192 KB × 108) | 25344 KB (192 KB × 132) | 25344 KB (192 KB × 132) | N/A | N/A |
| L2 cache | 4096 KB | 6144 KB | 6144 KB | 40960 KB | 40960 KB | 51200 KB | 51200 KB | N/A | N/A |
| TDP | 300 W | 300 W | 350 W | 400 W | 400 W | 700 W | 1000 W | 700 W | 1000 W |
| Die size | 610 mm² | 815 mm² | 815 mm² | 826 mm² | 826 mm² | 814 mm² | 814 mm² | N/A | N/A |
| Transistor count | 15.3 B | 21.1 B | 21.1 B | 54.2 B | 54.2 B | 80 B | 80 B | 208 B | 208 B |
| Process | TSMC 16FF+ | TSMC 12FFN | TSMC 12FFN | TSMC N7 | TSMC N7 | TSMC 4N | TSMC 4N | TSMC 4NP | TSMC 4NP |
| Launched | Q2 2016 | Q3 2017 | Q3 2017 | Q1 2020 | Q1 2020 | Q3 2022 | Q3 2023 | Q4 2024 | Q4 2024 |
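The V100's 125 TFLOPS FP16 tensor figure in the table follows from the hardware configuration: 640 tensor cores, each completing one 4×4×4 matrix multiply-accumulate (64 FMAs, i.e. 128 FLOPs) per clock, at the 1530 MHz boost clock:

```python
def tensor_fp16_tflops(tensor_cores, boost_mhz):
    # One 4x4x4 MMA per tensor core per clock = 64 FMAs = 128 FLOPs.
    flops_per_clock = 4 * 4 * 4 * 2
    return tensor_cores * flops_per_clock * boost_mhz / 1e6

print(round(tensor_fp16_tflops(640, 1530)))  # 125 -> V100's tensor TFLOPS
```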
