Nvidia Announces Pascal-Based Tesla P100 GPU Accelerator for PCIe Servers

By Ajay Kadkol - 23 Jun '16 12:00PM

Nvidia has launched its Pascal architecture-based Tesla P100 GPU accelerator for PCIe servers. The company claims the Tesla P100 "delivers massive leaps in performance and value compared with CPU-based systems." It enables the creation of "super nodes" that provide the throughput of more than 32 commodity CPU-based nodes, and Nvidia promises up to 70 percent lower capital and operational costs.

This year Nvidia has delivered state-of-the-art graphics cards to the PC community, claiming up to three times the performance of the previous generation at roughly a 50 percent price increase. That is welcome news for gamers, since the new Pascal architecture powers not only games but also much of today's VR development.

"Deploying CPU-only systems to meet this demand would require large numbers of commodity compute nodes, leading to substantially increased costs without proportional performance gains. Dramatically scaling performance with fewer, more powerful Tesla P100-powered nodes puts more dollars into computing instead of vast infrastructure overhead," said Ian Buck, vice president of accelerated computing at Nvidia.

The Tesla P100 for PCIe is available in a standard PCIe form factor and is compatible with commonly-found GPU-accelerated servers. It is optimized to power the most computationally-intensive AI and HPC data center applications. A single Tesla P100-powered server delivers higher performance than 50 CPU-only server nodes when running the AMBER molecular dynamics code, and is faster than 32 CPU-only nodes when running the VASP material science application.

On the performance front, it is capable of delivering 4.7 teraflops and 9.3 teraflops of double-precision and single-precision peak performance, respectively. A single Pascal-based Tesla P100 node provides the equivalent performance of more than 32 commodity CPU-only servers. The Tesla P100 unifies processor and data into a single package for more efficiency.
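As a back-of-the-envelope check, those peak figures follow from the card's publicly reported specifications. The core count and boost clock below are assumptions taken from spec sheets, not from the article, so this is a rough sketch rather than an official calculation:

```python
# Rough peak-FLOPS check for the Tesla P100 (PCIe).
# Core count and boost clock are assumed from public spec sheets.
CUDA_CORES = 3584          # FP32 cores on the P100 PCIe card (assumed)
BOOST_CLOCK_HZ = 1.303e9   # ~1303 MHz boost clock (assumed)
FLOPS_PER_FMA = 2          # one fused multiply-add = 2 floating-point ops

sp_peak = CUDA_CORES * BOOST_CLOCK_HZ * FLOPS_PER_FMA  # single precision
dp_peak = sp_peak / 2  # FP64 units run at half the FP32 rate on GP100

print(f"SP peak: {sp_peak / 1e12:.1f} TFLOPS")  # ~9.3 TFLOPS
print(f"DP peak: {dp_peak / 1e12:.1f} TFLOPS")  # ~4.7 TFLOPS
```

The 2:1 ratio between the single- and double-precision numbers reflects GP100's half-rate FP64 units, which is why both figures line up with the article's 9.3 and 4.7 teraflops.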

"An innovative approach to memory design - chip on wafer on substrate (CoWoS) with HBM2 - provides a 3x boost in memory bandwidth performance, or 720 GB/sec, compared to the Nvidia Maxwell architecture," explained the company.
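The quoted 720 GB/sec figure can also be sanity-checked from the memory configuration. The bus width and memory clock below are assumptions based on public spec sheets for the PCIe card:

```python
# Rough check of the quoted 720 GB/sec HBM2 bandwidth.
# Bus width and memory clock are assumed from public spec sheets.
BUS_WIDTH_BITS = 4096    # four 1024-bit HBM2 stacks (assumed)
MEM_CLOCK_HZ = 703e6     # ~703 MHz memory clock (assumed)
TRANSFERS_PER_CLOCK = 2  # double data rate

bandwidth_bytes = (BUS_WIDTH_BITS / 8) * MEM_CLOCK_HZ * TRANSFERS_PER_CLOCK
print(f"{bandwidth_bytes / 1e9:.0f} GB/sec")  # ~720 GB/sec
```

The very wide 4096-bit interface is what HBM2's stacked-die, on-package design makes possible; a conventional GDDR5 bus is an order of magnitude narrower, which is where the claimed 3x bandwidth gain over Maxwell comes from.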

The accelerator is expected to be available in the fourth quarter of 2016 through Nvidia reseller partners and server manufacturers, including Cray, Dell, Hewlett Packard Enterprise, IBM and SGI.
