NVIDIA introduces new Tesla P100 GPU accelerator for deep learning, HPC applications; NVIDIA DGX-1 deep learning supercomputer

At the NVIDIA GPU Technology Conference 2016, the company introduced the NVIDIA Tesla P100 GPU, the most advanced hyperscale data center accelerator ever built. The latest addition to the NVIDIA Tesla Accelerated Computing Platform, the Tesla P100 enables a new class of servers that can deliver the performance of hundreds of CPU server nodes.

NVIDIA also unveiled the NVIDIA DGX-1, the world’s first deep learning supercomputer to meet the unlimited computing demands of artificial intelligence, built on eight Tesla P100 GPU accelerators.

Tesla P100. Today’s data centers handle next-generation artificial intelligence and scientific applications inefficiently; these workloads demand ultra-efficient, lightning-fast server nodes. Based on the new NVIDIA Pascal GPU architecture with five breakthrough technologies, the Tesla P100 delivers unmatched performance and efficiency to power the most computationally demanding applications.

  1. NVIDIA Pascal architecture for exponential performance leap. A Pascal-based Tesla P100 solution delivers over a 12x increase in neural network training performance compared with a previous-generation NVIDIA Maxwell-based solution.

  2. NVIDIA NVLink for maximum application scalability. The NVIDIA NVLink high-speed GPU interconnect scales applications across multiple GPUs, delivering a 5x acceleration in bandwidth compared to today’s best-in-class solution. (NVLink delivers 160GB/sec of bi-directional interconnect bandwidth, compared to PCIe x16 Gen3 that delivers 31.5GB/sec of bi-directional bandwidth.)

    Up to eight Tesla P100 GPUs can be interconnected with NVLink to maximize application performance in a single node, and IBM has implemented NVLink on its POWER8 CPUs for fast CPU-to-GPU communication.

  3. 16nm FinFET for unprecedented energy efficiency. With 15.3 billion transistors built on 16 nanometer FinFET fabrication technology, the Pascal GPU is the world’s largest FinFET chip ever built. It is engineered to deliver the fastest performance and best energy efficiency for workloads with near-infinite computing needs.

  4. CoWoS with HBM2 for big data workloads. The Pascal architecture unifies processor and data into a single package to deliver unprecedented compute efficiency. An innovative approach to memory design, Chip on Wafer on Substrate (CoWoS) with HBM2, provides a 3x boost in memory bandwidth performance, or 720GB/sec, compared to the Maxwell architecture.

  5. New AI algorithms for peak performance. New half-precision instructions deliver more than 21 teraflops of peak performance for deep learning.
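As a back-of-the-envelope check, the "5x" interconnect claim in item 2 follows directly from the two bandwidth figures quoted in the article:

```python
# Sanity check of the NVLink bandwidth figures quoted above.
nvlink_bw = 160.0   # GB/s, bi-directional NVLink bandwidth, per the article
pcie_bw = 31.5      # GB/s, bi-directional PCIe x16 Gen3 bandwidth, per the article

speedup = nvlink_bw / pcie_bw
print(f"NVLink vs. PCIe x16 Gen3: {speedup:.1f}x")  # ~5.1x, consistent with the quoted "5x"
```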

The Tesla P100 GPU accelerator delivers a new level of performance for a range of HPC (high performance computing) and deep learning applications, including the AMBER molecular dynamics code, which runs faster on a single server node with Tesla P100 GPUs than on 48 dual-socket CPU server nodes.

Training the popular AlexNet deep neural network would take 250 dual-socket CPU server nodes to match the performance of eight Tesla P100 GPUs. The widely used weather forecasting application COSMO runs faster on eight Tesla P100 GPUs than on 27 dual-socket CPU servers.

The first accelerator to deliver more than 5 and 10 teraflops of double-precision and single-precision performance, respectively, the Tesla P100 provides a giant leap in processing capabilities and time-to-discovery for research across a broad spectrum of domains.

Specifications of the Tesla P100 GPU accelerator include:

  • 5.3 teraflops double-precision performance, 10.6 teraflops single-precision performance and 21.2 teraflops half-precision performance with NVIDIA GPU BOOST technology

  • 160GB/sec bi-directional interconnect bandwidth with NVIDIA NVLink

  • 16GB of CoWoS HBM2 stacked memory

  • 720GB/sec memory bandwidth with CoWoS HBM2 stacked memory

  • Enhanced programmability with page migration engine and unified memory

  • ECC protection for increased reliability

  • Server-optimized for highest data center throughput and reliability
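The three peak-performance figures in the spec list form a clean 1:2:4 ratio, reflecting that each halving of operand precision doubles peak arithmetic throughput on this architecture:

```python
# The quoted peak-performance figures scale by 2x per precision step.
fp64 = 5.3    # teraflops, double precision (from the spec list above)
fp32 = 10.6   # teraflops, single precision
fp16 = 21.2   # teraflops, half precision

assert fp32 == 2 * fp64
assert fp16 == 2 * fp32
print("Peak ratios FP64:FP32:FP16 =", fp64 / fp64, fp32 / fp64, fp16 / fp64)  # 1.0 2.0 4.0
```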

NVIDIA DGX-1. NVIDIA designed the DGX-1 for a new computing model to power the AI revolution that is sweeping across science, enterprises and increasingly all aspects of daily life. Powerful deep neural networks are driving a new kind of software created with massive amounts of data, which require considerably higher levels of computational performance.

The NVIDIA DGX-1 system includes a complete suite of optimized deep learning software that allows researchers and data scientists to train deep neural networks quickly and easily.

The DGX-1 software includes the NVIDIA Deep Learning GPU Training System (DIGITS), a complete, interactive system for designing deep neural networks (DNNs). It also includes the newly released NVIDIA CUDA Deep Neural Network library (cuDNN) version 5, a GPU-accelerated library of primitives for designing DNNs.

It also includes optimized versions of several widely used deep learning frameworks—Caffe, Theano and Torch. The DGX-1 additionally provides access to cloud management tools, software updates and a repository for containerized applications.
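To make the half-precision trade-off behind these frameworks concrete, here is a minimal, framework-agnostic sketch using NumPy's float16 type (an illustration, not NVIDIA code): FP16 halves the storage per value, which is why deep learning workloads use it for throughput, at the cost of reduced precision.

```python
import numpy as np

# FP16 storage is half of FP32: 2 bytes vs. 4 bytes per value.
weights32 = np.random.default_rng(0).standard_normal(1024).astype(np.float32)
weights16 = weights32.astype(np.float16)
print(weights32.nbytes, weights16.nbytes)  # 4096 2048

# The cost is precision: float16 has a 10-bit mantissa (~3 decimal digits),
# so values round noticeably on conversion.
x = np.float32(0.1234567)
print(np.float16(x))  # stored as the nearest representable half-precision value
```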

The NVIDIA DGX-1 system specifications include:

  • Up to 170 teraflops of half-precision (FP16) peak performance
  • Eight Tesla P100 GPU accelerators, 16GB memory per GPU
  • NVLink Hybrid Cube Mesh
  • 7TB SSD DL Cache
  • Dual 10GbE, Quad InfiniBand 100Gb networking
  • 3U rackmount form factor; 3,200W
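The headline "up to 170 teraflops" figure is simply eight P100 half-precision peaks combined:

```python
# DGX-1 peak FP16 performance = eight P100s' half-precision peaks.
p100_fp16 = 21.2         # teraflops per GPU, from the P100 spec list
gpus = 8
print(gpus * p100_fp16)  # 169.6 teraflops, i.e. "up to 170 teraflops" as quoted
```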

The Pascal-based NVIDIA Tesla P100 GPU accelerator will be generally available in the new NVIDIA DGX-1 deep learning system in June. It is also expected to be available from leading server manufacturers beginning in early 2017.

Comments

Floatplane

Other than having the word "Tesla" in the product name, I don't see why this article is relevant to the green car congress web site. Interesting though it is.

HarveyD

Wonder how these compare with IBM's new very-low-power chips. Will INTEL and others follow soon?

Tomorrow's ADVs will need fully redundant, ultra-fast computing units to interpret position data for a multitude of fixed and moving objects from moving sensors, and to quickly formulate appropriate drive commands for the vehicle.

Much development work remains to be done.
