Lawrence Livermore and IBM collaborate on new supercomputer based on TrueNorth neurosynaptic chip; accelerating path to exascale computing
29 March 2016
Lawrence Livermore National Laboratory (LLNL) will receive a first-of-a-kind brain-inspired supercomputing platform for deep learning developed by IBM Research. Based on a breakthrough neurosynaptic computer chip called IBM TrueNorth (earlier post), the scalable platform will process the equivalent of 16 million neurons and 4 billion synapses and consume the energy equivalent of a hearing aid battery—a mere 2.5 watts of power.
The technology represents a fundamental departure from the roughly 70-year-old von Neumann architecture underlying today’s computer designs, and could be a powerful complement in the development of next-generation supercomputers able to perform at exascale speeds, roughly 50 times faster than today’s most advanced petaflop (quadrillion floating point operations per second) systems. Like the human brain, neurosynaptic systems require far less electrical power and physical volume than conventional architectures.
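The speed comparison can be sanity-checked with simple arithmetic. The ~20-petaflop baseline below is an assumption on our part (the article does not name the reference system), chosen because it makes the quoted 50x figure come out exactly:

```python
# Back-of-envelope comparison of exascale vs. petascale throughput.
PETAFLOP = 1e15  # floating point operations per second
EXAFLOP = 1e18

# Assumed peak of the era's leading supercomputers (~20 petaflops);
# this baseline is not stated in the article.
todays_peak_flops = 20 * PETAFLOP

speedup = EXAFLOP / todays_peak_flops
print(f"Exascale speedup over a 20-petaflop system: {speedup:.0f}x")
```

With a ~20-petaflop baseline the ratio is 50x; against a bare 1-petaflop system it would be 1000x, i.e. three orders of magnitude.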
The brain-like, neural network design of the IBM Neuromorphic System is able to perform complex cognitive tasks such as pattern recognition and integrated sensory processing far more efficiently than conventional chips.
The new system will be used to explore new computing capabilities important to the National Nuclear Security Administration’s (NNSA) missions in cybersecurity, stewardship of the nation’s nuclear weapons stockpile and nonproliferation. NNSA’s Advanced Simulation and Computing (ASC) program will evaluate machine-learning applications, deep-learning algorithms and architectures and conduct general computing feasibility studies. ASC is a cornerstone of NNSA’s Stockpile Stewardship Program to ensure the safety, security and reliability of the nation’s nuclear deterrent without underground testing.
Neuromorphic computing opens very exciting new possibilities and is consistent with what we see as the future of the high performance computing and simulation at the heart of our national security missions. The potential capabilities neuromorphic computing represents and the machine intelligence that these will enable will change how we do science.
—Jim Brase, LLNL deputy associate director for Data Science
A single TrueNorth processor consists of 5.4 billion transistors wired together to create an array of 1 million digital neurons that communicate with one another via 256 million electrical synapses. It consumes 70 milliwatts of power while running in real time and delivers 46 billion synaptic operations per second, using orders of magnitude less energy than a conventional computer running inference on the same neural network.
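The article's per-chip figures imply a remarkably small energy cost per synaptic operation. A minimal sketch of that arithmetic, using only the numbers quoted above:

```python
# Energy per synaptic operation implied by the TrueNorth figures above.
power_watts = 0.070     # 70 milliwatts in real-time operation
ops_per_second = 46e9   # 46 billion synaptic operations per second

joules_per_op = power_watts / ops_per_second
print(f"~{joules_per_op * 1e12:.1f} pJ per synaptic operation")  # ~1.5 pJ
```

A few picojoules per operation is the kind of budget that makes the hearing-aid-battery comparison plausible at scale.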
TrueNorth was originally developed under the auspices of the Defense Advanced Research Projects Agency’s (DARPA) Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program, in collaboration with Cornell University.
Under terms of the $1 million contract, LLNL will receive a 16-chip TrueNorth system representing a total of 16 million neurons and 4 billion synapses. LLNL also will receive an end-to-end ecosystem to create and program energy-efficient machines that mimic the brain’s abilities for perception, action and cognition. The ecosystem consists of a simulator; a programming language; an integrated programming environment; a library of algorithms as well as applications; firmware; tools for composing neural networks for deep learning; a teaching curriculum; and cloud enablement.
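The 16-chip totals follow directly from the per-chip figures given earlier in the article; note that 16 × 256 million is 4.096 billion synapses, which the article rounds to 4 billion:

```python
# Aggregate capacity of the 16-chip system from per-chip TrueNorth figures.
chips = 16
neurons_per_chip = 1_000_000
synapses_per_chip = 256_000_000

total_neurons = chips * neurons_per_chip    # 16 million
total_synapses = chips * synapses_per_chip  # 4.096 billion, quoted as "4 billion"
print(f"{total_neurons:,} neurons, {total_synapses:,} synapses")
```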
Lawrence Livermore computer scientists will collaborate with IBM Research, partners across the Department of Energy complex and universities to expand the frontiers of neurosynaptic architecture, system design, algorithms and software ecosystem.
Resources
- Paul A. Merolla, John V. Arthur, Rodrigo Alvarez-Icaza, Andrew S. Cassidy, Jun Sawada, Filipp Akopyan, Bryan L. Jackson, Nabil Imam, Chen Guo, Yutaka Nakamura, Bernard Brezzo, Ivan Vo, Steven K. Esser, Rathinakumar Appuswamy, Brian Taba, Arnon Amir, Myron D. Flickner, William P. Risk, Rajit Manohar, and Dharmendra S. Modha (2014) “A million spiking-neuron integrated circuit with a scalable communication network and interface,” Science 345 (6197), 668-673. doi: 10.1126/science.1254642
The effective use of TrueNorth technology could represent a major step towards vastly improved future ADVs?
Lighter, faster, much less energy hungry computing capabilities will benefit all mobile devices.
Posted by: HarveyD | 29 March 2016 at 07:05 PM
In the short term, this doesn't change much. In the long term, this is HUGE. This is the positronic brain for your I, Robot. This is the energy efficient system to interpret interface signals for cybernetic augmentations. This is the smartphone that can record, understand and tag all of your external and internal experiences for later replay. This pores through petabytes of unstructured data, forms 1 billion hypotheses, makes 1 billion predictions, observes the data awhile longer, and fundamentally advances our understanding of the natural, economic and social world. This could be very good or very bad, and will likely be both. Do I win the prize for spookiest speculation?
Posted by: HealthyBreeze | 30 March 2016 at 11:53 AM
@HB:
Speculation is healthy.
This may be one of many future vastly improved chips to very quickly handle/compute visual-position-movement data for autonomous robots-vehicles-planes-drones-ships etc, using minimum energy.
Very low energy consumption computers and sensors could be doubled or tripled for improved redundancy, safety and security, to do much better than current human drivers, pilots etc.
Posted by: HarveyD | 03 April 2016 at 11:25 AM