High Performance Computing
DOE HPC4Mfg program funds 13 projects to advance US manufacturing; welding, Li-S batteries among projects
August 31, 2016
A US Department of Energy (DOE) program designed to spur the use of high performance supercomputers to advance US manufacturing has funded 13 new industry projects for a total of $3.8 million. Among the selected projects is one by GM and the Electric Power Research Institute (EPRI) of California to improve welding techniques for automobile manufacturing and power plant construction, in partnership with Oak Ridge National Laboratory (ORNL).
Another of the 13 projects is led by Sepion Technologies, which will partner with Lawrence Berkeley National Laboratory (LBNL) to develop new membranes that extend the lifetime of lithium-sulfur (Li-S) batteries for hybrid airplanes.
Argonne VERIFI team improves code to enable up to 10K simultaneous engine simulations; paradigm shift in engine design
April 09, 2016
A team of scientists and engineers with the Virtual Engine Research Institute and Fuels Initiative (VERIFI) (earlier post) at the US Department of Energy’s Argonne National Laboratory recently completed development of engineering simulation code and workflows that will allow as many as 10,000 engine simulations to be conducted simultaneously on Argonne’s supercomputer, Mira.
These are typical smaller-scale, “engineering-type” simulations of the kind used routinely for engine design in industry. This massive simulation capacity opens a new capability for industrial partners pursuing advanced engine designs.
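As a rough illustration of the ensemble pattern described above, the sketch below fans out many small, independent simulations over a grid of design parameters and collects the results. It is a minimal stand-in under stated assumptions, not Argonne’s code: `run_engine_case`, its parameters, and the toy efficiency model are hypothetical, and on Mira the fan-out is handled by the machine’s job infrastructure rather than a local process pool.

```python
# Minimal sketch of an ensemble parameter sweep: many small, independent
# engine simulations run concurrently. run_engine_case and its toy
# "efficiency" model are hypothetical placeholders for a real CFD solver run.
import itertools
from concurrent.futures import ProcessPoolExecutor

def run_engine_case(params):
    """Stand-in for one engine simulation (one solver invocation)."""
    injection_timing, egr_fraction = params
    # A real case would launch a solver and parse its output files;
    # this placeholder just computes a dummy efficiency figure.
    efficiency = 0.40 - 0.01 * abs(injection_timing + 10) + 0.02 * egr_fraction
    return params, efficiency

if __name__ == "__main__":
    # Cartesian product of design parameters: one simulation per grid point.
    timings = range(-20, 1, 2)            # injection timing, deg before TDC
    egr = [x / 10 for x in range(0, 5)]   # exhaust gas recirculation fraction
    cases = list(itertools.product(timings, egr))

    # On a supercomputer this fan-out would span thousands of nodes;
    # a local process pool expresses the same pattern at desktop scale.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_engine_case, cases))

    best_params, best_eff = max(results, key=lambda r: r[1])
    print(f"best case: {best_params} -> efficiency {best_eff:.3f}")
```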
U-Michigan, IBM collaborate on data-centric high performance computing system
April 07, 2016
The University of Michigan is collaborating with IBM to develop and deliver “data-centric” supercomputing systems designed to increase the pace of scientific discovery in fields as diverse as aircraft and rocket engine design, cardiovascular disease treatment, materials physics, climate modeling and cosmology.
The system is designed to let high performance computing applications in physics interact, in real time, with big data, improving scientists’ ability to make quantitative predictions. IBM’s systems take a GPU-accelerated, data-centric approach, integrating massive datasets seamlessly with high performance computing power to produce new predictive simulation techniques that promise to expand the limits of scientific knowledge.
NVIDIA introduces new Tesla P100 GPU accelerator for deep learning, HPC applications; NVIDIA DGX-1 deep learning supercomputer
April 06, 2016
At the NVIDIA GPU Technology Conference 2016, the company introduced the NVIDIA Tesla P100 GPU, which it bills as the most advanced hyperscale data center accelerator yet built. The latest addition to the NVIDIA Tesla Accelerated Computing Platform, the Tesla P100 enables a new class of servers that can deliver the performance of hundreds of CPU server nodes.
NVIDIA also unveiled the NVIDIA DGX-1, billed as the world’s first deep learning supercomputer, built on eight Tesla P100 GPU accelerators to meet the computing demands of artificial intelligence.
Lawrence Livermore and IBM collaborate on new supercomputer based on TrueNorth neurosynaptic chip; accelerating path to exascale computing
March 29, 2016
Lawrence Livermore National Laboratory (LLNL) will receive a first-of-a-kind brain-inspired supercomputing platform for deep learning developed by IBM Research. Based on a breakthrough neurosynaptic computer chip called IBM TrueNorth (earlier post), the scalable platform will process the equivalent of 16 million neurons and 4 billion synapses while drawing a mere 2.5 watts, about the power consumption of a hearing aid battery.
The technology represents a fundamental departure from the ~70-year-old von Neumann architecture underlying today’s computer design, and could be a powerful complement in the development of next-generation supercomputers able to perform at exascale speeds: a quintillion (10^18) floating point operations per second, roughly 50 times faster than today’s most advanced petaflop-class (quadrillion floating point operations per second) systems. Like the human brain, neurosynaptic systems require far less electrical power and volume than conventional computing architectures.
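For reference, the arithmetic behind these scale claims can be made explicit. A minimal worked version follows; the ~20-petaflop baseline is an assumption inferred from the 50-times ratio (the article names no specific comparison system), while the neuron and synapse counts come from the announcement itself.

```latex
% Exascale relative to a ~20-petaflop system (baseline inferred, not stated):
\frac{10^{18}\ \mathrm{FLOP/s}}{2 \times 10^{16}\ \mathrm{FLOP/s}} = 50

% Synaptic density implied by the platform's stated scale:
\frac{4 \times 10^{9}\ \text{synapses}}{16 \times 10^{6}\ \text{neurons}}
  = 250\ \text{synapses per neuron}
```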