DOE to invest $16M in computational design of new materials for alt and renewable energy, electronics and other fields
August 17, 2016
The US Department of Energy will invest $16 million over the next four years to accelerate the design of new materials through the use of supercomputers.
Two four-year projects—one team led by DOE’s Lawrence Berkeley National Laboratory (Berkeley Lab), the other team led by DOE’s Oak Ridge National Laboratory (ORNL)—will leverage the labs’ expertise in materials and take advantage of lab supercomputers to develop software for designing fundamentally new functional materials destined to revolutionize applications in alternative and renewable energy, electronics, and a wide range of other fields. The research teams include experts from universities and other national labs.
Lawrence Livermore and IBM collaborate on new supercomputer based on TrueNorth neurosynaptic chip; accelerating path to exascale computing
March 29, 2016
Lawrence Livermore National Laboratory (LLNL) will receive a first-of-a-kind brain-inspired supercomputing platform for deep learning developed by IBM Research. Based on a breakthrough neurosynaptic computer chip called IBM TrueNorth (earlier post), the scalable platform will process the equivalent of 16 million neurons and 4 billion synapses and consume the energy equivalent of a hearing aid battery—a mere 2.5 watts of power.
The technology represents a fundamental departure from the ~70-year-old von Neumann architecture underlying today’s computer design, and could be a powerful complement in the development of next-generation supercomputers able to perform at exascale speeds—roughly 50 times faster than today’s most advanced petaflop (quadrillion floating point operations per second) systems. Like the human brain, neurosynaptic systems require significantly less electrical power and physical volume than conventional computing architectures.
NSF-funded supercomputing project to combine physics-based modeling with massive amounts of data
September 11, 2015
The National Science Foundation will provide $2.42 million to develop a unique facility for refining complex, physics-based computer models with big data techniques at the University of Michigan. The university will provide an additional $1.04 million. The focal point of the project will be a new computing resource, called ConFlux, which is designed to enable supercomputer simulations to interface with large datasets while running.
ConFlux will enable High Performance Computing (HPC) clusters to communicate seamlessly and at interactive speeds with data-intensive operations. The project establishes a hardware and software ecosystem to enable large-scale, data-driven modeling of multiscale physical systems.