Green Car Congress


Exascale computing


NSF-funded supercomputing project to combine physics-based modeling with massive amounts of data

September 11, 2015

The National Science Foundation will provide $2.42 million to develop a unique facility for refining complex, physics-based computer models with big data techniques at the University of Michigan. The university will provide an additional $1.04 million. The focal point of the project will be a new computing resource, called ConFlux, which is designed to enable supercomputer simulations to interface with large datasets while running.

ConFlux will enable High Performance Computing (HPC) clusters to communicate seamlessly and at interactive speeds with data-intensive operations. The project establishes a hardware and software ecosystem to enable large-scale, data-driven modeling of multiscale physical systems.
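As a rough, hypothetical illustration of the kind of workflow ConFlux is meant to support (not the project's actual software), the Python sketch below alternates a cheap physics-based update with a data-driven correction fitted to archived higher-fidelity results; the function names and the toy model are assumptions for illustration only.

# Hypothetical sketch of a data-augmented simulation loop in the spirit of
# ConFlux-style workflows: a coarse physics model is corrected at each step
# by a surrogate fitted to previously archived high-fidelity data.
import numpy as np

def coarse_physics_step(state, dt=0.01):
    """Cheap, approximate physics model: simple exponential decay."""
    return state - dt * state

def fit_correction(coarse_samples, highfid_samples):
    """Least-squares linear map from coarse-model states to high-fidelity states."""
    A = np.vstack([coarse_samples, np.ones_like(coarse_samples)]).T
    slope, intercept = np.linalg.lstsq(A, highfid_samples, rcond=None)[0]
    return lambda x: slope * x + intercept

# Toy "archived" data: pretend high-fidelity runs decay slightly faster.
coarse = np.linspace(0.1, 1.0, 50)
highfid = 0.9 * coarse - 0.02
correct = fit_correction(coarse, highfid)

state = 1.0
for _ in range(100):
    state = coarse_physics_step(state)   # physics-based update
    state = correct(state)               # data-driven correction
print(f"final state: {state:.4f}")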


Obama orders creation of National Strategic Computing Initiative; delivering exascale computing

July 31, 2015

President Obama issued an Executive Order establishing the National Strategic Computing Initiative (NSCI). The NSCI is a whole-of-government effort designed to create a cohesive, multi-agency strategic vision and Federal investment strategy, executed in collaboration with industry and academia, to maximize the benefits of high-performance computing (HPC) for the United States. One of the specific objectives is accelerating the delivery of exascale computing. (Earlier post.)

The coordinated Federal strategy is to be guided by four principles: deploying and applying new HPC technologies broadly for economic competitiveness and scientific discovery; fostering public-private collaboration; cooperation among all executive departments and agencies with significant expertise or equities in HPC, while also collaborating with industry and academia; and developing a comprehensive technical and scientific approach to transition HPC research on hardware, system software, development tools, and applications efficiently into development and, ultimately, operations.


Intel and Micron begin production on new breakthrough class of non-volatile memory; 3D XPoint memory up to 1,000 times faster than NAND

July 29, 2015

Intel Corporation and Micron Technology, Inc. unveiled 3D XPoint technology, a non-volatile memory that has the potential to revolutionize any device, application or service that benefits from fast access to large sets of data. Now in production, 3D XPoint technology is a major breakthrough in memory process technology and the first new memory category since the introduction of NAND flash memory in 1989.

The explosion of connected devices and digital services is generating massive amounts of new data. To make this “big data” useful, it must be stored and analyzed quickly, creating challenges for service providers and system builders who must balance cost, power and performance trade-offs when they design memory and storage solutions. 3D XPoint technology combines the performance, density, power, non-volatility and cost advantages of all available memory technologies on the market today, the partners said. The technology is up to 1,000 times faster and has up to 1,000 times greater endurance than NAND, and is 10 times denser than conventional memory.


Argonne researchers develop macroscale superlubricity system with help of Mira supercomputer; potential for “lubricant genome”

July 22, 2015

Argonne scientists have used the Mira supercomputer to identify and to improve a new mechanism for eliminating friction, which fed into the development of a hybrid material that exhibited superlubricity—a state in which friction essentially disappears—at the macroscale—i.e., at engineering scale—for the first time. A paper on their work was published in the journal Science.

They showed that superlubricity can be realized when graphene is used in combination with nanodiamond particles and diamond-like carbon (DLC). Simulations showed that sliding of the graphene patches around the tiny nanodiamond particles led to nanoscrolls with reduced contact area that slide easily against the amorphous diamond-like carbon surface, achieving incommensurate contact and a substantially reduced coefficient of friction (~0.004).
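To put the reported coefficient of friction (~0.004) in perspective, the short calculation below uses the standard Coulomb friction relation F = µN; the comparison value of µ ≈ 0.5 for ordinary dry metal-on-metal sliding is a rough textbook figure, not a number from the paper.

# Coulomb friction: friction force = mu * normal load.
normal_load_N = 100.0        # newtons; arbitrary example load

mu_superlubric = 0.004       # coefficient reported for the hybrid material
mu_dry_typical = 0.5         # rough textbook value for dry metal sliding (assumption)

f_super = mu_superlubric * normal_load_N
f_dry = mu_dry_typical * normal_load_N

print(f"superlubric friction force: {f_super:.1f} N")   # 0.4 N
print(f"typical dry-sliding force:  {f_dry:.1f} N")     # 50.0 N
print(f"reduction factor: {f_dry / f_super:.0f}x")      # 125x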


Sandia RAPTOR turbulent combustion code selected for next-gen Summit supercomputer readiness project

May 28, 2015

RAPTOR, a turbulent combustion code developed by Sandia National Laboratories mechanical engineer Dr. Joseph Oefelein, was selected as one of 13 partnership projects for the Center for Accelerated Application Readiness (CAAR). CAAR is a US Department of Energy program located at the Oak Ridge Leadership Computing Facility and is focused on optimizing computer codes for the next generation of supercomputers.

Developed at Sandia’s Combustion Research Facility, RAPTOR, a general solver optimized for Large Eddy Simulation (LES, a mathematical model for turbulence), is targeted at transportation power and propulsion systems. Optimizing RAPTOR for Summit’s hybrid architecture will enable a new generation of high-fidelity simulations that identically match engine operating conditions and geometries. Such a scale will allow direct comparisons to companion experiments, providing insight into transient combustion processes such as thermal stratification, heat transfer, and turbulent mixing.
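For readers unfamiliar with LES, the standard textbook formulation (not RAPTOR-specific) applies a spatial filter to the governing equations and models only the unresolved small-scale motions, e.g. for incompressible flow:

\bar{u}_i(\mathbf{x},t) = \int G(\mathbf{x}-\mathbf{x}')\, u_i(\mathbf{x}',t)\, d\mathbf{x}'

\frac{\partial \bar{u}_i}{\partial t}
  + \frac{\partial (\bar{u}_i \bar{u}_j)}{\partial x_j}
  = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
  + \nu\,\frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j}
  - \frac{\partial \tau_{ij}}{\partial x_j},
\qquad
\tau_{ij} = \overline{u_i u_j} - \bar{u}_i\,\bar{u}_j

The subgrid-scale stress \tau_{ij} is the unclosed term that must be modeled; resolving only the filtered (large-scale) field is what makes LES tractable compared with direct numerical simulation.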


Argonne supercomputer helped Rice/Minnesota team identify materials to improve fuel production

April 29, 2015

Scientists at Rice University and the University of Minnesota recently identified, through a large-scale, multi-step computational screening process, promising zeolite structures for two fuel applications: purification of ethanol from fermentation broths and the hydroisomerization of alkanes with 18–30 carbon atoms encountered in petroleum refining. (Earlier post.)

To date, more than 200 types of zeolites have been synthesized and more than 330,000 potential zeolite structures have been predicted based on previous computer simulations. With such a large pool of candidate materials, using traditional laboratory methods to identify the optimal zeolite for a particular job presents a time- and labor-intensive process that could take decades. The researchers used Mira, the Argonne Leadership Computing Facility’s (ALCF) 10-petaflops IBM Blue Gene/Q supercomputer, to run their large-scale, multi-step computational screening process.
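A multi-step computational screening of this size typically works as a funnel, applying progressively more expensive evaluations to a shrinking candidate set; the Python sketch below is a generic, hypothetical illustration of that pattern (the filter criteria, thresholds, and scoring function are assumptions, not the Rice/Minnesota workflow).

# Hypothetical screening funnel: cheap geometric filters first, expensive
# simulations only for the survivors. All names and cutoffs are illustrative.
import random

random.seed(0)
# Toy candidate set standing in for the ~330,000 predicted zeolite frameworks.
candidates = [{"id": i,
               "pore_diameter_A": random.uniform(3.0, 12.0),
               "framework_density": random.uniform(12.0, 22.0)}
              for i in range(330_000)]

def geometric_filter(z):
    """Cheap screen: keep frameworks whose pores could admit the target molecule."""
    return 5.0 <= z["pore_diameter_A"] <= 9.0

def mock_adsorption_score(z):
    """Stand-in for an expensive Monte Carlo / free-energy calculation."""
    return -abs(z["pore_diameter_A"] - 6.5) - 0.05 * z["framework_density"]

stage1 = [z for z in candidates if geometric_filter(z)]
stage2 = sorted(stage1, key=mock_adsorption_score, reverse=True)[:100]

print(f"{len(candidates)} candidates -> {len(stage1)} after geometric screen "
      f"-> {len(stage2)} selected for detailed simulation")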


DOE investing $200M in next-gen supercomputer for Argonne; on the road to exascale computing

April 10, 2015

Under the joint Collaboration of Oak Ridge, Argonne, and Lawrence Livermore (CORAL) initiative, the US Department of Energy (DOE) will invest $200 million to deliver a next-generation supercomputer—Aurora—to the Argonne Leadership Computing Facility (ALCF). When commissioned in 2018, this supercomputer will be open to all scientific users.

The new system, Aurora, will use Intel’s HPC (high performance computing) scalable system framework to provide a peak performance of 180 PetaFLOP/s. Aurora, in effect a “pre-exascale” system, will be delivered in 2018. Argonne and Intel will also provide an interim system, the 8.5 PetaFLOP Theta, to be delivered in 2016, which will help Argonne Leadership Computing Facility (ALCF) users transition their applications to the new technology. (Theta will require only 1.7 MW of power.)
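As a back-of-the-envelope restatement of the figures quoted above (not an official efficiency claim), Theta's 8.5 petaflops at 1.7 MW works out to roughly 5 GFLOP/s per watt:

# Illustrative arithmetic using only the numbers quoted in the article.
theta_peak_flops = 8.5e15      # 8.5 petaflops
theta_power_watts = 1.7e6      # 1.7 MW

flops_per_watt = theta_peak_flops / theta_power_watts
print(f"Theta peak efficiency: {flops_per_watt / 1e9:.1f} GFLOP/s per watt")  # 5.0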

