Introducing XPACC: The Center for Exascale Simulation of Plasma-Coupled Combustion
Starting this fall, PCI will host a new five-year, $16 million, DOE/NNSA-funded Center for Exascale Simulation of Plasma-Coupled Combustion (XPACC). Our goal is to use state-of-the-art parallel computing techniques to access forthcoming computing platforms in order to fundamentally advance combustion technology via simulation-based predictive science. PCI was established with precisely these objectives: to build bridges from computing platforms to the needs of advanced applications.
Plasmas offer a unique and untapped means of controlling turbulent combustion to boost performance and efficiency in a range of applications. Radicals produced in plasmas accelerate burning by short-circuiting standard chemical pathways; electric fields affect flame stability by accelerating charged chemical species within thin flame fronts; and plasma Joule heating affects both flow via thermal expansion and chemistry via temperature.
Each of the physics components of the coupled system (the flow turbulence, plasma physics, reaction pathways, and electrode aging and electrodynamics) would generally require petascale computational resources for reliable physics-based predictions. Coupling them across all the important length and time scales to make quality predictions of plasma-coupled combustion demands the co-development of simulation models with tools to harness the heterogeneous architectures of anticipated exascale computing platforms. This is the goal of our new Center.
Realizing this goal will require tools that cope with the challenges of forthcoming systems. Limits on clock rates and the associated power consumption will lead to substantially slower, simpler, heterogeneous processing elements, each with less memory. System scale will demand resilience to faults, and heterogeneity will demand special approaches for efficient utilization. To provide solutions that scale, we will develop tools that combine autotuning for individual processors, adaptive methods for effective load balancing and fault tolerance, user-abstracted tiling data structures that increase locality and facilitate load balancing, and techniques for exploiting heterogeneous processing elements. These tools will be used to implement numerical discretization algorithms selected specifically to ease the move to exascale systems. Different physical models generally lead to discretizations better suited to different programming models and hardware sub-architectures; recognizing this and building tools from this perspective will increase their utility, both for the proposed plasma-combustion application and more broadly.
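To make the tiling idea concrete, here is a purely illustrative sketch (not XPACC code) of loop tiling applied to a matrix transpose in Python. The function name and tile size below are hypothetical; the tile size `b` is exactly the kind of per-processor parameter an autotuner would search over, and the tiled loop structure is what a user-abstracted tiling data structure would provide without hand-written index arithmetic.

```python
def transpose_tiled(a, b=4):
    """Transpose the square matrix `a` (a list of lists) tile by tile.

    Visiting the matrix in b-by-b tiles keeps both the rows read from
    `a` and the rows written to `out` within a small working set,
    improving cache locality compared with one row-major sweep.
    """
    n = len(a)
    out = [[0] * n for _ in range(n)]
    for ii in range(0, n, b):                      # loop over tile origins
        for jj in range(0, n, b):
            for i in range(ii, min(ii + b, n)):    # loop within one tile
                for j in range(jj, min(jj + b, n)):
                    out[j][i] = a[i][j]
    return out
```

In a real code the same pattern appears in stencil sweeps and matrix kernels, where the best `b` differs per processor, which is why tiling and autotuning are developed together.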
XPACC will be co-directed by me, leading the predictive science efforts, and by Prof. Bill Gropp, the overall center PI, who will oversee integrating the physics application with exascale tools. The breadth of our physics and computing needs will require input from around the College of Engineering at Illinois, with faculty from Computer Science, Mechanical Science & Engineering, Aerospace Engineering, and Electrical & Computer Engineering contributing to the Center.