Tue, 08/06/2013 - 3:24pm

Starting this fall, PCI will host a new five-year, $16 million, DOE/NNSA-funded Center for Exascale Simulation of Plasma-Coupled Combustion (XPACC). Our goal is to use state-of-the-art parallel computing techniques on forthcoming computing platforms to fundamentally advance combustion technology through simulation-based predictive science. PCI was established with exactly this kind of objective: to build bridges from computing platforms to the needs of advanced applications.

Thu, 05/16/2013 - 2:11pm

Support for system-driven partitioning has been added to Charm++ in the latest stable version 6.5.0.

Mon, 01/07/2013 - 1:19pm

This article originally appeared in The Exascale Report.

Everyone reading this is a believer in the power of computing. We take for granted that the computing power of the highest performing systems needs to continue to grow at the same rate in order to meet the needs of society. Yet this is not obvious to others.

Mon, 04/02/2012 - 1:40pm

I like the quote “Parallel programming can be easy if you don’t care about performance.” What makes it hard is wanting performance in the face of limited memory bandwidth. This is especially true in dense linear algebra. Much of the complexity in parallel programming comes from managing the timing of data accesses so that each element fetched from DRAM is used multiple times, conserving DRAM bandwidth. Historically, expert programmers have used tiling/blocking techniques to achieve such reuse in linear algebra libraries.
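The tiling/blocking idea can be sketched in a few lines. The following is an illustrative Python version of a blocked matrix multiply, not the tuned C or assembly found in real libraries such as BLAS; the point is the loop structure, which keeps a small tile of each operand hot in cache so every element fetched from DRAM is reused many times:

```python
def matmul_tiled(A, B, n, block=4):
    """Multiply two n x n matrices (lists of lists) with loop tiling.

    Each (block x block) tile of A and B is reused roughly `block`
    times while it is still resident in cache, instead of being
    refetched from DRAM on every use.
    """
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, block):
        for jj in range(0, n, block):
            for kk in range(0, n, block):
                # All work below touches only one tile of A, B, and C.
                for i in range(ii, min(ii + block, n)):
                    for k in range(kk, min(kk + block, n)):
                        a = A[i][k]  # loaded once, reused across the j loop
                        for j in range(jj, min(jj + block, n)):
                            C[i][j] += a * B[k][j]
    return C
```

The same six-loop structure, with the tile size chosen to match the cache (or GPU shared memory) capacity, is the core of how high-performance libraries conserve memory bandwidth.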

Fri, 03/02/2012 - 2:57pm

As we seek to develop parallel applications, we must understand that such development presents at least three major challenges. I argue that all of these challenges are equally present whether one programs a many-core GPU, a MIC, or a multi-core CPU. Unfortunately, there is little compiler technology today that can help programmers meet these challenges. This is why compiler-based solutions from vendors will have limited success in creating a scalable parallel code base for many applications.

Fri, 02/24/2012 - 2:11pm

Blowing now toward the south, then toward the north, the wind turns again and again, resuming its rounds. What has been, that will be; what has been done, that will be done. Nothing is new under the sun. -- Ecclesiastes

Wed, 02/08/2012 - 8:48am

Over the last two weeks, Cray has begun the installation of the Blue Waters supercomputer at the National Center for Supercomputing Applications at Illinois.

Thu, 02/02/2012 - 2:11pm

The peach in Figure 1 depicts the level of difficulty in covering applications with multicore CPU architectures and manycore GPU architectures. The core of the cartoon peach represents the sequential portions of applications. For various reasons, these portions cannot be expressed as multi-threaded code. These sequential portions have been the target of modern instruction-level parallelism techniques, which wring a limited amount of parallelism out of them.

Mon, 12/19/2011 - 4:13pm

Parallel programming for the masses -- why would you want that?  Parallelism is primarily a means to an end – an approach that harnesses the power of many to solve one problem. It is true that many activities are intrinsically concurrent, and our current programming languages often artificially impose a serialization (or at least a serial order).  But most users don’t care about parallelism – they want a clear, easy way to harness computers to do what they want.

Wed, 11/30/2011 - 12:58pm

Supercomputers need to cut their power bills. Today’s supercomputers consume between 1 and 4 watts of electricity for each Giga FLOPS (Floating Point Operations Per Second) of peak calculation capability. The leading supercomputers in 2012 will have about 10 Peta FLOPS of peak calculation capability, which translates into 10 to 40 megawatts. With electrical power costing about $1 a year per watt, these machines will rack up power bills ranging from $10 million to $40 million annually.
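The arithmetic above is easy to check. A back-of-the-envelope sketch, using only the figures quoted in the text:

```python
# Figures from the text: 1-4 watts per GFLOPS of peak capability,
# a 10 PFLOPS machine, and roughly $1 per watt per year for power.
watts_per_gflops_low, watts_per_gflops_high = 1.0, 4.0
peak_gflops = 10e6                # 10 PFLOPS = 10 million GFLOPS
dollars_per_watt_year = 1.0

power_low_mw = watts_per_gflops_low * peak_gflops / 1e6    # in megawatts
power_high_mw = watts_per_gflops_high * peak_gflops / 1e6

annual_cost_low = watts_per_gflops_low * peak_gflops * dollars_per_watt_year
annual_cost_high = watts_per_gflops_high * peak_gflops * dollars_per_watt_year
# -> 10 to 40 MW of power, $10M to $40M per year
```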