Most people just don’t understand the scale of building a supercomputer. A supercomputer? That’s just a big computer, right?

In an article over on HPCwire, Dave Turek (IBM Deep Computing) made a comment that sticks in my head. An exaflop is 1 quintillion (a 1 with 18 zeros) FLOPS, or FLoating point Operations Per Second; think of 1+1=2 as one flop. No exaflop system has been built yet, but they’re being planned for the near future. Using current designs, the memory system alone (think the RAM in your home computer) would require roughly 80 megawatts of power. That’s the equivalent of 1.5 million light bulbs burning at the same time, which I’d guess is enough lighting for more than 100,000 homes or so. Just for the memory.
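A quick sanity check on that comparison. The per-bulb wattage and bulbs-per-home figures below are my own assumptions, not from the article; they're just picked to show the 80 MW number is in the right ballpark:

```python
# Rough sanity check of the memory-power comparison.
# Assumptions (mine, not the article's): ~53 W per incandescent bulb,
# ~15 bulbs' worth of lighting per home.

MEMORY_WATTS = 80_000_000        # 80 MW for the memory system alone

bulbs = MEMORY_WATTS / 53        # on the order of 1.5 million bulbs
homes = bulbs / 15               # on the order of 100,000 homes

print(f"{bulbs:,.0f} bulbs, {homes:,.0f} homes")
```

Tweak the assumed wattages and the bulb and home counts shift, but the order of magnitude doesn’t.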

An IBM Power7 MCM (Multi-Chip Module), think 4 CPUs in one package, pulls about 800 watts of power per teraflop of performance, and yes, that’s good, really good compared to, say, your PC at home. So: 800 watts per teraflop, 1,000 teraflops per petaflop, 1,000 petaflops per exaflop, or roughly 800 megawatts just for the CPUs. We’re up to 880 megawatts so far for memory and CPUs, and we haven’t even turned on any disk space, cooling, facilities, networking, lights, infrastructure, etc., etc., etc.
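The back-of-the-envelope math above is simple enough to sketch out. This just multiplies through the article’s own figures (800 W/teraflop for CPUs, 80 MW for memory); nothing else is assumed:

```python
# Back-of-the-envelope power estimate for a hypothetical exaflop system,
# using the figures cited above.

WATTS_PER_TERAFLOP = 800            # IBM Power7 MCM figure
TERAFLOPS_PER_EXAFLOP = 1_000_000   # 1,000 TF/petaflop * 1,000 PF/exaflop

cpu_watts = WATTS_PER_TERAFLOP * TERAFLOPS_PER_EXAFLOP  # 800 MW for CPUs
memory_watts = 80_000_000                               # 80 MW for memory

total_megawatts = (cpu_watts + memory_watts) / 1_000_000
print(f"{total_megawatts:.0f} MW before disks, cooling, networking, ...")
```

Running it lands you right at the 880 MW figure, with everything else in the datacenter still switched off.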

So, frankly, if your datacenter doesn’t have at least a gigawatt of power free and clear, well, don’t bother trying for the next generation of supercomputers. Anyone got a spare nuclear plant lying around?

And let’s not even get into what I’ll be charging to get the thing working. (-;