Can smart grid devices help build a supercomputer?

Smart meters aggregated into supercomputer
Hive Computing proposes working with utilities to leverage spare processing capacity from smart meters to create a supercomputer for high levels of data analysis

A US tech startup has publicised its vision of aggregating smart meters into a low-cost supercomputer.

In an interview with Forbes magazine, Hive Computing Inc. explains the concept of meshing the spare capacity of smart meters to create a networked supercomputer that can be used for system modelling or encryption data analysis.

Eric Frazier, co-founder of the startup company, explained to Forbes that every smart meter has the computing capability of a cellphone.

Every few minutes these meters send usage data to the utility, said Mr Frazier.

“However, 99% of the time they just sit idle. But if you could leverage the idle time of all of these devices, you would have thousands of central processing units with multiple cores available for parallel processing.”
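The idle-time harvesting Frazier describes is essentially volunteer computing: a coordinator splits a job into small work units, farms them out to devices whenever they are not busy metering, and aggregates the partial results. A minimal sketch in Python — the function names and the worker pool standing in for meter CPUs are illustrative, not Hive's actual software, and a real deployment would distribute units over the network rather than over local threads:

```python
from concurrent.futures import ThreadPoolExecutor

def work_unit(chunk):
    """One unit of an embarrassingly parallel job, e.g. a partial
    sum from a larger model run (illustrative workload only)."""
    return sum(x * x for x in chunk)

def run_on_meter_fleet(data, n_meters=8, unit_size=1000):
    """Split a job into work units and farm them out to a pool of
    workers standing in for idle smart-meter CPUs."""
    data = list(data)
    chunks = [data[i:i + unit_size] for i in range(0, len(data), unit_size)]
    with ThreadPoolExecutor(max_workers=n_meters) as fleet:
        partials = fleet.map(work_unit, chunks)
    # The coordinator combines the partial results into the final answer.
    return sum(partials)

if __name__ == "__main__":
    total = run_on_meter_fleet(range(10_000))
    print(total)  # same answer as computing the whole job on one machine
```

Because each work unit is independent, losing or delaying any single meter only means reissuing that unit — the pattern behind volunteer-computing projects that pool consumer devices.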

Frazier indicates that 10,000 meters puts you into the range of a supercomputer with a teraflop (1 trillion floating-point operations per second) of processing power.

A million meters represents two petaflops (2 quadrillion floating-point operations per second) of capacity.
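The capacity figures scale linearly with fleet size. Taking the per-meter rate implied by "a million meters represents two petaflops" — 2 gigaflops per meter, an inference from the article's numbers rather than a stated specification — the arithmetic can be checked directly:

```python
# Per-meter rate implied by "a million meters = two petaflops":
# 2e15 flops / 1e6 meters = 2e9 flops per meter (an inference, not a spec).
PER_METER_FLOPS = 2 * 10**9

def fleet_capacity_pflops(n_meters):
    """Aggregate capacity of a meter fleet, in petaflops."""
    return n_meters * PER_METER_FLOPS / 10**15

print(fleet_capacity_pflops(1_000_000))   # 2.0
print(fleet_capacity_pflops(50_000_000))  # 100.0
```

The 50-million-meter case reproduces the 100-petaflop figure quoted later for the US fleet.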

Supercomputer platform

Raiford Smith, vice president, Corporate Development and Planning at CPS Energy in San Antonio, thinks the idea has promise: “With the advent of the Internet of Things and the smart grid, utilities are deploying millions of intelligent devices – devices which, when interconnected, can form the basis of an impressive super computing platform.

“In fact, a 1 million meter deployment would be the equivalent of the world’s 20th fastest super computer. This represents an opportunity to do something good for society by growing low-cost computing capability while giving utilities an avenue to further invest in their metering infrastructure.”

As Frazier notes, the US alone currently has over 50 million meters, which would equal 100 petaflops.
To put that in perspective, the largest US machine, the Cray Titan at Oak Ridge, delivers roughly half the performance of the world's fastest computer, the Tianhe-2 supercomputer in Guangzhou, China, which currently boasts 33.8 petaflops of maximum achieved performance (and draws 18 MW of power).

Utility business model

Frazier explained to Forbes that the company would look to work with utilities and their customers.

Another would be to approach research institutions and technology-focused businesses that need high-performance computing capability at a lower price than today's market offers.

Supercomputer pilot in 2015

Developer Dr. Elias Gonzalez said the company is a few months away from rolling out its first pilot in the field and has already generated a high level of interest.

“We have talked to both meter manufacturers and the utilities that might help us pilot some of these programs… Ultimately we want to roll out with some utilities on their meters.

“We have some utilities that have said ‘we are ready, you tell us when you are ready.’ Everybody that we have talked to views this as a low hanging fruit in terms of adding significant value to their smart grid investment.”