Utilities house enormous datasets that defy traditional analysis and stand to benefit greatly from machine learning. By applying machine learning to IoT data, utility companies can realise the next-generation power grid, one that can eventually handle billions of endpoints on utility networks autonomously. Pacific Gas and Electric’s (PG&E) emerging technologies leader Tom Martin and Paul Doherty, corporate relations, discuss how machine learning and data science are being leveraged for asset maintenance and the integration of distributed energy resources (DER).
MSEI: What does machine-learning mean to PG&E? What is your definition of machine-learning?
TM: Machine-learning at PG&E is the ability to use analytics to drive optimisation in our operations. The grid is becoming increasingly complex, and PG&E’s grid is no exception. We have millions of smart meters; hundreds of thousands of rooftop solar installations that will very soon have a controllable output; and electric vehicles (EVs) that can charge or discharge, based on different market signals. As the grid becomes more complex, we are trying to understand how our operators and operating engineers can use data gathered from these devices to be predictive and prescriptive, so that we can address the complexity and volume of decisions involved in managing tremendous numbers of distributed resources – on a scale that humans alone would not be able to. PG&E has more private solar than any other utility in the US, with more than 300,000 private solar customers connected to the grid. Furthermore, we connect about 4,000 to 6,000 new solar customers to the grid monthly, which equates to about one every seven minutes. Similarly, one in five electric vehicles in the US is registered in PG&E’s service area. There are more than 200,000 EVs in the state of California and over 85,000 are in PG&E’s service area.
MSEI: How are you using machine learning and big data for asset maintenance/asset management?
TM: Right now, we are beginning the journey of better leveraging big data. One of the projects that we have underway is called ‘STAR’ (System Tool for Asset Risk). The objective behind project STAR is to determine how we can better prioritise asset replacement and asset maintenance using the rich data sources we have, and to create a dynamic risk scoring model that pulls in more data sources than were previously available or digestible. The risk model allows our planners and our asset maintenance team to use this data to build a risk score, which will determine which assets need to be replaced in order of highest priority.
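As an illustration of the kind of ranking a risk scoring model like STAR produces, the sketch below combines several normalised risk factors into a single score and sorts assets by priority. All feature names, weights and asset records here are hypothetical, invented purely for illustration; they are not PG&E's actual model.

```python
# Hypothetical sketch of a dynamic asset risk score: combine several data
# sources into one score used to rank assets for replacement.
# Weights and features are illustrative only.

def risk_score(asset, weights):
    """Weighted sum of normalised risk factors (each factor in 0..1)."""
    return sum(weights[f] * asset[f] for f in weights)

weights = {"age_factor": 0.4, "failure_history": 0.3,
           "weather_exposure": 0.2, "load_stress": 0.1}

assets = [
    {"id": "pole-17", "age_factor": 0.9, "failure_history": 0.2,
     "weather_exposure": 0.8, "load_stress": 0.3},
    {"id": "xfmr-04", "age_factor": 0.5, "failure_history": 0.9,
     "weather_exposure": 0.4, "load_stress": 0.7},
]

# Rank assets by descending risk so planners see the highest priority first.
ranked = sorted(assets, key=lambda a: risk_score(a, weights), reverse=True)
for a in ranked:
    print(a["id"], round(risk_score(a, weights), 3))
```

In practice the weights themselves would be learned from historical failure data rather than fixed by hand, which is what makes the scoring "dynamic".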
MSEI: Which assets are you looking to apply the STAR risk model to?
TM: There’s a wide variety – everything from identifying which power poles are most likely to need replacement, through to our giant transformers and substation equipment. Those are the two ends of the spectrum, and the programme includes everything in between. We’re starting to focus on a limited number of asset groups, but the idea is that we have this wealth of data and there’s a huge opportunity to leverage that data to optimise decision making.
MSEI: What is needed to create a good machine-learning system? Is the risk model built on software that uses machine-learning algorithms? How is this made up?
TM: We are in the process of answering that exact question. In its simplest form, the answer is that we need a platform that can integrate a wide variety of data sources: not just utility-owned data (eg asset location, asset type, smart meter data), but also external data sources such as weather patterns, customer-sited solar input and so forth. The goal is to build a platform that allows data science and operational tools/dashboards to all share access to the same data. Historically, if a new data analytics application was going to be built, a unique connection would be built to tie the isolated data source to the new application. Then, if a second application came along that wanted to use some or all of those data sources for a different purpose, you would have to rebuild that back-end, resulting in additional effort. Finally, if the core data source was changed (ie data was migrated to a new and improved system), all of those individual pipes that brought the data to the multiple different applications would have to be rebuilt. The idea moving forward is that, by building an analytics platform, you connect once to that dataset and build your applications on top of the platform without having to rebuild the back-end, because the data connection is already there. Moreover, if the data source changes, you only have one pipe to rebuild in order to get the data back into the platform; you don’t have to rebuild every application that is utilising that data set.
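The "connect once" idea can be sketched as a simple registry pattern: each data source is registered with the platform a single time, and every application reads through the platform rather than building its own pipe to the source. The class and source names below are hypothetical illustrations, not PG&E's actual software.

```python
# Illustrative sketch of the "connect once" platform idea. Swapping a
# backend means re-registering one source here, not rewriting every
# application that consumes it.

class AnalyticsPlatform:
    def __init__(self):
        self._sources = {}

    def register_source(self, name, fetch):
        # One connection per dataset, shared by all applications.
        self._sources[name] = fetch

    def get(self, name):
        return self._sources[name]()

platform = AnalyticsPlatform()
platform.register_source("smart_meters", lambda: [{"meter": "m1", "kwh": 12.5}])
platform.register_source("weather", lambda: {"wind_mph": 18})

# Two different "applications" share the same registered connections.
def outage_dashboard(p):
    return p.get("weather")["wind_mph"]

def billing_app(p):
    return sum(r["kwh"] for r in p.get("smart_meters"))
```

If the smart meter data later migrates to a new system, only the `register_source` call changes; `outage_dashboard` and `billing_app` are untouched.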
MSEI: How are you using machine-learning technology to create potential “what if” scenarios that would trigger self-healing functions should something go wrong?
TM: We have implemented self-healing technology called FLISR (Fault Location, Isolation and Service Restoration). FLISR technology is able to restore service following an electrical fault, resulting in a significant reliability improvement compared with the traditional manual restoration process. There is enough data to identify where a fault is and perform switching operations in real time, restoring power in just a few minutes. We have been deploying this technology for the past four to five years and we have experienced some significant increases in customer reliability. For each of the past seven years we have bettered our own customer reliability performance, which is largely due to our use of data and automation with the FLISR self-healing technology.
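The FLISR sequence described here can be sketched in miniature: given a feeder modelled as an ordered list of segments and a located fault, open the switches around the faulted segment, keep feeding the upstream segments normally, and back-feed the downstream segments through a tie switch. This is an invented toy model of the general technique, not PG&E's implementation.

```python
# Minimal FLISR-style sketch: isolate the faulted segment and restore
# the healthy segments on either side of it. Segment and switch names
# are hypothetical.

def flisr(segments, faulted):
    """Return a restoration plan for an ordered radial feeder."""
    i = segments.index(faulted)
    return {
        # Open the switches bounding the faulted segment to isolate it.
        "open_switches": [f"sw_{faulted}_in", f"sw_{faulted}_out"],
        "restore_from_source": segments[:i],   # still fed normally
        "restore_from_tie": segments[i + 1:],  # back-fed via a tie switch
    }

plan = flisr(["seg1", "seg2", "seg3", "seg4"], "seg2")
```

A real system must also verify that the tie feeder has spare capacity before back-feeding, which is one reason the data volumes TM mentions matter.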
MSEI: Could this technology also perform diagnostics and alert PG&E to the source of faults on the grid?
TM: We are getting there and have several efforts toward this end. We have recently wrapped up testing and are working toward the full deployment of wireless communicating line sensors.
Several years ago, fault current indicators would be monitored by troublemen patrolling the line. These fault current indicators blink when there’s an outage, so the troubleman would be able to see where the fault occurred.
Now, we’re working to turn these fault current indicators into smart line sensors that enable the smart grid to communicate the location of the fault instantaneously. So, instead of someone having to go and patrol the line and look for the lights flashing, the operator in the control centre instantly knows that the outage happened in the segment between line sensor five and line sensor six, for example. We can then begin to perform switching operations and also send a troubleman directly to the segment where that outage has occurred – so that is step one.
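The localisation logic behind that "between sensor five and sensor six" call can be sketched simply: on a radial line, sensors upstream of the fault see fault current while sensors beyond it do not, so the fault lies between the last tripped sensor and the first quiet one. This is an illustrative toy, assuming an ordered line of sensors, not PG&E's actual algorithm.

```python
# Sketch of locating a faulted segment from communicating line sensors.
# sensor_flags: ordered (sensor_id, saw_fault_current) pairs, source first.

def faulted_segment(sensor_flags):
    last_tripped = None
    for sensor_id, tripped in sensor_flags:
        if tripped:
            last_tripped = sensor_id
        else:
            # First sensor that saw no fault current: the fault is
            # between it and the last sensor that did.
            return (last_tripped, sensor_id)
    return (last_tripped, None)  # fault beyond the last sensor

# Fault between sensor 5 and sensor 6.
flags = [(1, True), (2, True), (3, True), (4, True), (5, True), (6, False)]
print(faulted_segment(flags))  # -> (5, 6)
```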
The other functionality of the line sensors is to capture really granular waveform data – the signals coming through the conductor – with the goal of proactively identifying waveform signatures. One such signature could be a tree branch brushing up against a line on a slightly windy day. The analytics would identify that signature so that the tree branch could be trimmed before the next big storm breaks the branch and causes an outage.
The advantage here is that this technology enables us to take a proactive approach, instead of a reactive approach, to identifying issues on the grid.
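One way to picture this kind of waveform-signature detection: extract simple features from a sampled current waveform and flag patterns consistent with intermittent contact, such as a branch brushing the line. The features, thresholds and labels below are invented for illustration; a production classifier would be trained on labelled field recordings rather than hand-set rules.

```python
# Toy waveform-signature check: brief high-current spikes on an otherwise
# normal waveform raise the crest factor (peak / RMS), a crude proxy for
# intermittent contact. Threshold is hypothetical.
import math

def features(samples):
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    peak = max(abs(s) for s in samples)
    return {"rms": rms, "crest_factor": peak / rms}

def classify(samples, crest_threshold=2.0):
    f = features(samples)
    if f["crest_factor"] > crest_threshold:
        return "possible-intermittent-contact"
    return "normal"

# A clean 60 Hz sine has a crest factor near 1.41; one arcing spike
# pushes it far above the threshold.
normal = [math.sin(2 * math.pi * 60 * t / 3600) for t in range(3600)]
spiky = normal[:]
spiky[100] = 8.0  # a single simulated arcing spike
```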
MSEI: Will you use this technology for routine asset maintenance?
TM: Absolutely, that’s the goal that we would like to achieve. Algorithms can be trained to identify the assets that are most likely to fail or to need replacement. We have a reliability programme in place to ensure that we are optimising every utility asset using analytics and machine learning, as well as refining that reliability plan and reducing operations and maintenance costs.
MSEI: How has machine-learning helped PG&E to process and analyse not only volume and velocity of data but also a variety of data?
TM: We established a pilot project called GOSI (Grid Operations Situational Intelligence). The GOSI project set out to demonstrate real-time data integration and visualisation for distributed energy resources, and to evaluate the benefits and use cases of a single-interface software platform as a tool for distribution operators, engineers and power quality end users.
The project developed key data, system and user experience learnings by integrating more than 20 data sources into a single visualisation tool, allowing users to view complex data sources in ways that were not possible with current solutions. These foundational learnings will allow PG&E to explore other complex situational awareness tools and applications that help users target the information they need to manage changes on the grid.
MSEI: How challenging has it been to rethink existing business models and existing value chains, based on how quickly the market is changing? And how quickly technology has progressed?
TM: The challenge is communicating why we are breaking down existing processes and rebuilding them in a way that works differently but that, qualitatively, we know is better and has an opportunity to drive a lot of value for our customers. It’s a question that we are still in the infancy of figuring out – we know there is value in data science and machine learning technology for delivering better safety, affordability and reliability for our customers.
Exactly how we tell that story, and how we identify the specifics of that totally new business case, is something that we haven’t been fully able to answer yet. We are involved in a utility programme in California called EPIC (Electric Programme Investment Charge). EPIC is an initiative put forward by the state of California to fund technology demonstrations that help advance grid innovation and show how California’s investor-owned utilities can create the grid of the future through small projects today.
At PG&E, we have really benefited from this innovative programme in that it allows us to have a small project here and a small project there, where we’re able to test different vendor solutions, approaches and capabilities to see what works and what doesn’t. MI.