In previous articles on non-technical losses (NTL) we examined what NTL is, why it matters to all of us, and the strategies to detect and reduce it.
Although some level of loss is inherent to the utility distribution business, utilities should undertake the efforts necessary to reduce losses to the point that they burden neither society (via tariffs or government subsidies) nor the utility itself (lost revenue erodes its capacity to invest in a better grid and to remunerate shareholders).
Moreover, it is a utility’s obligation to detect and remove manipulations that could generate security issues.
Detecting and addressing theft is not a one-step action; it is a process involving different areas of the utility. The workflow below summarises the proposed process.
Looking at only part of these operations, or treating them as independent actions, is a sure way to underperform. Let us examine these steps, their interdependencies and the critical factors that affect each.
This activity is critical. It usually involves analyzing data from all customers, looking for patterns that suggest possible fraud, and then identifying the customers most likely to be committing such fraud or theft.
Most utilities perform this activity with simple filters implemented as SQL scripts or Excel spreadsheets. These filters may look for abruptly decreasing consumption, zero consumption and a few other simple clues. Other good – and frequently used – information includes meter-reader annotations and data from previous inspections and maintenance work performed on customer facilities.
Unfortunately, these annotations, historical inspections and maintenance records are usually hardcopies and not readily accessible for processing.
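The kind of simple filter described above can be sketched in a few lines. The account IDs, readings and the 50% drop threshold below are illustrative assumptions, not figures from any real utility:

```python
# Hypothetical monthly readings (kWh) per account; values are illustrative.
readings = {
    "A-1001": [300, 310, 295],
    "A-1002": [280, 290, 60],
    "A-1003": [250, 0, 0],
}

def flag_suspects(history, drop_ratio=0.5):
    """Flag accounts whose latest reading is zero, or fell more than
    drop_ratio below the trailing average -- the kind of simple filter
    often scripted in SQL or Excel."""
    suspects = []
    for account, usage in history.items():
        baseline = sum(usage[:-1]) / len(usage[:-1])
        latest = usage[-1]
        if latest == 0 or (baseline > 0 and latest < baseline * (1 - drop_ratio)):
            suspects.append(account)
    return suspects

print(flag_suspects(readings))  # → ['A-1002', 'A-1003']
```

Note that such a filter looks at a single data category (consumption history) in isolation, which is precisely its weakness.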
This method does not allow jointly checking different data categories whose correlations would add detail and enable deeper analysis and detection. An initial problem is that it requires collecting and processing an enormous amount of data from independent data silos (databases).
Acquiring and manipulating such data demands a significant effort, consuming many person-hours. It is also subject to data-manipulation errors and to poor or missing documentation of studies and results, and it leaves little room for further improvement. Remember that these simple filtering processes normally consider only one data category, not allowing correlations across several different types of data. Finally, the computational requirements of calculations over several data sources must be taken into account to avoid very time-consuming processing.
Misidentification of potential theft will lead to inefficient operation and poor results. This process may be reasonably effective when pointing out simple frauds – which by the way are most likely found in low-income neighborhoods and result in low energy (or water) recovery. On the other hand, the process is ineffective with more sophisticated fraud, like the types that professionals commit for high-end residential, commercial or even industrial clients (the “big fish”) that may remain undetected for years, causing significant financial losses to the utility.
Many utilities fail to effectively identify potential theft, resulting in an inability to quickly detect and “catch” the fraudsters, leading to ineffective inspections. This comes at a high cost and, of course, high revenue leakage, leaving many thefts (especially the larger, most valuable ones) undetected while frequently bothering innocent customers with unfruitful inspections. An additional issue is that in such cases the recoveries of energy/water usually depend on the capacity of the customer to pay.
This theft detection method – via manual monitoring, filtering, and reporting – carries potential integrity issues, with a high probability of inaccurate and error prone reporting. It is time-consuming and identifies a limited amount of theft.
The method can be significantly improved by adding specialised software with data analytics, AI (artificial intelligence) and ML (machine learning) techniques, and process automation. Software automation allows more input and insight from, for example, geographical and grid (GIS) data, seasonal and weather-related data, socio-economic information, and others. Verifying data quality and fixing incorrect or inconsistent data usually represents a significant part of the implementation effort.
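To illustrate how correlating several data categories helps, here is a deliberately crude multivariate anomaly score using only the standard library; the feature names (consumption trend, ratio to neighborhood average, past flags) and values are invented for the sketch, and a real deployment would use proper ML models such as isolation forests or autoencoders:

```python
import statistics

# Illustrative feature vectors per account, combining several data silos.
features = {
    "A-1001": {"trend": -0.02, "vs_neighbors": 0.95, "past_flags": 0},
    "A-1002": {"trend": -0.60, "vs_neighbors": 0.30, "past_flags": 1},
    "A-1003": {"trend": -0.05, "vs_neighbors": 1.05, "past_flags": 0},
    "A-1004": {"trend":  0.01, "vs_neighbors": 1.00, "past_flags": 0},
}

def anomaly_scores(feats):
    """Sum of absolute z-scores over all feature dimensions: accounts
    that deviate on several categories at once score highest."""
    keys = sorted(next(iter(feats.values())))
    scores = {acct: 0.0 for acct in feats}
    for key in keys:
        col = [f[key] for f in feats.values()]
        mu, sd = statistics.mean(col), statistics.pstdev(col)
        for acct, f in feats.items():
            scores[acct] += abs(f[key] - mu) / sd if sd else 0.0
    return scores

scores = anomaly_scores(features)
print(max(scores, key=scores.get))  # → A-1002
```

The point is not the algorithm itself but the joint view: an account that is only mildly unusual in each silo can still stand out clearly when the silos are combined.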
When the utility is already implementing AMI/smart metering, complexity increases along with the amount of data acquired by modern smart meters, such as near-real-time metering data and alarms. This requires acquiring data automatically, plus specialized data analytics software that considers all relevant data. MDM systems usually include some simple data analytics tools, which allow only limited verification.
Additionally, when metering is remote, there is no visual reading. Consequently, the first visual inspection of the meter, looking for possible irregularities, no longer happens, eliminating an important source of clues.
A final remark: as the utility gets smarter with fraud detection and recovery, fraud methods commonly become more sophisticated and harder to detect. Smart meters are more difficult to tamper with, and one undesired consequence is that fraudulent customers also become smarter. Fraud profiles will evolve in response to the evolution of detection tools and methods, so it is important that both the process and the software tool can adapt to these changes – and evolve.
Once the NTL team has the output of the Identify activity – a list of suspect accounts – the next phase is to analyze the filtered customers and qualify them. This means the analysts verify the available information, determine what is likely to be theft, and decide whom to inspect based on the available data. That requires searching and analyzing data in detail.
Looking at these indications – for example, present and historical consumption, location, and other factors – a decision is made as to whether this customer is worth inspecting or not.
Unfortunately, the manual identification process described previously usually does not provide information that helps qualify each customer; consequently, gathering data for the analysis is not practicable without significant human effort. Analysts will therefore do it only in a few cases, at their discretion.
Considering the huge amount of data to acquire and manipulate, the only effective way to do activities one and two is with software that not only does the screening but also presents all data relevant to the analysis in a convenient way, easily consulted and analyzed. This software should include algorithms that automatically estimate the likelihood of theft. The UI (user interface) should display all relevant data to support the analysis process, so that analysts can verify whatever seems important to them with just a few mouse clicks.
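A likelihood estimate of this kind can be as simple as a logistic score over several fraud indicators. The weights and indicator names below are made up for illustration; in practice they would be fitted from labeled inspection outcomes:

```python
import math

# Hypothetical weights, assumed to be learned from past inspection results.
WEIGHTS = {"consumption_drop": 2.1, "zero_reads": 1.4, "neighbor_gap": 1.8}
BIAS = -3.0

def theft_likelihood(signals):
    """Logistic score in [0, 1] combining several fraud indicators."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in signals.items())
    return 1.0 / (1.0 + math.exp(-z))

candidates = {
    "A-1002": {"consumption_drop": 1.0, "zero_reads": 0.0, "neighbor_gap": 1.0},
    "A-1003": {"consumption_drop": 0.2, "zero_reads": 1.0, "neighbor_gap": 0.3},
}
ranked = sorted(candidates, key=lambda a: theft_likelihood(candidates[a]),
                reverse=True)
print(ranked)  # → ['A-1002', 'A-1003']
```

The score itself is only half the value; the other half is the UI surfacing the underlying signals so the analyst can judge each case quickly.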
Experience has demonstrated that adequate software may double the effectiveness of inspections.
Once the analysts have determined the best candidates for inspection, the number of candidates is normally much larger than the capacity to inspect, since the number of inspection teams is limited by their cost. The next phase should then be to select which inspections to prioritize.
To make the selection, analysts need to analyze all the information and determine a set of selected customers for the inspection teams. This requires easy access to all related information at their fingertips, requiring only a few clicks.
Some utilities support the previous activities with fraud management or statistical packages designed for general use. This requires a long and expensive implementation and customization process and may be useful to some extent but will not provide all the necessary functionalities.
Additionally, the software maintenance process and future updates will usually require a substantial and continuous effort, which is not cost effective.
Field crews perform the inspections as part of the execution phase; they usually belong to other departments, under the direct supervision of a different manager.
A common strategy is to hire third-party teams from a contractor to perform most of the inspections, sometimes reserving an internal team of specialists to inspect the most technically complex installations, like industrial and high-commercial ones, and to do audits on third-party work.
One key issue for the whole process is being able to TRUST the results of the inspections. The quality of the inspections – and consequently the accuracy of reports – may be affected by human performance, or it may not be possible to perform the inspection for some reason.
Incorrect reporting can compromise any of these possible outcomes. Possible causes include insufficient training, lacking the equipment needed to verify a possible irregularity, insufficient time to perform an accurate inspection, or even collusion. It is very important to provide adequate training to inspection personnel, along with a well-designed inspection and reporting procedure. Besides a thorough inspection, it is key to follow the inspection procedure, to log all actions and, if fraud is found, to accurately collect evidence of it for legal processing, as these records will support all further actions.
Every inspection outcome is an important new piece of information to verify the accuracy of the predictions and of all previous processes, and therefore needs to be monitored. Understanding what went according to the prediction, what did not, and then learning about the problematic issues, is key for improving.
Needless to say, the amount of data is, again, huge and impractical to process manually. Data should be fed back automatically into the analytics software, which would do most of the work: re-evaluating its business rules, self-adjusting its internal parameters via machine-learning techniques, and pointing out incongruences.
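At its simplest, this feedback loop means tracking how often each business rule is confirmed in the field, so weak rules can be retuned or retired. The rule names and outcome sequence below are purely illustrative:

```python
# Measured hit rate per business rule, updated from inspection outcomes.
rule_stats = {"abrupt_drop": {"fired": 0, "confirmed": 0},
              "zero_reads":  {"fired": 0, "confirmed": 0}}

def feed_back(rule, fraud_found):
    """Record one inspection outcome against the rule that triggered it."""
    stats = rule_stats[rule]
    stats["fired"] += 1
    stats["confirmed"] += int(fraud_found)

def hit_rate(rule):
    stats = rule_stats[rule]
    return stats["confirmed"] / stats["fired"] if stats["fired"] else 0.0

# Illustrative stream of (rule, fraud confirmed?) inspection results.
for rule, outcome in [("abrupt_drop", True), ("abrupt_drop", False),
                      ("abrupt_drop", True), ("zero_reads", False)]:
    feed_back(rule, outcome)

print(round(hit_rate("abrupt_drop"), 2))  # → 0.67
print(round(hit_rate("zero_reads"), 2))   # → 0.0
```

A production system would go further – retraining model weights, recalibrating thresholds – but even this minimal loop turns every inspection into training data.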
Besides software-supported verifications and optimizations, there will be situations where an inspection does not report fraud but the team must analyze the case further. For example, we once found a very clear case of unexpectedly low consumption by some customers of a new building. It turned out to be an update problem in the billing system, whereby the software was excluding these customers. Other typical examples are incorrectly registered customers, broken or miscalibrated meters, metering data entered wrongly into the billing system, inspections that were scheduled but not undertaken, etc. Business rules should check for such issues automatically.
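Such non-fraud explanations lend themselves to automated checks. The record fields below (`billed_last_cycle`, `meter_status`, and so on) are hypothetical names, not any specific system's schema:

```python
def non_fraud_checks(record):
    """Automated sanity checks for low consumption that has a benign,
    non-fraud explanation; field names are illustrative."""
    issues = []
    if not record.get("billed_last_cycle", True):
        issues.append("account missing from billing run")
    if record.get("meter_status") == "faulty":
        issues.append("broken or miscalibrated meter")
    if record.get("inspection_scheduled") and not record.get("inspection_done"):
        issues.append("scheduled inspection never performed")
    return issues

print(non_fraud_checks({"billed_last_cycle": False, "meter_status": "ok",
                        "inspection_scheduled": True, "inspection_done": False}))
```

Running such checks before (and after) dispatching field teams avoids wasting scarce inspection capacity on cases the back office can resolve on its own.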
Another important aspect is the accuracy of the inspections, with automatic verification of all suitable inspection metrics. Some examples: registering the timing of each inspection; checking the conformity of each team's results against the software predictions; and comparing each team's number of "not-inspected" cases to the results of other teams.
Analysts must keep track of the performance of the complete process. The analysis should include bad results or bad conformity of results to the predicted ones. This post-mortem monitoring and analysis is key for the next step – to find flaws in the process, in the inspections and in the business rules, and to improve. The supporting software tool must additionally include business intelligence functionality to help analysts in all tasks related to this step.
The improvement process is not easy as, again, the huge amount of data gathered during all previous steps will constantly increase and should be considered in the learning and improvement process.
In previous projects, fraud databases have easily grown to several petabytes.
The analytics software should support this without performance issues. Each cycle adds new information to the knowledge base.
Artificial intelligence and machine-learning techniques should support this processing.
Key performance indicators
The above-described process has one expected objective – to help discover and eliminate as much theft, non-technical loss and security risk as possible, while making the best use of available resources.
As with every process, it is important to monitor all results and provide data for verification, correction, management, auditing and improvement. The system needs to collect data on results and calculate the performance indicators, and support informed decisions. The most common (and most important) KPIs are:
• AT&C – aggregate technical and commercial losses of a distribution utility. It is calculated as a percentage and measures how much energy or water is not billed to customers compared to the total that is produced/purchased and distributed to customers.
• NTL (non-technical losses, sometimes referred to as commercial losses) – estimates how much energy or water is not being billed to customers for non-technical reasons. It is normally expressed as a percentage calculated by [NTL = (AT&C – TL) / AT&C]. It comprises fraud, theft, and process issues (metering, human and system errors). TL stands for the technical losses caused by the imperfections of the physical processes of distribution; for example, electrical impedance and water leaks.
• Effectiveness of field inspections – a percentage calculated as the total number of theft cases reported by the inspections, divided by the number of inspections performed.
• Productivity of the field inspections – a ratio calculated as the amount of subtracted energy or water (meaning the amount that was not registered and not billed during the period of fraud), divided by the number of inspections.
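The four KPIs above reduce to simple ratios. The sketch below implements them using the article's formulas; the input figures (1 GWh distributed, 850 MWh billed, and so on) are invented for illustration:

```python
def at_and_c(input_kwh, billed_kwh):
    """AT&C losses: share of distributed energy that is never billed."""
    return (input_kwh - billed_kwh) / input_kwh

def ntl_share(atc, technical_losses):
    """Non-technical share of total losses: NTL = (AT&C - TL) / AT&C."""
    return (atc - technical_losses) / atc

def inspection_effectiveness(theft_cases, inspections):
    """Theft cases confirmed per inspection performed."""
    return theft_cases / inspections

def inspection_productivity(recovered_kwh, inspections):
    """Unregistered energy recovered per inspection performed."""
    return recovered_kwh / inspections

atc = at_and_c(1_000_000, 850_000)          # 15% total losses
print(round(atc, 3))                         # → 0.15
print(round(ntl_share(atc, 0.06), 2))        # → 0.6 (60% of losses are NTL)
print(inspection_effectiveness(30, 200))     # → 0.15
print(inspection_productivity(45_000, 200))  # → 225.0 kWh per inspection
```

Computing these on a dashboard, per period and per region, is what lets managers see whether the identify-analyze-inspect cycle is actually improving.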
Although these are the most important, there are many other KPIs. The software should monitor and calculate the KPIs and make them available in a timely manner on a dashboard to authorized users – managers and directors.
Defining the team
Besides implementing a well-designed process, as described above, a utility has to staff each step of the process with a skilled team. The loss analysts require a new skill set. This is a key function and one of the most difficult for which to find skilled people. On the technical side, these professionals need knowledge in two different fields: IT and electrical (or gas, or water) distribution.
These skills will ideally empower them to execute the operations of the “data analytics team” and deliver the following outputs: goal analysis, process management, analysis of actions and business rules to mitigate losses, selection of targets for inspection, analysis of results and design enhancements.
Selecting the right software tool is necessary to perform such tasks and monitor and control the complete process in an efficient manner. A fraud management or statistical package tool will help to do part of the job, mainly steps one to three. However, it will require costly implementation, customization, human operations, maintenance and upgrades and therefore result in expensive operation and high TCO (total cost of ownership), substantially above the specific cost of the software package licences.
Time is another significant dimension, as theft will not stop and wait while you develop your software tool – and the monthly cost of lost revenue might be your highest financial consideration.
Supporting the utility in all the process steps discussed in this article requires a powerful, easy-to-operate solution that is quick to implement, integrates with the corporate databases and covers all the process steps, supporting a successful NTL reduction journey. SEI
About the author
Rui Mano is VP at Choice Technologies. He has extensive experience in smart grids, automation of electrical systems (IoT/SCADA/EMS/DMS) and analytics for the detection of fraud/theft and the reduction of losses. Mano is an author of articles and a lecturer at conferences in Latin America, the US, and Europe. Choice Technologies is a technology company specializing in revenue protection analytic systems and methodologies.