Building the Foundation for Utility Asset Analytics
image credit: © Wrightstudio | Dreamstime.com
- Dec 2, 2020 5:42 pm GMT | Nov 25, 2020 6:35 pm GMT
This item is part of the Special Issue - 2020-12 - Data Analytics & Intelligence.
Authors: West Monroe Partners - Kevin Hade, Eric Anderson, Kojo Sefah
Using analytics for asset management within utility operations is far from a new concept. The power delivery system represents one of the most asset-intensive industries on earth, and utility leaders have been seeking ways to leverage available data to achieve improved reliability and reduce operations and maintenance (O&M) costs. Many asset managers are challenged to put into practice the necessary processes and technology, often relying on spreadsheets and legacy software that was never intended to provide actionable insights. The proliferation of new technology, sensors, and data historians has generated more data with potential to improve asset management.
The utility is often challenged with asset issues such as:
- How do we allocate our limited budget to focus on the highest risk assets (e.g., failure risk or safety risk)?
- Which assets are at the greatest risk of failure, and for which assets can failure be most readily predicted?
- Do we replace assets prior to failure, or wait until they fail?
Predictive analytics for asset management seeks to leverage available data sets to optimize the utilization of assets to maximize useful life and value to the operation. When implemented properly, predictive analytics can help utilities realize value through improved reliability, decreased operation and maintenance costs, improved safety, and a coherent, systematic approach to asset base replenishment. Figure 1 shows how moving from a run-to-failure approach to a condition-based approach that leverages predictive analytics can transform utility spending by reducing O&M costs and creating opportunities for more strategic capital spending.
Figure 1. Overall Spending Transformation
As utilities seek to leverage predictive analytics for asset management, fundamental processes and field practices are needed to achieve successful outcomes. This paper aims to lay out the key processes and tools needed to build a foundation for effective electric transmission and distribution asset analytics. By putting better processes and practices into place today, asset data can be better organized and more accessible, so that analytics can be applied in a way that is actionable, measurable, and delivers a return on investment (ROI). A utility that deploys such improvements can see millions in savings: Duke Energy avoided more than $130M in costs thanks to OSIsoft monitoring systems and condition-based maintenance practices.
Figuring out where to begin with implementing predictive asset analytics can be a daunting task:
- Which assets should I use analytics on?
- Is there enough data on my assets to support analytics?
- What data could be acquired that would enhance prediction?
- What new tools or processes could be put into place to enable better data collection on asset failure to inform asset analytics efforts?
- Which departments need to be involved in the initiative and how is alignment/buy-in achieved?
Considering the influx of data on each asset, if the data is unstructured, inaccessible, or not relational, it can be overwhelming and difficult to use for analytics. For example, failure data related to outage events is often sectionalized by protective section, not just at the feeder level. Additionally, some data on a particular asset might be kept in an Enterprise Asset Management (EAM) system, whereas other data is maintained in GIS and work requests related to that asset are in a third system.
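The fragmentation described above is ultimately a join problem: records for the same physical asset live in different systems and must be correlated on a shared identifier. The sketch below illustrates the idea in plain Python; the asset IDs, field names, and three source "systems" are invented for illustration and do not reflect any particular utility's schema.

```python
# Hypothetical sketch: consolidating one asset's records scattered across
# an EAM system, GIS, and a work-order system, joined on a shared asset ID.
# All identifiers and field names here are invented for illustration.
eam = {"TX-1001": {"install_year": 1998, "rating_kva": 500}}
gis = {"TX-1001": {"lat": 41.88, "lon": -87.63, "feeder": "F12"}}
work_orders = [
    {"asset_id": "TX-1001", "type": "inspection", "date": "2020-05-01"},
    {"asset_id": "TX-9999", "type": "repair", "date": "2020-06-15"},
]

def consolidate(asset_id):
    """Merge attributes and work history for one asset into a single record."""
    record = {"asset_id": asset_id}
    record.update(eam.get(asset_id, {}))      # nameplate / EAM attributes
    record.update(gis.get(asset_id, {}))      # location / connectivity
    record["work_orders"] = [w for w in work_orders if w["asset_id"] == asset_id]
    return record

print(consolidate("TX-1001"))
```

In practice this consolidation happens in a data warehouse or data lake rather than application code, but the prerequisite is the same: a consistent asset identifier maintained across every source system.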
Therefore, it is important to build the enabling foundation for analytics by improving processes and tools which can improve existing analytics and prepare for future analytics initiatives. Some examples of how process or tool improvements can enable analytics resulting in benefits to the utility or customer are shown below.
Figure 2. Examples of value from process or tool improvements
The proliferation of data is often viewed as one of the greatest opportunities for utilities, but that opportunity also presents one of the greatest challenges: harnessing data to create value (e.g., greater return on capital, fewer/shorter outages, fewer safety incidents). The foundation for asset analytics starts with the asset analytics strategy, which should address both use case prioritization (including the business case for, and roadmap to achieve, each use case) and the processes and tools needed to enable analytics value across use cases.
Not all utility assets are created equal or warrant the same amount of attention. Although this paper discusses common pain points and improvements that can be applied to a variety of assets, a one-size fits all approach to utility asset management is not feasible. Across different asset classes, variations in maintenance requirements, monitoring capabilities, failure modes and regulatory directives may drive differences in strategy. Also, within asset classes, the geography, grid location, environmental conditions, vendor, age, material, and other factors will drive differences in probability of failure and impact of failure. Therefore, utilities should tailor approaches and prioritize asset classes where the most business value can be realized.
Determining which assets to focus on is typically a function of the utility’s ability to collect consistent and relevant asset information, while also considering the criticality of the asset to vital utility operations. Even for assets of relatively high criticality, the level of effort to implement predictive methods (including data collection, integration, and analysis) can sometimes outweigh the benefits. For example, distribution transformers, substation assets, and distribution automation assets are typically considered to have high potential for predictive analytics because real-time information and data feeds from AMI/SCADA can be leveraged. Predictive analytics can also be explored for other assets such as poles, conductors, and non-distribution automation (DA) switchgear, but these typically require more effort to collect and manage data.
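The prioritization logic above is often formalized as a risk score combining probability of failure and consequence of failure. The minimal sketch below shows one way to rank asset classes that way; the asset names, probabilities, and consequence weights are illustrative assumptions, not utility data, and real programs typically weight safety, reliability, and cost consequences separately.

```python
# Illustrative risk-based prioritization: rank asset classes by
# probability of failure x consequence of failure (both values hypothetical).
assets = [
    # (asset class, estimated annual probability of failure, consequence score 1-10)
    ("distribution transformer", 0.08, 7),
    ("substation breaker",       0.03, 9),
    ("wood pole",                0.05, 4),
    ("DA switchgear",            0.02, 6),
]

def risk_score(prob: float, consequence: int) -> float:
    """Simple multiplicative risk score (probability x consequence)."""
    return prob * consequence

# Highest-risk asset classes first: candidates for predictive analytics effort.
ranked = sorted(assets, key=lambda a: risk_score(a[1], a[2]), reverse=True)
for name, prob, cons in ranked:
    print(f"{name}: risk={risk_score(prob, cons):.2f}")
```

Even a crude score like this forces the conversation the paper recommends: which asset classes justify the data collection and integration effort, and which do not.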
After the value from analytics for an asset class is identified, the utility can define what characterizes good and poor health for that asset. Then, the utility can work with their subject matter experts who work on that asset to inventory the available data associated with that asset class – e.g., EAM data, real-time data, maintenance inspection reports – that can be leveraged to build a failure model. This inventory of data can help to estimate the level of effort to implement a specific use case.
Business case analysis is an effective way to align analytics use case efforts and/or determine whether it makes sense to build or buy a solution. The remainder of this paper focuses on the processes and tools to enable predictive analytics.
The ability to collect and organize data is often the primary obstacle to making sensible use of analytics for asset management applications. These data challenges are often systemic, stemming from technological issues such as improper analytics software configuration or from underlying data quality problems. Any combination of these factors can quickly diminish a utility’s ability to confidently use analytics for asset management, and the utility may quickly revert to the traditional break-fix approach because overreliance on flawed analytics could lead to catastrophic failures. Therefore, it is critical that utility operators identify and proactively address the process- or tool-related obstacles most often associated with utility analytics:
- Data Acquisition
- Data Architecture
- Data Quality & Management
- Organizational Alignment & Change Management
When we talk about data acquisition, we are concerned with what data is needed, where it is sourced, and how it is acquired. Some asset classes are well suited to support real-time data acquisition, either through SCADA or AMI technology investments, which can be a valuable input to asset analytics. For asset classes where sensor/monitoring technology is not available, utilities must look to acquire data through other means. Paper inspection forms lead to inconsistent completion, and some records are never used because they are lost or never integrated into digital storage.
In these cases, many utilities are arming field crews with mobile tablets rather than paper forms to acquire more structured and useful data. Electronic forms can provide more interactive and informative inspection workflows, standardize the data collected, and streamline the data integration process downstream. In the longer term, new technologies such as Augmented Reality (AR) and drones provide additional potential for improved effectiveness and automation of inspection workflows through digital ingestion, organization, and translation of data.
Another best practice is to define a current state and future state process for getting data to the right destination for analysis. Clearly defining the process helps to identify related or supporting datasets and helps to ensure that the data gets to the right end users. This process should document the journey of the data from the field to various source systems and finally to a data lake or data warehouse where it can be used for analysis. A data acquisition process flow can help answer the following about the type of data collected:
- Is the right data being collected during install?
- Is the right data being collected during inspection/maintenance touch points?
- Is the right data being collected after failure?
- Is data being stored in the right place, and is it accessible?
- How are corrective actions or replacement decisions implemented, and how does that data get incorporated as an input to the process?
- Are there different departments involved in installing and/or maintaining the asset? Do these different departments use the same work order tools?
The process flow below provides a generic example of such a process.
Figure 3. Example high-level asset data collection process flow
In practice, many utilities find that it is still difficult with digital forms to capture the right information, as the focus in the field is on safety and restoration of service, not data collection. However, data collection can be streamlined further by using analytics to proactively fill out the key data for the inspector/damage assessor to reduce the amount of interaction required to collect the data. In the long run, imagery and information collected from satellites and other technologies will further reduce the workload to maintain good data and improve the insights into the system required for predictive analytics.
Issues with data quality and data completeness are often a roadblock to starting analytics. While systems may be in place to collect various asset data, the data is out of date, incorrect or incomplete and governance/quality oversight of data is not in place.
Some best practices for data quality and management are:
- Data Architecture: Identify the right systems and integrations to correlate asset base (EAM) to maintenance data (EAM/WAMS), outage data (OMS), and telemetry data (SCADA/AMI) to ensure the data is provided at the right interval in the right format for consumption.
- Data Retention: Revisit data retention policies to consider how much data is necessary to train models. Additionally, it is important to retain records on retired or failed assets – historical data on how and when assets failed can help train models to predict future failures.
- Data Quality: Define measures by which data quality can be quantified, and prioritize quality issues. Examples of quality measures are completeness (e.g., the share of records populated across a range of dates) and validity (e.g., whether data conforms to the expected format or standards for that data type).
- Shared Business and IT ownership of data governance: data needs and process will change over time with new system implementations and evolving priorities from both IT and Business, so both parties should influence governance and share governance responsibility.
- Data Cataloging: Perform data cataloguing to have a reference of all the relevant sources of data, how the data is being ingested, and who the end users of the data are.
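The completeness and validity measures listed above can be computed directly once records are in a common store. The sketch below shows one minimal way to quantify both for hypothetical inspection records; the field names, ID scheme, and date format are invented for illustration.

```python
# Sketch of the completeness and validity quality measures, applied to
# hypothetical inspection records (field names and values are invented).
import re

records = [
    {"asset_id": "P-001", "inspect_date": "2020-03-14", "condition": "good"},
    {"asset_id": "P-002", "inspect_date": "",           "condition": "poor"},
    {"asset_id": "P-003", "inspect_date": "14/03/2020", "condition": "good"},
]

def completeness(records, field):
    """Share of records where the field is populated at all."""
    filled = sum(1 for r in records if r.get(field))
    return filled / len(records)

def validity(records, field, pattern):
    """Share of populated values that match the expected format."""
    populated = [r[field] for r in records if r.get(field)]
    valid = sum(1 for v in populated if re.fullmatch(pattern, v))
    return valid / len(populated) if populated else 0.0

print(completeness(records, "inspect_date"))                     # 2 of 3 populated
print(validity(records, "inspect_date", r"\d{4}-\d{2}-\d{2}"))   # 1 of 2 well-formed
```

Tracking these scores per source system over time gives the shared Business/IT governance group described above a concrete, quantified backlog of quality issues to prioritize.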
The aforementioned practices can fall flat without the proper organizational processes to ensure alignment between field, standards, asset strategy, and analytics teams on the value of data and the reason for collecting it.
Alignment between the field and back office is critical. Field crews need an understanding of the right data to collect during installation, inspection/maintenance, and after failure and an understanding of the importance of complete and consistently collected data to drive data quality. Likewise, analytics teams need an understanding of the procedures and processes to identify opportunities for better data and the effort required to get that data.
A fundamental characteristic of condition-based or analytics-driven asset management is flexibility as new information is learned. Some examples of ways utility leadership can foster flexibility and continuous improvement in this space include:
- Create a mechanism to keep teams aligned as programs mature and processes improve
- Define a corrective action strategy to answer the question: how will the predictive analytics inform action, budgets, processes, and decision-making to replace or perform maintenance on assets prior to failure?
- Formalize alignment training on processes, technology, and the “why” behind the data initiative
- Make data science and data analysis training available to business users to foster an innovative environment for employees and encourage building analytics skillsets
- Invest in data visualization; sometimes the data needed to make even the most critical decisions exists, but it is not visualized in a way that makes sense of it all (e.g., the Challenger disaster)
Lastly, many utilities struggle to build the internal capabilities required to develop in-house analytics. Competition for the data science talent that performs the fingers-to-keys analytics is well known, but an organization can be deliberate about assigning and training supporting roles to prevent data scientists from being bogged down with ad-hoc reporting requests, data wrangling, or data cleanup. Many organizations find success in building their in-house analytics teams by defining clear roles and by training business users on self-service reporting and visualization tools such as Power BI or Tableau.
Moving toward condition-based and predictive maintenance can help to reduce O&M spend and allow for more strategic capital spending. Asset analytics can help make the shift from run-to-failure to predictive and condition-based maintenance, and building a strong foundation with the proper processes and tools will help remove roadblocks and enable utilities to achieve the significant long-term savings and benefits associated with asset analytics. To successfully implement asset analytics, utilities must first focus on getting the data right and implement the processes, technologies, and governance required to keep it that way. Once this foundation has been built, utilities can begin to build out the most valuable predictive analytics use cases and move from reactive replacement and time-based maintenance to informed, proactive replacement and condition-based maintenance.