Dirty Little Secret -- Smart Devices are Consumer Electronics
- Dec 14, 2010 12:00 pm GMT (updated Jun 8, 2015 10:27 pm GMT)
How, then, do we know what is, and is not, sufficiently durable? At the moment, we don't know, and we need to know. The DOE ARRA funding requests included reporting requirements that might get at this information, but the intention was to monitor the effectiveness of the funding, not the reliability of the emerging technology. NIST and the GWAC have worked long and hard on standards for interoperability -- but those standards do not address how to measure reliability.
We at TekTrakker know from our work with large IT organizations that a great deal of data suitable for measuring reliability is already captured in the ordinary course of business. This data can be used immediately to measure reliability internally, and the same data can then be used for comparison with peers. The essential task is to start measuring.
Utilities know how many meters are installed because meter counts drive billing. Workorder systems connect problems to products. The techniques for using this data to measure reliability are simple and within the means of any organization. It just takes a little imagination, and a little discipline, to adopt a culture that allows reliability to be measured and discussed.
What is Reliability?
Reliability is the absence of failure. Engineers and manufacturers routinely measure reliability using Mean Time Between Failure (MTBF). This is the most basic, simplest, and most powerful of all measurements of reliability. Nothing more elaborate is needed. All utilities need to do is keep track of how many of each new smart device they have installed (by model), and then compare that quantity to the repairs or replacements made (by model). With the element of time added, these same counts yield both the MTBF of the population and its failure rate.
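As a rough sketch, the bookkeeping really is this simple. The model names and counts below are invented for illustration; the only assumption is that every installed unit has been in service for the full period:

```python
# Hypothetical installed base and failure counts, tracked by model.
installed = {"MeterX-100": 50_000, "MeterY-200": 30_000}   # units in service
failures  = {"MeterX-100": 2_500,  "MeterY-200": 300}      # failures this period
period_hours = 8_760  # one year of continuous service

for model in installed:
    unit_hours = installed[model] * period_hours          # cumulative operating time
    mtbf_hours = unit_hours / failures[model]             # MTBF of the population
    annual_rate = failures[model] / installed[model]      # failure rate per annum
    print(f"{model}: MTBF ~ {mtbf_hours:,.0f} h, "
          f"annual failure rate ~ {annual_rate:.1%}")
```

With these made-up numbers, MeterX-100 shows a 5% annual failure rate, while MeterY-200 shows 1% -- exactly the kind of model-level comparison the asset and workorder data already supports.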
In the recent past, utilities didn't measure reliability of specific products because it didn't matter. Lightning arrestors are a great example. Hunks of metal built to specifications are bought, stored, and installed without needing to know the vendor, the date of purchase, or the remaining warranty on the device. If one needs to be replaced -- the work crew grabs another one out of the pile and throws the broken one away.
Device Reliability Matters
Enter the electronic era. Smart meters are already posting failure rates, anecdotally, in the 5% per annum range. This is ten times the failure rate of the traditional meter, and the lifecycle of the product has barely begun. Each additional device connecting the meter to the mothership also has a failure rate. The stability of the grid can only be as good as the weakest device -- yet we don't know which devices are weak. Selecting products for reliability is now essential, but the tools for making associations between products and reliability are entirely missing.
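The "weakest device" point compounds quickly: when a reading must traverse several devices to reach the head end, the path is only as reliable as the product of its parts. The per-device failure probabilities below are hypothetical, chosen only to illustrate the multiplication:

```python
# Hypothetical annual failure probabilities for each device on one data path.
path = {"meter": 0.05, "collector": 0.03, "backhaul_radio": 0.02}

# A reading gets through only if every device on the path survives,
# so the path's survival probability is the product of each device's.
survival = 1.0
for device, p_fail in path.items():
    survival *= (1 - p_fail)

print(f"chance the whole path works all year: {survival:.1%}")
# 0.95 * 0.97 * 0.98 -- roughly 90%, worse than any single device alone
```

Every device added to the chain drags the whole path down, which is why no individual device's failure rate can be ignored.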
Repairs -- Burgeoning Costs
The sheer volume of repairs needed for electronic (smart) devices is going to require a different repair strategy. Even when architected to be networked with alternate data paths, if any device in the grid fails, then some necessary function ceases and a repair or replacement is needed. The explosion of new devices, each with its own dismal failure rate, is going to be a major issue in controlling costs and executing the business case as presented to regulators. There is only one logical approach to controlling repair costs -- don't make as many repairs. Organizations therefore must select the most reliable products from the available options.
Use MTBF to Demand Reliability
Once reliability is measured, organizations can press for continuous improvements in reliability by using the power of the purchase order. Products with high failure rates can be excised from the system and replaced by those with lower failure rates. Products can be compared on the basis of reliability, and the judgment made if the premium price of a highly durable product is a better selection than a lower cost competitor. Value and total cost of ownership can be brought into the procurement process.
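The premium-versus-budget judgment becomes straightforward arithmetic once failure rates are known. The prices, rates, and horizon below are illustrative assumptions, not data from any vendor:

```python
# Hypothetical total-cost-of-ownership comparison over a 10-year horizon.
def tco(unit_price, annual_failure_rate, repair_cost, years=10):
    """Purchase price plus the expected repair spend over the horizon."""
    return unit_price + annual_failure_rate * years * repair_cost

# A pricier but durable unit vs. a cheaper unit with a high failure rate.
premium = tco(unit_price=180, annual_failure_rate=0.005, repair_cost=250)
budget  = tco(unit_price=120, annual_failure_rate=0.05,  repair_cost=250)
print(f"premium: ${premium:.2f}  budget: ${budget:.2f}")
```

With these assumed numbers the premium unit costs less over its life despite the higher sticker price -- which is precisely the value argument the purchase order can enforce.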
Setting off a reliability war between vendors will be a net benefit to the entire smart grid industry. Every utility should be measuring itself for MTBF at the model level and asking vendors about the failure rate of their equipment -- how they know it, and when they knew it. Our experience has been that vendors know far less about their own equipment than they would like. Until a lot of equipment is installed in the field, they know only what they can learn from bench tests. Pilot deployments are the test bed. Any utility with equipment under pilot is just as knowledgeable as any vendor, provided it keeps track. For this reason we suggest, rather vehemently, that utilities should not outsource this information even if they outsource the labor.
Sharing of pilot data between peers is the most effective, efficient, and timely method for the industry to measure the MTBF of products. Utilities will know failure rates in real time, and will have no hidden agenda to mask them. Utilities of all sizes and forms of ownership can learn from each other. The broader the range of reporting, the better for the industry.
Share and Compare
When it comes to sharing failure data, utilities aren't doing it systematically. No one should fear measuring failure rate. Product failure, when measured correctly, is not under the control of the end user. The failure rate of the product as it was designed and manufactured is the foundation upon which all other variables rest.
Peer organizations are an ideal platform for sharing. Surveys have been the traditional method of sharing at the peer level, yet much more could be done if the work was systematic and not unduly burdensome. We have seen several groups attempt to build databases, but the reality of organizing data and getting widespread cooperation has thwarted most attempts.
Get Ready to Share
Just because it has been hard to do -- doesn't mean it cannot be done. Every utility should start by making sure it can associate problems with models, and be diligent about recording all hardware failures so that the resulting databases can be culled. If workorder systems capture failures, make sure the data is not wholly trapped in free-form text. Add a few summary fields with drop-down options so that actions can be categorized. Keep the information historically so that management can review changes in the environment over time. Conduct a baseline analysis of the asset database. Make sure that as new products are added to the inventory, model numbers are kept so that problems can later be associated with the model.
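The model-level association the steps above call for can be as simple as a grouped count over the workorder table. The field names and records here are invented; the essential point is a structured model-number field and a categorized action field sitting alongside the free-form notes:

```python
from collections import Counter

# Hypothetical workorder records. "model_number" and "action" are the
# structured summary fields; "notes" is the free-form text they rescue
# the data from.
workorders = [
    {"model_number": "MeterX-100", "action": "replace", "notes": "no comms"},
    {"model_number": "MeterX-100", "action": "replace", "notes": "dead display"},
    {"model_number": "MeterY-200", "action": "repair",  "notes": "loose lug"},
]

# Count failures by model -- the raw input for MTBF and failure rate.
failures_by_model = Counter(w["model_number"] for w in workorders)
for model, count in failures_by_model.most_common():
    print(f"{model}: {count} failure(s)")
```

Once every failure record carries a model number, this count divided into the installed base from the asset database yields the failure rate -- ready for internal tracking and, later, for comparison with peers.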
Every one of the suggestions above will pay immediate dividends in allowing self-analysis and at the same time set the stage for sharing data with peers.