How to do a Certification Program the Right Way
- Nov 24, 2014 12:00 pm GMT
Let's begin under the premise that you (end-user, utility, aggregator, etc.) want to acquire technology from different vendors, specify the communications interface between products and have all such products install and work together easily and quickly (similar to adding a new printer to your PC - sometimes it's called "plug and play").
The traditional electric utility industry has treated information technology as one-off, customized systems, though it is now adopting best practices learned in other areas of technology. The traditional approach is time-consuming and generally relies on proprietary or custom-built solutions that are expensive to develop, maintain, modify, and use.
Aiming for off-the-shelf, "plug and play" interoperability of systems and components is a goal of smart grid standards, with tremendous payoffs: reduced overall costs for all parties, faster deployments, minimized stranded assets, reduced maintenance, and higher quality results. Getting to this point takes a new set of best practices and behaviors, and at times can be a challenging learning process. Certified products are the starting point for achieving the goal and help navigate the sea of smart grid technologies in the global market. Certification is a key indicator that product vendors are committed to delivering interoperable products.
Achieving Interoperability (Plug and Play): What does it take?
Fundamental to achieving the goals of the industry is an understanding of the challenges associated with bringing a standard from paper to practice. Most important is the recognition that international adoption of a standard does not mean every vendor will implement it the same way. A simple example illustrates this challenge:
In OpenADR 2.0, a Demand Response (DR) event message can have an "importance value" ranging from 0 to 3 (0, 1, 2, or 3). However, this value could appear as a whole number (0, 1, 2, 3) or as a decimal number (0.0, 1.0, 2.0, 3.0) in the messages exchanged between devices. Although both these numeric representations are valid for a floating point value (the schema requirement), if one vendor's implementation is looking for "1" in a received payload and another is looking for "1.0" when they exchange an event message with the importance value, one of the receiving vendor's systems may not understand what is being asked of it. This kind of problem is typically uncovered in the field when two purportedly standard implementations try communicating with each other.
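The mismatch can be sketched in a few lines. The receiver logic below is a hypothetical simplification (real OpenADR payloads are XML, and the exact field handling varies by implementation); it only illustrates why literal string matching fails where numeric parsing succeeds:

```python
# Hypothetical sketch of the "1" vs "1.0" mismatch described above.
# Both wire forms are valid lexical representations of a float per the schema.

def brittle_receiver(value_text: str) -> bool:
    """Vendor A: matches the exact literal string it expects."""
    return value_text == "1"

def robust_receiver(value_text: str) -> bool:
    """Vendor B: parses the text as a number before comparing."""
    return float(value_text) == 1.0

for wire_value in ("1", "1.0"):
    print(wire_value, brittle_receiver(wire_value), robust_receiver(wire_value))
# The brittle receiver rejects "1.0" even though both messages are schema-valid.
```

Parsing to the schema type before comparing is the obvious fix, but nothing forces a vendor to do it; that is precisely the gap a conformance test closes.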
After vendor and customer staff investigate the products and the parties negotiate, one vendor will make a change. But what happens when another customer purchases two other "standard" systems with different interpretations of the specification? The whole discovery process must be repeated, and the result is two customers with essentially unique solutions instead of "plug and play" standardized solutions.
Multiply this scenario by hundreds of opportunities in any standard for differing interpretations and it is apparent how interoperability problems happen even with adopted standards.
There are a number of industry practices that have evolved over the years for achieving "plug and play" interoperability. The key practices may be broken down into two primary activities: 1) conformance testing and 2) interoperability testing. Conformance testing simply validates that a vendor has implemented the standard as written. This is typically done through a conformance test program, referred to as a "certification" program when conducted by a formal industry alliance such as the Wi-Fi Alliance, Bluetooth SIG, or OpenADR Alliance.
But in the real world of deployments, it is not unusual for two conformant products not to interoperate. A conformance test program is a necessary first step that eliminates many of the headaches that would otherwise occur in getting products from two vendors to work well together. It cannot, however, substitute for actual interoperability testing of the systems and devices expected to function together in a specific scenario. Why? Even with every capability and all options defined in a standard tested for conformance, it is still possible that some sequence of interaction between two systems turns up an interoperability issue.
Achieving conformance to a standard requires well-written standards that include precise "conformance" statements in the standard itself, plus interoperability testing as a complement to conformance testing. Well-written standards contain statements specifying what a product must do to be "conformant" to the standard. This is usually a well-considered subset of all possible implementations of the standard and serves as the basis for a Protocol Implementation Conformance Statement (PICS). While gaining industry consensus on what it means to be "conformant" to a standard can be challenging, it is crucial to the creation and operation of a quality conformance certification program.
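As a rough illustration of how a PICS can function as a machine-checkable tick list, the sketch below uses invented item IDs and requirement descriptions (not actual OpenADR PICS entries) to flag mandatory items a vendor has not claimed:

```python
# Hypothetical PICS tick-list check. Item IDs, descriptions, and the
# vendor's claims are invented for illustration only.

PICS = [
    {"id": "P-001", "requirement": "Respond to DR event messages", "mandatory": True},
    {"id": "P-002", "requirement": "Support opt-in/opt-out replies", "mandatory": True},
    {"id": "P-003", "requirement": "Report usage telemetry",        "mandatory": False},
]

# What the vendor asserts it has implemented.
vendor_claims = {"P-001", "P-002"}

# Any mandatory item not claimed is a gap that blocks certification.
missing = [item["id"] for item in PICS
           if item["mandatory"] and item["id"] not in vendor_claims]
print("Missing mandatory items:", missing)  # → Missing mandatory items: []
```

In a real program, each claimed item also maps to one or more certification test cases, which is the other role of the PICS described above.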
It is also a well-understood best practice that achieving "plug and play" interoperability requires bringing together real products from real vendors in complex test scenarios to ensure they work well together. Industry alliances do so in what are termed "plugfests," "interops," or "test events," which serve to surface the interoperability issues not discovered and addressed in formal conformance certification programs. More mature certification programs even include interoperability testing as part of certification.
At the end of the day, unless different vendor products are tested together under a rigorous set of scenarios, plug and play in practice may take some work. If there is a rigorous conformance certification program in place and you start with certified products then 80% or more of the necessary interoperability work may well be accomplished!
The Role of Certification Programs
As discussed, a critical part of achieving plug and play products is to develop and execute an industry conformance certification test program. The greatest value of such a program is that the industry (vendors and key customers) must agree ahead of time on the precise interpretations of the standard and how the implementations are tested to prove that the implementations are done correctly. This is what a "certification test" program does.
Readers need only think of the USB, Wi-Fi, and Bluetooth product logos to understand the benefits of mature certification programs that deliver plug and play functionality to consumers. These programs took years of industry work to achieve the precise definitions of conformance (and interoperability) that have led to the ease of interoperation we are used to.
In the smart grid area, a number of programs are in the process of achieving maturity but will only do so when the industry understands how to achieve plug and play and requires it from the alliances implementing certification programs.
From a practical standpoint, there are far more possible tests than it makes sense to run (from a time and cost perspective). Therefore, a certification test program follows the 80/20 rule: the 20% of functionality that is most likely to be implemented gets the most focus from a conformance testing perspective. This is not to say that the other 80% doesn't get tested at all, just less rigorously. And some aspects of the standard may be out of scope for certification testing entirely. For instance, OpenADR is a message exchange protocol. While the standard does define how events can be targeted at specific devices (such as a pool pump), verifying the actual functional behavior (did the pump shed load?) is outside the scope of conformance testing. As long as the message was exchanged, the protocol did its job.
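That scope boundary can be sketched as follows. The event fields and the range check are hypothetical simplifications, not the actual OpenADR schema; the point is only that a conformance check validates the message itself, not the device's physical response:

```python
# Sketch: a conformance check validates the message, not the device behavior.
# Field names and the 0-3 importance range are illustrative simplifications.

def conforms(event: dict) -> bool:
    """Pass if required fields are present and importance is in range 0-3."""
    required = {"event_id", "start_time", "importance"}
    if not required <= event.keys():
        return False
    return 0 <= float(event["importance"]) <= 3

event = {"event_id": "evt-42", "start_time": "2014-11-24T12:00Z", "importance": "2.0"}
print(conforms(event))  # → True
# Whether the targeted pool pump actually sheds load is out of scope here:
# once the message is well-formed and exchanged, the protocol did its job.
```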
Certifications that are well designed and executed are extremely valuable for the following reasons:
- They serve as a starting point for implementing a system that requires multiple vendors' products - specifying certified products greatly reduces integration issues, time, and costs;
- Certified products significantly reduce the effort to implement systems in practice which incorporate multiple certified products;
- Certified products indicate that a standard is likely to be around for a significant period of time, reducing the risks and costs of stranded assets based on obsolete technologies;
- Reduced costs of integrated solutions have long term dividends;
- When a good certification program is in place, it is relatively easy to replace one vendor with another, enabling price competition as well as reduced impact of vendor failure.
To be clear, not all certification programs are created equal. The key attributes that indicate the maturity and quality of a certification program include:
- They are based upon a detailed agreement between industry stakeholders (the PICS document) as to how a standard must be implemented to achieve interoperability functions and features;
- They provide a rigorous and independent verification that the claimed conformance does exist. The independent testing is provided by highly qualified 3rd party test labs;
- Certified products are typically re-certified when major updates are made, ensuring that the updates are also conformant and certified;
- The test tools for certification are of commercial quality and are available as pre-certification test tools for vendors planning to certify products, and as acceptance tests for users of the technology.
The following case study of the OpenADR Alliance represents one of the leading programs as judged against the Smart Grid Interoperability Panel's "Interoperability Process Reference Manual" (SGIP IPRM). To gain a deeper understanding of what makes a certification program valuable, the materials in the IPRM provide an excellent roadmap to a mature certification program.
A Certification Program Case Study
We know that a good certification program will improve "plug and play" in the sense that everyone is obeying the rules, but there are other factors that influence the achievement of this goal. One critical factor is how much of the functional behavior in a system is outside the scope of the standard itself. In the case of OpenADR, some aspects of how the standard is used are deployment specific, and those usage models are outside the scope of the OpenADR Profile Specification.
The OpenADR Alliance is working on methods to reduce sources of interoperability issues that are outside the scope of the standard, to complement its certification program. The certification program model incorporates best practices from other industries and is evolving into an outstanding program. The core elements of the program include:
1. A detailed Protocol Implementation Conformance Statement (PICS) that clearly specifies what being conformant to the OpenADR 2.0 standard requires. A PICS document is an articulation of the testable requirements for the standard and serves two purposes: it guides the development of specific certification test cases, and it acts as a tick list for vendors to assert that they have implemented all the requirements.
2. Detailed test specifications, which guided the development of the certification test harness.
3. Separate certification test harness and certification lab vendors. This ensures the existence of one official certification test harness (implementing the test specification) and enables the OpenADR Alliance to add multiple labs while ensuring consistent certification by all labs.
4. Ownership of the test harness by OpenADR Alliance. This asset provides the OpenADR Alliance control over a key aspect of the program.
5. Availability of the test harness to vendors and others as an engineering test tool as well as a pre-certification test tool. The cost of the test harness is modest and typically pays for itself in reduced certification costs. In the first two years of offering the test harness, over 60 companies have acquired it, and almost every company seeking certification of its products uses it for pre-certification.
Although we have yet to implement it in OpenADR, most certification programs include some kind of "golden" device to test against for device-to-device interoperation validation. The OpenADR Alliance does conduct periodic test events to accomplish the interoperability aspects of "plug and play" best practices.
There are other aspects to making an industry alliance a success, but the key attributes of the test and certification program are driving rapid acceptance of "Certified OpenADR Compliant" as a quality certification that greatly increases the interoperability of products developed based on the OpenADR 2.0 standard.
- See Smart Grid Interoperability Panel Test and Certifications Committee, "Interoperability Process Reference Manual, Version 2.0," January 2012.
- Some certification programs allow each lab to develop its own certification test tools, which leads to potentially different certifications and interoperability problems.
- For more reading on this topic, see an excellent article published by the Smart Grid Interoperability Panel titled "White Paper on Value of Smart Grid Testing," September 2012, http://members.sgip.org/apps/group_public/document.php?document_id=982&wg_abbrev=sgip-sgtcc