How’s Energy Efficiency Working? – The need for faster feedback
- May 20, 2019 12:35 am GMT
If a restaurant had to wait six months or more to find out whether its customers enjoyed the food, it would most likely be out of business. Customer and market feedback is critical for business success, no matter what kind of business you run. Unfortunately, most business energy efficiency programs don’t have robust feedback systems for their customers or the marketplace. The reasons are numerous and vary from state to state. At a high level, states with energy efficiency programs organize their key functions into three areas:
- Legislative / Policy: This area sets the agenda for the marketplace and is usually owned by the public utility commission, which receives input from key stakeholders. Its key functions include setting energy saving targets, allocating budgets to accomplish those targets, and, depending on the state, deciding how the programs will be implemented.
- Implementation: This is the group that has to interpret the intent of legislative policy and develop program approaches that will work within the local marketplace. People who work in program implementation often have to “bridge the gap” between legislative / aspirational theory and the realities of the marketplace, i.e. the contractors and customers.
- Evaluation: Program evaluation has developed into its own industry. The main mission of program evaluation is to assess whether the program implementer met its savings target, determine if the program design influenced the marketplace (net to gross), and uncover process improvement recommendations within implementation that could enhance program delivery.
I believe energy efficiency works best when there is communication and collaboration between all three groups. In some states, there is very good collaboration; in others, not so much. Some states have developed strict policies to govern the relationship between implementation and evaluation. While I understand why these policies were established (regulators not wanting program implementation staff to influence the program evaluation process), they have some negative implications. Here are just a few:
Program evaluators not understanding how a program works: If there isn’t ongoing interaction between implementation and evaluation, it’s unlikely that evaluation staff will get a complete picture of how the program is operating throughout the year. Some evaluation teams communicate with the implementation group only through brief interviews with program staff and a sampling of customer participants.
Customer satisfaction with the program is not fully captured: If the only mechanism for surveying participant satisfaction is the annual program evaluation survey, there’s a good chance the program implementer is missing something. Ideally, program participants should have the opportunity to provide feedback on each interaction, and that feedback needs to drive changes in the program process. Without it, inefficient practices and dysfunctional processes will remain in place.
Program marketing efforts not fully analyzed: Marketing is a key part of operating an energy efficiency program. Measuring the impact of any marketing campaign is difficult, but with some upfront planning and backend analysis, it is achievable. Program marketing staff have a natural bias toward claiming that every marketing campaign was successful. While it may be true that all marketing has some level of impact on customers, not all campaigns are equally successful. Being able to weigh the impact of marketing against its cost is critical to allocating program funding efficiently.
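One simple way to weigh marketing impact against cost is cost per enrolled participant. The sketch below illustrates the idea with entirely hypothetical campaign names, budgets, and participant counts (nothing here comes from a real program):

```python
# Hypothetical sketch: comparing marketing campaigns by cost per participant.
# Campaign names, costs, and participant counts are invented for illustration.

campaigns = {
    "direct_mail": {"cost": 40_000, "participants": 200},
    "email": {"cost": 5_000, "participants": 90},
    "contractor_coop": {"cost": 25_000, "participants": 310},
}

def cost_per_participant(stats):
    """Return dollars spent per enrolled participant, or None if no one enrolled."""
    if stats["participants"] == 0:
        return None
    return stats["cost"] / stats["participants"]

# Rank campaigns from most to least cost-effective.
ranked = sorted(campaigns, key=lambda name: cost_per_participant(campaigns[name]))
for name in ranked:
    print(f"{name}: ${cost_per_participant(campaigns[name]):.2f} per participant")
```

A real analysis would also need attribution (which participants each campaign actually reached), but even this coarse metric makes it harder to claim that every campaign succeeded equally.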
So what’s the solution? I believe the best way to improve is to obtain feedback about program actions as close to real time as possible, and energy efficiency professionals working in evaluation could play a major role. Within DNV GL, we have begun to use our evaluation staff to analyze our implementation efforts. They have been instrumental in helping program implementation build a feedback loop that provides useful information on the success (or failure) of key initiatives. Through these efforts we have been able to determine:
- Which market segments are performing the best and which are performing the worst
- How participation rates vary across a market
- Which marketing efforts have yielded the biggest impact
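As a minimal sketch of the kind of analysis behind the list above, the snippet below aggregates participant records into participation rates by market segment. The segment names and records are invented placeholders; a real feedback loop would pull this data from the program's tracking system:

```python
# Hypothetical sketch of one feedback-loop metric: participation rate by
# market segment. Segments and records are invented for illustration.
from collections import defaultdict

# (segment, participated) records, e.g. drawn from a customer tracking database.
records = [
    ("small_retail", True), ("small_retail", False), ("small_retail", True),
    ("restaurants", True), ("restaurants", True),
    ("offices", False), ("offices", False), ("offices", True),
]

def participation_by_segment(records):
    """Map each segment to its share of eligible customers that participated."""
    totals = defaultdict(lambda: {"eligible": 0, "participated": 0})
    for segment, participated in records:
        totals[segment]["eligible"] += 1
        if participated:
            totals[segment]["participated"] += 1
    return {
        seg: counts["participated"] / counts["eligible"]
        for seg, counts in totals.items()
    }

rates = participation_by_segment(records)
# Highest-participating segments first, so implementers see leaders and laggards.
for segment, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{segment}: {rate:.0%} participation")
```

Run regularly (rather than once a year at evaluation time), even a summary this simple shows which segments are performing best and worst while the program can still adapt.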
By having a feedback loop in place, program implementation is able to adapt and evolve while the program is still running. Program marketing becomes more efficient as well as more effective, and as a result, customers are better served.