Knowing More Sooner: Making Real-Time Evaluation Work
Posted to AESP, Dec 12, 2018
This item is part of the AESP Special Issue, 10/2018.
By Linda Dethman, Paul Schwarz and Courtney Henderson
In response to our industry’s goals for greater (yet cost-effective) energy savings, stronger climate mitigation, and more engaged customers, energy efficiency programs are changing. Recently, two types of programs have been on the upswing in an attempt to meet these goals:
Pilot programs that test emerging technologies or innovative behavioral mechanisms to influence energy use. Sponsors hope these shorter-term initiatives (typically one to three years) lead to scalable, replicable, and cost-effective programs. Pilots benefit from working with evaluators upfront to conduct market research and to make sure the pilot can be evaluated. Once launched, pilots benefit from the rapid feedback evaluators can provide and their insights about a pilot’s full-scale potential, often before all results are in.
Market Transformation programs that intend to change, over a longer time frame (typically five to 20 years[1]), “the structure or functioning of a market or the behavior of participants in a market.”[2] Sponsors hope these long-term investments result in widespread adoption of higher efficiency equipment and habits. For Market Transformation programs, evaluators conduct upfront and ongoing market research. Once underway, these programs need regular and cumulative assessments to measure progress and to convince funders to stay the course.
To ensure Pilot and Market Transformation programs succeed, the industry is increasingly looking to Real-Time Evaluation (RTE) models that follow a "test, measure, adjust, then repeat" strategy. At its core, RTE is a continuous improvement approach whose sole focus is to improve programs, products, and services. Evaluators supply ongoing feedback and recommendations during implementation so that implementers can make needed changes. Both evaluators and implementers seek the benefits of these approaches and want to better define best practices for them. This article is intended to open a conversation about how these approaches can be more widely used in our industry.
A Quick Tour of Real-Time Evaluation
The concepts of embedded evaluation, program optimization, and formative evaluation are intertwined with real-time evaluation. While all RTE approaches focus on timely research and insights, embedded evaluation and program optimization approaches involve evaluators during all phases of a program’s life. During the design phase, evaluators work with program designers and implementers to ensure the program is evaluable and that relevant data will be collected. They then continue to work with implementers to respond to evolving program needs, provide rapid research, feedback, and recommendations to meet those needs, and suggest additional evaluation activities. Formative evaluations tend to involve evaluators early in the implementation process. Evaluators develop and follow an RTE plan that, when implemented, provides early feedback that influences program evolution. All RTE approaches can include impact and process evaluation components, and all can provide progress and outcome reporting.
We should note at this point that for RTE to work, evaluators and implementers need to agree to be nimble and to recognize that research conditions may be challenging (for instance, dealing with small sample sizes and very tight turnaround times). In addition, while evaluators and program sponsors need to form trusted relationships to ensure they can openly discuss problems and solutions, their roles are very different: evaluators provide essential data and analysis to implementers; implementers make decisions and act on them.
Choosing Between Real-Time Evaluation and Traditional Program Evaluation Approaches
On the surface, our suggestion for a greater use of RTE approaches may seem obvious. Who can argue that we need to have more timely and helpful evaluations, or that we should ensure that programs are actually evaluable? However, energy efficiency programs often employ more traditional summative evaluation approaches that distance designers and implementers from evaluators, involve evaluators later (often after a program has been underway for a year or more, or even concluded), take longer, and focus less on improving programs. Still, there are good reasons to use either of these approaches.
In this section, we offer guidance for choosing between RTE approaches versus Summative evaluations, give a brief case study to illustrate how RTE is benefiting a Pilot and Market Transformation initiative, and discuss how RTE approaches should be a viable choice in our evaluation repertoire.
Table 1 presents a set of questions and answers that allow program sponsors to compare RTE approaches with Summative evaluations to help them choose the best one for their programs.
Table 1. Comparison of Real-Time Approaches with Traditional Summative Evaluations
A Telling Example: RTE and Embedded Evaluation Looks at “A Movement to Reduce Energy Waste for A Better Future”
The business side of PG&E’s Step Up and Power Down (SUPD) asks businesses in targeted areas to join a multi-faceted movement to reduce energy waste. It is a Pilot initiative but intends to create long-term change in how targeted businesses use energy and, if proven feasible, to expand beyond its current target areas. PG&E has three goals for its work with downtown businesses in San Francisco and San Jose: increase customer awareness of the utility’s current efficiency programs; change the behavior of workers, guests, and facility managers in downtown businesses through targeted interventions; and drive downtown businesses to participate at increased levels in existing energy efficiency programs. During the design phase, evaluators worked closely with strategists and implementers to identify key segments, ensure evaluability of various components, create an evolving logic model with key research questions and indicators, and provide insight to support the development of SUPD behavioral interventions and outreach.
Evaluators conducted pre-launch market research, including a web-based survey of 200 small and medium businesses (SMBs) to establish awareness and participation baselines. To gauge the initiative’s appeal to SMB targets (retailers and food service owners), focus groups tested value propositions, services, website visuals, and intervention ideas. In each case, evaluators presented topline insights and suggestions to initiative decision-makers within one week. The research informed sponsor choices for the materials, marketing, language, and messaging needed to reach SMBs.
Now that the initiative has launched, evaluators are a “continuous improvement” team that meets regularly with third-party implementers and utility evaluators. The evaluation plan for the next six months takes a flexible approach and depends upon pinpointing needs, quick turnaround, and collecting and amalgamating data from multiple sources. Demand for the team’s services has been high, partly due to trust established during the design phase. They are documenting the initiative’s history, data tracking processes, and lessons learned to date, establishing key performance indicators, conducting a market experiment to test message framing in outreach materials, and establishing a baseline for large customers.
Championing Real-Time and Embedded Evaluation – Can We Talk?
Both RTE’s embedded, continuous improvement, and formative approaches and traditional evaluation’s more arms-length summative approach offer benefits to clean energy program sponsors. We believe developing the clean energy programs of tomorrow requires learning, collaboration, adaptation, and all experienced hands on deck to improve programs. RTE capitalizes on all of these principles. Let’s make the greatest use of the market research and intelligence skills, the understanding of Big Data and the Internet of Things, the wide knowledge of programs, the third-party watchful eye, and the objective and savvy feedback approaches that evaluators bring to the table. Let’s find Pilot and Market Transformation programs where the value of having evaluators in the trenches to help make programs better outweighs any risks. Let’s try RTE and see where it takes us.
Linda Dethman is a Vice President and Paul Schwarz a Managing Director at Research Into Action. Courtney Henderson is a Senior Evaluation Advisor at Illume Advising. Jane Peters of Research Into Action and Laura Schauer of Illume Advising also contributed to this article. The article was contributed by AESP’s Market Research, Evaluation and Greenhouse Gas Topic Committee.
[1] MT can occasionally occur more quickly.
[2] The TecMarket Team. California Energy Efficiency Evaluation Protocols: Technical, Methodological, and Reporting Requirements for Evaluation Professionals. State of California Public Utilities Commission, April 2006.