The End of Building Energy Modeling Part 2: Why Best Practices Don't Work
- Jul 7, 2018 10:28 pm GMT
Every building is a complex system. Seemingly inconsequential and difficult-to-identify building attributes can have an outsized impact on energy consumption, skewing our understanding of the overall building performance. So why don’t industry-standard best practices for building energy modeling account for those kinds of nuances?
After decades of conducting audits, I began to ask myself this question in 2014. At the time I was managing one of two teams participating in the Department of Energy-sponsored Building Asset Rating program pilot in Massachusetts. My team’s scope of work involved conducting energy audits for 50 buildings, programming streamlined DOE-2 software simulations for each of the facilities and documenting findings in ASHRAE-formatted reports. The other team of engineers mirrored our efforts so that two sets of final building energy reports could be compared, yielding an understanding of which elements of asset-driven audits were working and which were not.
It turned out that there was a lot more of the latter than the former. I realized that building energy modeling just wasn’t working, and once the veil was down it was immediately apparent why.
1. Auditing Contracts Don’t Cover the Hours Needed to Evaluate Every Attribute
In one building, after an extensive search and a lot of head scratching, we found undersized ductwork (shrunken to improve the aesthetics of an executive floor) that had a dramatic effect on the efficiency of the building-wide cooling system. That’s just the kind of “needle in a haystack” detail that energy engineers usually have neither the time nor the budget to find.
Auditing contracts often do not cover more than the bare minimum of hours needed to evaluate major systems. This leaves a cloud around minor systems, never mind how all the equipment works in concert. And as I explained in Part One of this series, getting one piece of the energy disaggregation pie wrong skews all of the others.
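The arithmetic behind that skew is worth making concrete. The sketch below is a hypothetical illustration (the building, end-use names, and kWh figures are invented, not from the BAR program): because a metered annual total must be fully disaggregated, whatever the named end-uses don't explain gets assigned to a residual, so inflating one estimate silently distorts every other share.

```python
# Hypothetical illustration: a metered annual total must be fully
# disaggregated, so an error in one end-use estimate propagates
# into every other slice of the pie.

METERED_TOTAL_KWH = 1_000_000  # annual utility meter reading

def disaggregate(lighting_kwh, hvac_kwh, plug_kwh):
    """Assign the unexplained remainder to 'other' and return shares."""
    other_kwh = METERED_TOTAL_KWH - lighting_kwh - hvac_kwh - plug_kwh
    loads = {"lighting": lighting_kwh, "hvac": hvac_kwh,
             "plug": plug_kwh, "other": other_kwh}
    return {name: kwh / METERED_TOTAL_KWH for name, kwh in loads.items()}

# An accurate lighting estimate vs. one inflated by 150,000 kWh:
good = disaggregate(lighting_kwh=250_000, hvac_kwh=400_000, plug_kwh=200_000)
bad  = disaggregate(lighting_kwh=400_000, hvac_kwh=400_000, plug_kwh=200_000)

# The lighting error alone drives the "other" share from 15% to 0%,
# misrepresenting end-uses that were never mis-measured.
print(good["other"], bad["other"])  # 0.15 vs. 0.0
```

The shares always sum to 100% of the meter, so the report still looks internally consistent; the error is invisible without an independent check on each end-use.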
2. Industry Hierarchies Get in the Way of Precise Modeling Results
One of the reasons why important details can go unaccounted for is simple: energy-auditing tours are typically led by junior building staff. In another building in Boston, both BAR teams found that the energy picture just wasn’t coming together. It turned out that a hidden data center, unknown to the new facilities technician who directed the tour, necessitated around-the-clock HVAC operation.
The facilities staff was initially insistent that there was no 24/7 cooling. That dynamic is further complicated by the fact that, in most cases, energy modeling work is completed by junior engineers. These engineers-in-training normally do not have a substantial amount of experience with real-world equipment operation. As a result, they almost never have the slowly honed, layered understanding of how multiple systems work together that completing 100 or more building projects imparts.
3. Even Experienced Engineers Assess Visually Similar Buildings Similarly
The other result that jumped right off the page on BAR building reports was that energy modeling engineers have a tendency to want to use the same parameters when programming modeling software for visually similar buildings. We learned this in part because the BAR program called for the evaluation of a number of multi-building campuses. At sites where buildings were constructed at the same time utilizing similar construction materials and serviced by a shared facilities management team, the consumption pie charts both teams generated tended to look very similar.
However, a close examination of the utility data showed that the principal energy driver for each building on the campus could be radically different. At one three-building campus, the HVAC equipment and air ducts for a single building were installed by a different contractor than at the other two facilities. Visual site inspections revealed no differences. However, upon dissecting the utility meter data it was clear that the one building had noticeably different use profiles.
Similarly, on another campus the auditing teams could not visit a large space in one building because the tenant had an unannounced visit from the home office. My team ultimately decided to revisit the building because it was clear from the interval data that something in the building was different. It turned out that, unbeknownst even to the building maintenance staff, the spaces that had not been available for inspection had recently undergone an extensive lighting upgrade.
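The kind of interval-data screening described above can be sketched in a few lines. This is a simplified, hypothetical example (the building names and demand figures are invented, and real analysis would use full 15-minute interval data, not six samples): each building's load shape is normalized and compared to the campus average, and the building that deviates most is flagged for a second site visit.

```python
# Hypothetical sketch: visually identical campus buildings screened
# with interval meter data alone. Each building's demand profile is
# normalized to a load *shape*, then compared to the campus mean;
# the largest deviation flags a building worth revisiting.

campus_kw = {  # illustrative average weekday demand, 6 samples/day
    "bldg_a": [40, 80, 120, 120, 90, 45],
    "bldg_b": [42, 78, 118, 122, 88, 47],
    "bldg_c": [90, 95, 100, 100, 96, 92],  # flat profile: hidden 24/7 load?
}

def normalize(profile):
    """Scale a demand profile so its values sum to 1 (pure shape)."""
    total = sum(profile)
    return [kw / total for kw in profile]

shapes = {bldg: normalize(p) for bldg, p in campus_kw.items()}
n_samples = len(next(iter(shapes.values())))
mean_shape = [sum(s[i] for s in shapes.values()) / len(shapes)
              for i in range(n_samples)]

# Total absolute deviation of each building from the campus mean shape
deviation = {bldg: sum(abs(s[i] - mean_shape[i]) for i in range(n_samples))
             for bldg, s in shapes.items()}
outlier = max(deviation, key=deviation.get)
print(outlier)  # the flat-profile building stands out
```

Note that the flat-profile outlier surfaces from meter data alone, with no site access required, which is exactly why the interval data prompted the revisit.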
4. Human Error Extends From Auditing Into Computer-Based Modeling
The same capabilities that allow computer-modeling software like eQuest and EnergyPlus to evaluate a staggering list of different building attributes actually compound the likelihood that bias and subjectivity will impact results. Each input and “dial” that must be set in modeling software programs presents an array of opportunities for the insertion of opinions and assumptions into what is supposed to be an objective, quantitative process.
In my over twenty-year career in efficiency, one consistent source of humor is how creative inexperienced engineers can be in experimenting with energy modeling “dial settings” in order to get results to align with actual utility energy use data. In the end, the framework established to ensure the success of the BAR program ultimately documented why the building energy modeling approach does not work.
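The dial-twisting problem has a simple structural cause: many different parameter combinations can reproduce the same annual total. The toy model below is not DOE-2, eQuest, or EnergyPlus (the function, floor area, and dial values are all invented for illustration), but it shows why a calibrated match against utility bills is weak evidence that the model is right.

```python
# Toy illustration (not a real simulation engine): two very different
# sets of "dial settings" reproduce the same annual utility total,
# so calibration to the meter alone cannot distinguish them.

FLOOR_AREA_SF = 100_000

def annual_kwh(lighting_wpsf, lighting_hours, plug_wpsf, plug_hours):
    """Annual kWh from two end-use dials over a fixed floor area."""
    watt_hours_per_sf = lighting_wpsf * lighting_hours + plug_wpsf * plug_hours
    return FLOOR_AREA_SF * watt_hours_per_sf / 1000.0

# Lighting-heavy assumptions vs. plug-load-heavy assumptions:
model_a = annual_kwh(lighting_wpsf=1.5, lighting_hours=4000,
                     plug_wpsf=0.5, plug_hours=2000)
model_b = annual_kwh(lighting_wpsf=0.5, lighting_hours=4000,
                     plug_wpsf=1.25, plug_hours=4000)

# Both "calibrate" perfectly to the same metered total, yet they
# would predict very different savings for a lighting retrofit.
print(model_a, model_b)  # 700000.0 700000.0
```

With dozens of real dials instead of four, the space of parameter sets that match the bills is enormous, which is how creative tuning can masquerade as accuracy.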
It became impossible to overlook how systemic flaws in the building auditing industry and human error influence reports when they are documented over and over again across 50 successive reports. Like many industry veterans, I have frequently observed these challenges. BAR forced me to fully come to grips with them.
Pushing the Envelope Towards Accuracy and Precision
Next week, in Part Three of this series, I’ll summarize a series of (ultimately failed) attempts to develop best practices and procedures to avoid the observed problems. I’ll also recount the moment of enlightenment that allowed me to break my twenty-year allegiance to energy modeling. Finally, Part Three details a new approach that my group at Quant Efficiency is using. It promises to establish the level of precision and certainty needed to engage the power of financial markets to scale upgrade projects to a level that will meet sustainability goals and have a real impact on climate change.
Part One – Published the week of (2/26/18) – Moving forward when an engineering gold standard is found to fail.
Part Three – Published the week of (3/12/18) – Micro Interval Analysis – Promise for Quantifying Efficiency Opportunities in Commercial Buildings
Photo Credit: Ian Muttoo via Flickr