The Towards 50 Energy Efficiency Blog

The End of Building Energy Modeling: Part One

Moving Forward When an Engineering Gold Standard for Building Energy Modeling Falls Apart – Part 1 of 3

Originally published on The Energy Collective: The world's best thinkers on energy & climate

Questioning a widely accepted standard for energy engineering is not to be taken lightly. Yet in 2014 I found myself doing just that. While collaborating on the Department of Energy-sponsored Building Asset Rating (BAR) program, my faith in the gold standard of energy analysis was shattered.

In the process, I came to the conclusion that computer-based energy modeling platforms like eQuest and EnergyPlus are not statistically accurate for quantifying energy use in office buildings. Perhaps more significantly, I realized that the energy models they produce cannot be relied upon when evaluating the economic viability of efficiency upgrades. These findings call into question not only energy modeling but also the technical review standards required for most energy efficiency programs and even the emerging Investor Confidence Project.

This post is the first in a three-part series that outlines the inherent deficiencies within building modeling. I’ll explain why, in hindsight, it’s not surprising that current modeling practices struggle to meet the needs of the energy-engineering world. And finally, I’ll introduce a new approach that promises to deliver the accuracy and reliability needed to advance building energy saving opportunities as a commodity for financial markets.

A BAR Primer

In 2010, the state of Massachusetts issued its Clean Energy and Climate Plan for 2020. The initiatives contained therein amounted to one of the most aggressive efforts to reduce greenhouse gases in the United States. The cornerstone of the plan was a pair of ambitious targets: a 25% reduction from 1990 levels by 2020 and an 80% reduction by 2050.

Recognizing that emissions from buildings make up roughly 39% of all greenhouse gases, improving efficiency within commercial and institutional spaces became an immediate priority.
Massachusetts initiated the Building Asset Rating (BAR) program, a joint effort between Northeast Energy Efficiency Partnerships (NEEP) and the Massachusetts Department of Energy Resources (MA DOER). The BAR program was an attempt to identify and formalize best practices for evaluating and quantifying the energy use in commercial office buildings. A subordinate intent was to investigate opportunities to reduce the cost of comprehensive building energy analysis.

The pace and scale of the effort were notable: the BAR program audited 50 buildings, created models, and issued final reports for each in less than two years.

A brief overview of the Building Asset Rating Program:

  • Two independent teams evaluated fifty office buildings in the greater Boston area.
  • Both engineering teams had access to the same technical materials, drawings and engineering documents for each building (available documents varied based on information provided by building staff).
  • The teams conducted site visits at the same time, with the same building personnel and witnessed the same building spaces.
  • Each team worked independently to analyze the available building specifications, create an energy model of each building, program streamlined DOE-2 software simulations for each facility, and document its findings in ASHRAE-formatted reports. All reports disaggregated the total energy consumption for each building into end uses including lighting, heating/cooling, and other facility-specific equipment end uses such as pump energy.

On a high level, the thinking was to have two respected auditing teams work independently, but in parallel. This enabled side-by-side comparisons of the final reports to identify consistencies and deviations within the results. In a nutshell, alignments would reinforce practices that were working, while irregularities would show which elements might need a different approach.

Differences in Disaggregation Distort Disproportionately

At the conclusion of the BAR program both engineering teams were given access to the two reports produced for each of the 50 buildings. In comparing the two teams’ work, it was immediately clear to me that there was a problem. Namely, there was a high degree of variability between the disaggregation of total energy to the various building systems and end uses specific to each facility.

Patterns emerged in the way that each auditing group tended to identify principal energy uses. That’s problematic because the assessed amount of outdoor air ventilation or power consumed by lighting, for instance, goes on to influence all of the other end uses, like plug loads. The cascading nature of disaggregation means that one end-use estimate impacts all those that follow in the models.

The energy pie must add up to 100%, so the amount of energy attributed to conditioning outdoor air also determines how much energy is then available to be assigned to lighting. Outdoor air affects lighting, lighting affects plug loads, and so on.
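The cascade can be sketched in a few lines. This is a toy illustration with hypothetical numbers (the building total, shares, and end-use order are invented, not taken from the BAR reports); it only shows how a fixed total forces one estimate to reshape the rest.

```python
# Toy illustration of cascading disaggregation (hypothetical numbers).
# The total metered energy is fixed; whatever one end use claims,
# every end use downstream of it loses.
TOTAL_KWH = 1_000_000  # annual whole-building consumption from the utility meter

def disaggregate(outdoor_air_share, lighting_share_of_remainder):
    """Assign energy top-down: outdoor air first, then lighting, then plug loads."""
    outdoor_air = TOTAL_KWH * outdoor_air_share
    remainder = TOTAL_KWH - outdoor_air
    lighting = remainder * lighting_share_of_remainder
    plug_loads = remainder - lighting  # plug loads absorb whatever is left
    return outdoor_air, lighting, plug_loads

# Two auditors who agree on lighting's share of the remaining energy but
# differ on outdoor air still report different lighting and plug-load totals.
team_a = disaggregate(0.25, 0.5)  # (250000, 375000, 375000)
team_b = disaggregate(0.50, 0.5)  # (500000, 250000, 250000)
```

Both breakdowns sum to the same metered total, yet the lighting estimates differ by 125,000 kWh purely because of an upstream judgment about outdoor air.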

In this way, any significant variation in the amount of energy assigned to a primary end use shifts the magnitude of the potential savings associated with upgrading that individual system.

For example, if one team assigned 29% of a building's energy use to lighting and the second team assigned 21%, there are often substantial implications for calculating the potential financial returns on lighting upgrades. The 29% estimate would make lighting improvements appear significantly more financially viable than the 21% figure would.
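To make the financial stakes concrete, here is a simple-payback sketch. Only the 29% and 21% lighting shares come from the example above; the building total, tariff, retrofit cost, and savings fraction are assumed values chosen for illustration.

```python
# Simple-payback comparison for a hypothetical lighting retrofit.
# Only the 29% vs 21% lighting shares come from the audit example;
# all other figures below are assumptions for illustration.
TOTAL_KWH = 2_000_000    # annual whole-building consumption (assumed)
RATE = 0.15              # electricity tariff in $/kWh (assumed)
RETROFIT_COST = 250_000  # installed cost of the lighting upgrade (assumed)
SAVINGS_FRACTION = 0.5   # fraction of lighting energy the upgrade removes (assumed)

def simple_payback_years(lighting_share):
    annual_savings = TOTAL_KWH * lighting_share * SAVINGS_FRACTION * RATE
    return RETROFIT_COST / annual_savings

payback_29 = simple_payback_years(0.29)  # ~5.7 years
payback_21 = simple_payback_years(0.21)  # ~7.9 years
```

Under these assumptions, the same retrofit pays back more than two years faster under one team's disaggregation than under the other's, which is enough to flip an investment decision.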

User Bias: An Underlying Weakness

The BAR program demonstrated that even with two top engineering teams, modeling is heavily subjective. The settings programmed into the models, both by my team and by the other engineering group, quickly appeared to be driven by personal judgment rather than by concrete, identifiable building attributes.

The standing assumption is that energy use is parsed according to standardized, well-vetted engineering best practices. Yet the BAR results indicated that reliance on each user's impression of the building's engineering, along with subtleties of personal preference in configuring the modeling software, is a major factor. (Additional modeling challenges are outlined in Part 2 of this series.)

In closely reviewing the reports from each team, it became clear that both groups had fairly predictable tendencies in how they apportioned energy, meaning that, most likely, neither result was actually accurate.

Real Data: The Only Route to an Accurate Understanding of Building Energy Use

Only after a comprehensive analysis of smart-meter interval data for each building did I feel confident ranking the accuracy of one report over another. Smart interval meters sample building energy consumption every 15 minutes, or even every five minutes. By closely evaluating the power consumption patterns in the meter trend data, it was possible to increase the certainty of specific engineering assumptions that went into individual models.
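One common way interval data constrains end-use assumptions is by separating the always-on base load from schedule-driven load. The sketch below uses synthetic 15-minute readings (the load shape and kW values are invented, not BAR data) to show the basic idea: the overnight minimum approximates 24/7 equipment, and the daytime gap above it bounds lights, plugs, and ventilation.

```python
# Sketch: separating base load from schedule-driven load in 15-minute
# interval data. The day profile below is synthetic; real meter data
# is noisier and needs many days, weather normalization, etc.
import statistics

readings = []                                    # one day of demand in kW
for interval in range(96):                       # 96 x 15 min = 24 hours
    hour = interval / 4
    base = 120.0                                 # always-on equipment
    occupied = 200.0 if 7 <= hour < 19 else 0.0  # lights, plugs, ventilation
    readings.append(base + occupied)

# The overnight minimum approximates the 24/7 base load; average demand
# above it during occupied hours bounds the schedule-driven end uses.
base_load_kw = min(readings)
occupied_avg_kw = statistics.mean(r for r in readings if r > base_load_kw)
schedule_driven_kw = occupied_avg_kw - base_load_kw
```

An estimate grounded this way gives a hard, metered ceiling on how much energy a model may assign to occupancy-driven systems, which is exactly the check the BAR reports lacked.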

As shocking as I found it, creating statistically relevant evaluations of the energy use in large numbers of buildings is not possible with computer-based energy models. That’s not an easy conclusion for someone who has spent most of his career believing in modeling.

The good news is that there’s hope. In Parts 2 and 3 of this series, I’ll dive deeper into the origins of the modeling deficiencies and lay out a potential way forward that depends more on data and less on bias and guesswork.

Read: The End of Building Energy Modeling Part Two: Commercial Building Energy Audits Don’t Work

Read: The End of Building Energy Modeling Part Three: Micro-Interval Data Delivers

Matthew Conway