Energy performance gap

The energy performance gap has many causes, but a study led by Prof David Coley of the University of Bath’s Centre for Energy & the Design of Environments suggests the “literacy” of building modelling professionals should be addressed.

Many new buildings use twice as much energy as predicted, and have double the predicted carbon emissions. This energy performance gap, with its serious financial and climate change implications, has been blamed on everything from builders not installing insulation correctly to occupants leaving lights on. However, my new study suggests another factor is at play: the “literacy” of building modelling professionals.

Many policies and actions are being implemented by governments with the aim of reducing greenhouse gas emissions. In developed countries, buildings commonly account for up to 40% of such emissions, making them a clear focus.

Unfortunately, there is a well-documented gap between the energy use predicted by the models used to aid a building’s design, or to ensure compliance with national building codes, and the monitored energy consumption once the building is in use. Many researchers claim that the measured energy consumption is frequently twice or more that of the design stage prediction.

Although many studies have explored the energy performance gap from various perspectives, such as the role of poor workmanship or occupants’ behaviour, the literacy of building designers and energy modellers in particular is rarely questioned.

In addition, the literature indicates that, in general, professionals (architects, engineers, sustainability experts etc.) do not tend to criticise themselves and thus a culturally embedded lack of reflection might contribute to the energy performance gap.

Modelling professionals are limited in the time they can apportion to any project and hence need accurate in-built knowledge of the impact of modelling any element of the building in less than ideal detail; for example, the impact of omitting a thermal bridge. These judgment calls might rest partly on experience, but they are also likely to be embedded within an organisation, or simply commonly accepted within the modelling community. Professionals in general are known to be open to change if evidence is presented, and our new work attempts to provide this evidence in a robust way by asking the question: how accurate, in general, are such professionals’ judgments?

To judge this accuracy, we use the concept of literacy. This might seem a strange notion to apply here, but literacy is tied up with meaning, and in particular with whether two people attribute the same meaning to something. Hence a suitable requirement for literacy within a group of building designers is that, when given a project, they should:

1. Approximately agree on the important parameters (U-values, shading from neighbouring buildings etc.) that need to be included in the model.

2. Approximately agree on the rank order of the importance of these parameters.

3. Approximately agree with a thermal (computer) model on the rank order of the impact of given changes (perturbations) to the values of these parameters. These perturbations might arise from small design changes or from a lack of information during the early stages; for example, the exact size of the windows or the hours of use the building might have.
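Requirement 3 can be checked with standard rank statistics. The sketch below (hypothetical data; the paper’s actual parameters and figures are not reproduced here) uses Spearman’s rank correlation to compare one respondent’s ranking of parameter impacts against the ranking produced by a calibrated thermal model:

```python
# Illustrative sketch (hypothetical data): checking requirement 3 by
# comparing a professional's rank ordering of parameter impacts with
# the rank ordering produced by a thermal model, via Spearman's rho.

def spearman_rho(ranks_a, ranks_b):
    """Spearman rank correlation for two rankings without ties.
    rho = 1 means perfect agreement; rho = -1 means exact reversal."""
    n = len(ranks_a)
    d_sq = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d_sq / (n * (n ** 2 - 1))

# Hypothetical rankings of five parameters (1 = most impactful),
# e.g. [air tightness, U-values, window size, hours of use, shading]:
model_ranks        = [1, 2, 3, 4, 5]   # from the calibrated thermal model
professional_ranks = [2, 1, 3, 5, 4]   # one respondent's gut feeling

print(round(spearman_rho(model_ranks, professional_ranks), 2))  # prints 0.8
```

A rho near 1 would indicate the respondent’s intuition tracks the model well; values near zero would indicate no relationship, which is what the study found in aggregate.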

To test this, we surveyed 108 professionals in the construction industry – architects, engineers and energy analysts – all of whom made regular use of dynamic thermal models. We presented them with a real building, then compared how consistent their responses were as a group (i.e. would you get the same advice from different people?), and whether their advice agreed with the computer model, which had been calibrated against monitored data from the building and could therefore be considered accurate.
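Group consistency of this kind is conventionally measured with Kendall’s coefficient of concordance, W. The sketch below (hypothetical respondents and parameters, not the study’s data) shows the idea: W near 1 means everyone ranks the parameters the same way, while W near 0 means different people would give you different advice:

```python
# Illustrative sketch (hypothetical data): measuring how consistent a
# group of respondents is, using Kendall's coefficient of concordance W.

def kendalls_w(rankings):
    """rankings: list of m rank lists (no ties) over the same n items.
    Returns W in [0, 1]; 1 = identical rankings across all raters."""
    m, n = len(rankings), len(rankings[0])
    # Total rank each item receives across all raters.
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean_total = sum(totals) / n
    # Sum of squared deviations of the rank totals from their mean.
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three hypothetical respondents ranking four parameters:
agree    = [[1, 2, 3, 4], [1, 2, 3, 4], [2, 1, 3, 4]]   # near-consensus
disagree = [[1, 2, 3, 4], [4, 3, 2, 1], [2, 4, 1, 3]]   # little consensus

print(round(kendalls_w(agree), 2), round(kendalls_w(disagree), 2))
```

The study’s finding of “no consistent feeling” among professionals corresponds to a low W across the surveyed group.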

In essence, we were asking how consistent and accurate the designers were during the early design stages of a project, i.e. the moment when most of the important energy-related decisions are made.

The results were surprising. There was no consistent feeling about which parameters (which ranged from air tightness to whether shading from curtains is included) were important, and no correlation between the professionals’ gut feeling and the output of the model. Interestingly, the most experienced, including consultant engineers, performed worst and gave the most divergent answers.

The most successful group were those with very recent relevant master’s degrees. It is likely that many of these participants, unlike those who graduated earlier, sat master’s programmes containing a large thermal modelling component. It therefore seems reasonable to conclude that this provision should be expanded. However, it is clear that even this subpopulation scored badly, and hence those (like me!) teaching such courses need to face some stark realities and improve their provision.

Another possibility is that the culture within engineering consultancy undermines some of the cautionary messages received by engineers during their education. Because thermal modellers rarely compare their results with the performance of the finished building, there is little feedback or learning, and their personal performance might drift over time. This would give, as observed, a diversity of views about the importance of the various driving parameters.

The study, The Building Performance Gap: Are Modellers Literate?, is published in the Building Services Engineering Research & Technology journal and can be accessed at:


Prof David Coley

Professor of Low Carbon Design and Head of the Centre for Energy and the Design of Environments

University of Bath

Tel: +44 (0)1225 385531

Twitter @UniofBath