
Blog Admin

October 3rd, 2011

The Behavioural Insights Team’s report on energy use is a good first step, but there are still concerns about compensating behaviours, experimental design and the quality of evidence.


Estimated reading time: 5 minutes

Paul Dolan reviews the new report from the Cabinet Office’s Behavioural Insights Team on Behaviour Change and Energy Use, and is surprised that it fails to mention the possibility that interventions to change energy use may be offset by behaviour changes elsewhere. He also emphasises the importance of getting the experimental design right in future studies, and has some concerns about the quality of evidence presented.

As an author of the MINDSPACE report and a former member of the Behavioural Insights Team (BIT) in the Cabinet Office, I read the recent BIT report on Behaviour Change and Energy Use. I think it represents a great improvement on the first report on health. But I would like to draw attention to three substantive issues.

The most basic issue relates to compensating behaviours and rebound effects. Remarkably, neither is discussed at all in the report. Yet we know from some (admittedly not entirely robust) evidence that about half of the benefits in energy saving are offset elsewhere. And this is without looking too hard at the other possible offsetting effects, including ‘licensing’ effects. We are gathering evidence across a range of contexts (admittedly, largely from lab studies) that these effects could potentially more than offset all of the gains from intervention. The evidence is flimsy but the issues are important and should be made explicit in the report. Drawing attention to such issues will help in the design of the field studies, which should seek to capture as many spillover effects as possible.
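To make the arithmetic of this point concrete, here is a minimal, purely illustrative sketch in Python. All figures are invented; the 50 per cent offset rate simply echoes the rough estimate cited above, and the final case shows how an offset rate above 100 per cent would wipe out the headline saving entirely.

```python
# Hypothetical illustration only: how a rebound/spillover rate erodes headline savings.
# The 50% figure echoes the rough estimate cited above; every other number is invented.

def net_saving(gross_saving_kwh: float, offset_rate: float) -> float:
    """Return the energy saving left after compensating behaviours.

    gross_saving_kwh: headline saving measured in the intervention domain.
    offset_rate: share of that saving lost to rebound/spillover (0 to 1+).
    """
    return gross_saving_kwh * (1.0 - offset_rate)

if __name__ == "__main__":
    gross = 1000.0  # kWh per household per year (made-up figure)
    for rate in (0.0, 0.5, 1.1):  # no offset, roughly half offset, more-than-full offset
        print(f"offset rate {rate:.0%}: net saving {net_saving(gross, rate):,.0f} kWh")
```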

The second issue relates to experimental design. I am delighted that the BIT are planning some field studies. These could be flagship studies, so they should be designed as robustly as possible. But there are some concerns. In the social networks study, were Kingston and Merton truly randomised? How will the design mitigate against John Henry and Hawthorne effects? And can the study expect to establish any long-term equilibrium effects from a three-month study? Overall, for the studies described, there is little discussion of the time frames of analyses, the sustainability of effects, and the threats to both internal and external validity. The BIT should also make clear how they will randomise and seek to avoid self-selection effects, as these are obviously crucial to how we interpret the results.
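For readers unfamiliar with the terminology, the sketch below illustrates what area-level (cluster) randomisation involves. The area names and the assignment scheme are invented for illustration only and bear no relation to how the Kingston and Merton study was actually allocated.

```python
# Hypothetical sketch of area-level (cluster) randomisation, the sort of design
# question raised above. Area names and the scheme are invented, not the BIT's method.

import random

def randomise_clusters(areas: list[str], seed: int = 42) -> dict[str, str]:
    """Randomly assign whole areas (clusters) to treatment or control."""
    rng = random.Random(seed)
    shuffled = areas[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {area: ("treatment" if i < half else "control")
            for i, area in enumerate(shuffled)}

if __name__ == "__main__":
    print(randomise_clusters(["Area A", "Area B", "Area C", "Area D"]))
```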

The third issue relates to the quality of evidence. In the collective rewards study, what is the rationale for ignoring robust hedonic price regression analysis in favour of surveys that lack any degree of external (and probably internal) validity? Similarly, for Energy Performance Certificates (EPCs), why care about what people say when house price data tell you what people do in relation to the EPC? (I suspect that the 18% claiming that EPCs influence their house-buying decisions is an exaggeration of the real effect.)
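As a purely illustrative sketch of what a hedonic price regression does (not the analyses referred to above), one might regress log sale prices on an EPC band plus standard property controls, so that the EPC coefficient reflects what buyers actually pay rather than what they say. The data, variable names and coefficients below are entirely invented.

```python
# Hypothetical sketch of a hedonic price regression on synthetic data:
# regress log price on an EPC band plus standard property controls.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "floor_area": rng.normal(90, 20, n),   # square metres (invented)
    "bedrooms": rng.integers(1, 5, n),
    "epc_band": rng.integers(1, 8, n),     # 1 = G ... 7 = A (toy coding)
})
# Synthetic prices: a small true EPC effect plus noise.
df["log_price"] = (11.5
                   + 0.005 * df["floor_area"]
                   + 0.05 * df["bedrooms"]
                   + 0.01 * df["epc_band"]
                   + rng.normal(0, 0.1, n))

model = smf.ols("log_price ~ epc_band + floor_area + bedrooms", data=df).fit()
# Approximate proportional price premium per one-band EPC improvement.
print(model.params["epc_band"])
```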



Posted In: Behavioural Public Policy


This work by British Politics and Policy at LSE is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported licence.