You are sitting on an airplane, enjoying the end of a flight back home. The flight lands smoothly, the pilot and crew thank you for choosing airline X – persuasively adding that they look forward to seeing you again on board – the doors open and you can happily disembark. This happens on thousands of flights daily: how is it possible?

Each flight is subject to a large variety of potentially risky manoeuvres and operations, ranging from taking off to landing and opening the aircraft’s doors. What makes all these daily operations safe? To find out, I looked at safety practices in a large airline, interacting with personnel from the safety department and observing some of their tools at work. The research focus was not, as usually happens in studies of risk management, on crises and incidents but on the systems and processes that are ‘green’.

The argument is simple. The study of the extraordinarily rare incident is likely to end up with conventional explanations for failure, such as ignored warnings, information deficits and misdeeds in the wake of disasters. The study of what makes routine operations safe can instead be intellectually stimulating, not least because it allows us to learn about the functioning of complex operations. It also offers practical lessons. Not surprisingly, following the financial crisis and large-scale corporate failures and scandals, there seems to be increasing interest among institutions in the financial sector in learning from safety practices in aviation. An indicative example is a recent document published by a professional association of internal auditors as part of a market-led initiative to gather insight into corporate culture and the role of boards in measuring and assessing it.

The initial interactions with personnel from the safety department made clear the importance of something called ‘just culture’. This reflects an ‘atmosphere of trust’ in which people are incentivised to report safety information, but in which they are also clear about where the line must be drawn between acceptable and unacceptable behaviour. In other words, if you report something, it should be clear what happens next. A genuine mistake is ok. A wilful violation is not.

Is ‘just culture’ all that matters? Not quite. The main lesson from the case is that the widely touted motif of, and aspiration for, a ‘just culture’ works only if it is hardwired and operationally expressed in processes, monitoring systems and technologies.

There are two facets to the way in which just culture is hardwired into processes and monitoring systems. On the one hand, they support risk identification and analysis. Reporting should be easy. The analysis of what is reported should also be easy. For example, in the airline there are no thresholds for what needs to be reported (simply: everything!); it is possible to report via smartphone apps with access to the internal reporting system; the system supports cross-functional interactions as an investigation progresses; and the system creates an audit trail of everything that is done until feedback is provided to the reporter.

On the other hand, processes and monitoring technologies are also a source of discipline, making it almost impossible for people not to report. Various monitoring systems recreate a ‘room with no corners’. Possible issues, such as taking a wrong turn while flying the plane, are picked up anyway. So you had better report any problems before you are made to report them, or are found out not to have reported them. As one safety manager at the airline put it, this ‘helps’ people to be honest as there are ‘no corners to hide’.

Somebody sceptical, perhaps out of frustration with previous attempts to work with (risk) culture, might say: reporting and analysing risk is one thing, but making it meaningful and changing behaviour is another. In addition to notions of just culture and technology, you need to make people realise something is wrong and grab their attention.

What would ‘grab’ their attention? In the case of the airline, safety personnel focused more on building trust and leveraging a certain sense of humour than on formal policies and prohibitions.

One example made this focus clear. Flight monitoring technologies may show landings outside the normal parameters. This is not always bad. For instance, in some cases a long landing may reduce transfer time to the terminal buildings. So there is no immediate risk, but there is a latent risk that people get used to such behaviour. And a long landing is not a good idea in specific contexts, such as a short runway. So monitoring technologies flag a potential issue, and people may also have reported it. In line with a notion of just culture, such reporting may lead to actions to change behaviour and correct what can be seen as a genuine mistake, i.e. something done because it can bring some benefits. What can be done? You can define a formal policy saying that long landings are bad. Or, as in the case of the airline, you can take satellite photographs of the runway, overlay the aircraft’s actual touchdown point on them, send this directly to the crew, and ask: ‘How do you feel about it?’

Hopefully that would make pilots and crew members feel a bit more uncomfortable. And us – as passengers – a bit more comfortable when catching the next flight home.



Tommaso Palermo is Lecturer in Accounting at the London School of Economics and Political Science. His main research interests include the design and use of risk and performance management systems, risk culture in financial sector organizations, and risk reporting and analysis in the aviation sector. Tommaso is also involved in a project that examines the creation of markets for contested commodities, such as cannabis.