Wednesday 14 November 2012

Risks in the nuclear power industry



Risk management in the nuclear power industry is marked by a mechanistic view of people and organisations, which impairs its ability to learn from experience and to predict and prevent future risks. So argues safety researcher Johan M. Sanne in a newly published research article.
Forsmark, 25 July 2006. A faulty connection in the switchyard outside the nuclear power plant causes an electrical flashover that knocks out the external power supply and two of the facility’s four internal backup power systems. In the control room, large parts of the instrument panel go dark. Without power, the cooling system stops working and a meltdown looms.
The control room operators, however, quickly understand what has happened and act correctly: they connect the facility to the regional electricity network.
In the prevailing thinking on safety, people – the human factor – are seen as a risk, while machines are regarded as reliable. In the Forsmark case it was the exact opposite: the human factor saved the situation from developing into a catastrophe, and the design of the electrical system was faulty. Despite this, the operators’ efforts were never analysed in Forsmark’s own investigation after the incident.
“Here we had a golden opportunity to learn from good experience, but it wasn’t taken,” says Johan M. Sanne, researcher at the Department of Thematic Studies - Technology and Social Change, who specialises in studying safety work in complex technical systems. He has analysed the Forsmark investigation and interviewed key people within the nuclear power industry, and he finds that an overly narrow definition of risk and risk management makes it difficult to predict what could happen.
Two concepts are fundamental to safety work: risk objects and an improvement script. The most important risk objects in the nuclear power industry are the human factor and the safety culture. The improvement script consists of the measures tied to these risk objects, such as checklists built on best practice. These lists tend to grow more and more comprehensive with every incident, Sanne says.
Safety culture is treated as a question of attitudes and morals: when it is poor, the blame is placed on bad attitudes that prioritise production over safety.
“It becomes a moralistic, condemnatory approach that does not help us understand or improve anything. Regardless of how good an attitude you have, you can end up in an acute situation where you don’t have the resources you need; instead you’re forced to make impossible compromises.”
What happened at Forsmark in July 2006 was due to several design flaws, and even breaches of the rules. For example, the facility’s four emergency power systems were mutually dependent, which the rules do not permit. The triggering factor, however, was a person: an electrician who, owing to inaccurate instructions, made a faulty connection outside the nuclear power facility itself. The experts had not foreseen the consequences.
Sanne frames his argument in terms of “single-loop” and “double-loop” learning. Single-loop learning means looking at the same things, based on the same understanding, and learning in the same way as before; this characterises most organisational learning. Double-loop learning means reconsidering basic conceptions, for example about what counts as a risk.
Today, risk analysis consists of identifying problems we already recognise and applying the same solutions as before. The questions determine the answers. Imagining the unknown is difficult; it requires going outside known frameworks. Sanne describes this as a mechanistic viewpoint with a linear view of cause and effect, in which all mistakes can be avoided through good engineering design and control.
But instead of trying to design the human factor out of the system, we should take advantage of it, he continues.
“The Forsmark operators did the right thing. But their efforts weren’t investigated. We could have a lot to learn there about how people act in complex situations.”
This is because, he argues, no matter how clear the rules are, different people interpret them differently in different situations. Checklists are not enough; handling the unknown requires experience.
His conclusion is that organisational learning in our nuclear power stations is too important to be left to the engineers. The experts in the nuclear power industry and at the supervisory authorities are close to each other and think in far too similar a manner. New perspectives are needed, and he proposes bringing in social scientists, anthropologists, and organisational researchers.
A year after the incident, the problem at Forsmark was fixed and confidence in nuclear power restored. Case closed, Sanne observes, pointing out how reports and investigations tend to normalise the abnormal: “It went OK, we made it.” At the same time, his interview informants showed genuine surprise and concern over what had happened and the vulnerability it revealed. The risks of nuclear power can never be engineered away entirely. But with a broader perspective, expanded frames of reference and more imagination, perhaps they can still be reduced – that is Sanne’s message.
The article, titled “Learning from adverse events in the nuclear power industry: Organizational learning, policy making and normalization”, has been published in Technology in Society.
