For those unfamiliar with the term, Russian roulette refers to loading a revolver with a single round, spinning the chambers so that the position of the live round is unknown, and then placing the muzzle against your temple and pulling the trigger. With just six chambers, the classic version of the ‘game’ is extremely risky and, understandably, you would likely refuse to play. But what if there were a thousand chambers and you pulled the trigger just once a year?* Ready to play? Perhaps you already are – with your process plant.
The process-plant version of Russian roulette was recognised by Diane Vaughan who, in her study of the 1986 Challenger space shuttle disaster1, coined the phrase “normalisation of deviance”. The erosion of the o-rings that led to the disaster was a known occurrence, but NASA was lulled into a false sense of security because the observed erosion had never caused an actual failure.
Familiarity, if not actually breeding contempt, had led to a discounting of the risk; the partial-failure “deviance” became accepted as routine, and so it became “normalised”. We must counter this normalisation of deviance by maintaining a sense of vulnerability; near-miss recognition is key. Each near miss is a ‘click’ as the hammer falls on an empty chamber. And, of course, just because your plant has never suffered a major accident in 30 years’ operation does not mean that all the chambers are empty! If an occurrence rate is very low it will likely lie outside our experience, and our intuitive feel for the risk is likely to be correspondingly poor.
When I undertake quantitative risk assessment (QRA) work, I often have Richard Feynman metaphorically whispering in my ear: “If a guy tells me the probability of failure is 1 in 10^5, I know he’s full of crap!”2 (Investigating the Space Shuttle Challenger Disaster). I regard this as healthy; all those engaged with QRA should have a metaphorical Richard Feynman sitting on their shoulders. Typically, such extremely low probabilities or frequencies are useful as indications of relative rather than absolute risk.
Cultivating appropriate management practice
There is much talk and study in relation to safety culture, and the issues are well understood. The more challenging aspect is cultivating appropriate management practice. A variety of models is presented, all of which can be said to offer some value, but many suffer from over-elaboration through attempts to introduce ever greater rigour, often pursued for its own sake regardless of whether it provides net value. Rigour does not sit alongside motherhood and apple pie as a universal good. If the provisions are not broadly intuitive, their take-up is likely to be limited or distorted, and the wider safety management may be degraded by a loss of credibility. The moment a management provision is perceived as ‘going through the motions’, its value is diminished.
One cultural obstacle may be the designation of a dedicated safety department; for “department” we might read “silo”. Certainly your safety resource should facilitate and provide support, but it should not have exclusive ownership of safety; safety should be ‘owned’ by the business management and the operations and maintenance functions. Whatever your role, if you discover what you believe to be a significant failing or misunderstanding that may impact on safety, you are, as a matter of personal professional integrity, obliged to bring this to the attention of the appropriate managers and make sure that it is addressed responsibly and not ‘swept under the carpet’.
This is potentially fraught territory; we must be prepared to take an unpopular stance. A process engineer of my acquaintance provides an interesting example: the plant was on course to break its annual production record, with plaudits all round in prospect, but following a plant modification he recognised that the plant was behaving differently and was concerned that the modification, whilst successful from a production perspective, had introduced an increased risk potential. He refused to sanction continued operation until enhanced protection was commissioned – the delay caused the plant to miss the record (but no one died). This takes courage.
The manner in which you raise issues will be important. An intemperate rant or attempt to ridicule transgressors is unlikely to help sell the message. A sympathetic but sincerely concerned intervention is more likely to get results. If we make ourselves objectionable then we risk being ostracised and losing any ability to influence things for the better. Intervention should not be limited to questions of specific equipment failures and operating and maintenance practices, but should also extend to the defence of wider safety management and risk assessment practices.
We should acknowledge, however, that this can be overdone; unwarranted conservatism (the professional engineer’s equivalent of ‘crying wolf’) may undermine our credibility and may lead to the spurious allocation of resources where they will achieve very little. If there are the equivalent of 100,000 chambers, we cannot reasonably object to the trigger being pulled every ten years.**
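The roulette arithmetic behind these two thresholds is worth setting down explicitly. The sketch below is illustrative only (the function names are my own, not drawn from any standard): it converts each chamber-count scenario into an annual individual risk and shows why a 30-year accident-free record is weak evidence that the chambers are empty.

```python
# Roulette arithmetic: annual risk per scenario, and the cumulative
# chance of at least one 'bang' over a period of operation.

def annual_risk(chambers: int, pulls_per_year: float) -> float:
    """Probability per year that the live round fires."""
    return pulls_per_year / chambers

def cumulative_risk(p_annual: float, years: int) -> float:
    """Chance of at least one firing in `years` of operation."""
    return 1.0 - (1.0 - p_annual) ** years

# 1,000 chambers, one pull a year: 1e-3/yr --
# the HSE "intolerable" boundary for individual risk of fatality.
intolerable = annual_risk(1_000, 1.0)       # 0.001

# 100,000 chambers, one pull every ten years: ~1e-6/yr --
# the level HSE regards as "broadly acceptable".
acceptable = annual_risk(100_000, 0.1)      # ~1e-6

# Even at the intolerable boundary, 30 accident-free years are
# unremarkable: the odds of a 'bang' in that time are only ~3%.
survived_30 = cumulative_risk(intolerable, 30)
```

In other words, operating for 30 years at the intolerable boundary still gives roughly a 97% chance of no accident at all, which is exactly why an unblemished record is such a poor guide to the true risk.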
Blow the whistle – if you need to
In design and risk assessment there may well be full awareness of hazard consequences, but in day-to-day operation and maintenance the prevailing sense may become one not of vulnerability but of remote likelihood. Yet that likelihood may be growing as barriers are progressively degraded through ageing, obsolescence, replacement and change (of both plant and people), and the holes in the Swiss cheese drift into ever closer alignment.
Although not explicitly identified in the usual models, the integrity of individual engineers is a critical requirement in the establishment and maintenance of the defences against major accident hazards. If this is not in place then all bets are off. We see increasing promotion of accreditation and certification of specific skill sets, but when we examine incident case histories, we typically find not want of knowledge or skills, but rather of diligence. This is a core component of professionalism. If all professional engineers embrace their responsibilities in this regard they can provide mutual support to help drive the culture in the appropriate direction.
If we encounter individuals who, whether unwittingly or not, are propagating errors in process safety management, we are under a professional obligation to intervene; we might think of it as ‘normalisation of deviants’! If the transgressions are systemic and wilful, and clearly hazardous, then there is a duty to blow the whistle long and hard. Silence will be construed as acquiescence, and any professional who has acquiesced in a process safety failure cannot claim to have ‘just been following orders’.
If you lay claim to the title ‘professional engineer’ you cannot turn a blind eye or walk by on the other side. ‘Not my responsibility’ will not wash. If you have reason to suspect that the trigger is being pulled more often than anticipated, or that more chambers are loaded than expected, you have a professional obligation to ensure it is investigated, even if you are not the one pulling the trigger.
* The UK’s Health and Safety Executive (HSE) identifies this as the “intolerable” boundary for individual risk of fatality.
** HSE identifies this level of risk as “broadly acceptable”.