Critical Thinking


"WHAT'S 12 times 7?" Chances are you don't need to work too hard to answer that one. It was drilled into us as children, tables of multiplication, division, adding. It's likely that you got the answer quicker than you could think how you would get the answer. It's just there, available, a fact that you know without thinking. 

    How about "what's 127 times 17?" That's harder. And if I were to ask you that sum while we were walking along the street, chances are you would actually stop while your brain got working on the problem. You'd possibly get to an answer, and if you can do it quickly, it might be worth you having a go on Countdown. (The answer is 2,159, but I'd have to write it down to get there.)
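    If you did write it down, the working is nothing special, just the long multiplication we were taught at school, laid out something like:

\[
127 \times 17 = (127 \times 10) + (127 \times 7) = 1270 + 889 = 2159
\]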

    The examples above are easy illustrations of the different ways we tackle problems, and of what is referred to in Daniel Kahneman's Thinking, Fast and Slow1 as "system 1" and "system 2" thinking. In system 1, we rely on rules of thumb, experience, and our drilled knowledge to get an answer quickly. It's low energy, it's efficient, and it's right, say, nine times out of ten. In system 2, we have to think, to go back to fundamentals, to work out an answer. It's no accident that that phrase includes the word "work", as it's not easy, and we will feel tired if we do it repeatedly without practice.

     System 1 is where we spend a lot of our time, and unless you think about it, you can quite happily carry on, responding to stimuli, giving the same responses you always have, and blithely going about your life. It's how we manage to drive complex machines like cars: we start out thinking about every move and every action, bunny-hopping the car around while we try to master clutch, accelerator, gears and moving forward, but with practice it becomes natural, easy. I was lucky enough to have a long training session in driving on ice, where I spent an hour getting my car out of spins, over and over again. In real life, I've been able to correct a skid safely because my body reacted faster than I could have thought it through.

    However, while we often rely on "rules of thumb" or heuristics in our engineering lives, we have to be careful, because relying on this approach brings a risk, the risk of introducing biases. These are many and varied, but here are some that are pertinent to our engineering work. 

action biases 

A number of these come down to our over-confidence when things seem to be going right. People will often underestimate how long it takes to complete tasks, and therefore don't add contingency to deal with problems when they crop up. Under pressure, ‘band-aid’ solutions seem attractive, but they end up treating only the symptoms, not the underlying problems.

    Although, as process engineers, we often pride ourselves on being "expert generalists", it is easy to forget that this is not a universal skill. Because we find it easy to solve problems in our own area of expertise, we can focus on that and assume that everything else is easy, without discussing issues in sufficient detail, especially the ones we don't understand as well.

perceiving and judging alternatives 

 It is surprisingly difficult to recognise we are wrong once we've made a decision. We're good at spotting patterns and coming up with explanations and theories as to what is going on. What is much harder is recognising when we have misunderstood something.

    For example, during early-stage production on a North Sea FPSO, the operators were struggling with a marine issue. The system was designed so that production would flow evenly to port and starboard tanks, and the vessel would therefore stay level. However, flow was preferentially going to one side, and as a result the operators were switching from one tank to the other by opening and closing the diverter valves on the control system. Soon after one of those changes, a high-pressure alarm came up on the cargo tank pumps. But since they knew they were having problems with some of the instruments, and the valve showed open on the system, they overrode the trip and started up again. A high liquid level trip came up, so they overrode that as well.

    The flow backed up into the flare drum and into the glycol system, shut down the compression, and blew the liquid from the flare drum out of the flare and onto the sea below. Luckily, the tanker that was coming in to pick up the first cargo had not yet arrived, so there were no injuries. However, once the dust settled, the operating crew realised that they had misdiagnosed the cause. They had all the classic symptoms of a blocked outlet, but their control console was showing an open outlet, so they diagnosed failed instruments rather than a closed valve.

     A more severe version of this problem caused the partial meltdown at the Three Mile Island nuclear plant, as the operators worked to solve a problem which was the opposite of the one they actually had. And there are many more similar mistakes in accident reports, both minor and major.

     As a situation develops, we need to keep asking whether we are correct, and to evaluate different alternatives, rather than just following the first explanation and disregarding evidence that we are wrong.

    On a similar theme, we engineers are also often guilty of anchoring our world-view too firmly. A project team can continue to stick to completely unrealistic project "first oil" dates which no one believes can be achieved. Convenient numbers are used to develop a case: averages are often used for layer of protection analysis (LOPA) or QRA, while the peak value might be a lot higher, and the resulting solution might be insufficient to deal with the hazard. Late in detail design, the design team can often switch from "management of change" to "management of no change", where no changes are accepted even if they are needed, or even if the final design will be less safe or effective.

groupthink 

Some of the HAZOPs I chair are short, maybe one or two days to examine a modification. The nodes are arranged so that the higher-risk items are addressed earlier, and it can be hard to keep the thinking about the hazards sharp enough, especially when the last node is being considered and the end of the day is looming.

    It's at times like this that I really like the way one of my clients behaves and thinks. To avoid any embarrassment, I’ll call him ‘Tom’. When we've reached the end of the day and I ask "are there any other issues that we'd like to raise?", I watch for Tom, and if I can see that he has entered deep-think mode, then we wait for him. Many times, he's asked the killer question, "but what if?", and identified something new, something quite tricky, something that the rest of the team has missed. And discussing that scenario has resulted in better designs, better risk management, and often lower cost as well.

    Tom is pretty impervious to groupthink, and doesn't let the finish line ahead distract him from what we are meant to be doing. It can exasperate a team to deal with a Tom-type figure in a meeting, but given a choice, I'd always have at least one of him around.

    A variation on this is the long HAZOP, where the same chair, scribe, and technical team are in the same room for a long time. We inevitably generate "norms" of behaviour and assessment, but it can be very difficult even to identify, never mind assess, where these norms are inappropriate. This groupthink can often be minimised by rotating personnel, typically the operations representatives, into the team to keep it fresh. But it does take a pretty strong character to challenge a set of norms once they are established in a team.

egocentrism

There was a line in the American comedy Ally McBeal where the title character was asked "How come your problems are the most important?" The answer was "Because they're mine!" And we're all like that.

    We focus on our own point of view, ignoring that others don't have all the information or understanding that we have. If we don't face the consequences of our decisions ourselves, it's hard to imagine the consequences for others. In the offshore oil and gas industry, it is quite possible to have spent a long and successful career designing facilities without ever seeing the metal you have been slaving over for years, so what could be significant operations problems are not even noticed, never mind designed out.  

    Another interesting concept is the much-used "lessons learned" approach. Project teams are happy to sit down at the end of the project and publish a nice report on what went well and what lessons should be learned, for posterity. The thing about lessons learned is that publishing is only half the battle. Not until the person who needs that piece of information has been able to receive it, understand it, and internalise it for future use can you claim that "lessons learned" have indeed been learned. Before that, they are "learning opportunities", and quite weak ones at that.

    So we are more likely to avoid repeating our own mistakes than to avoid repeating the mistakes of others. We engineers are more likely to focus on significant incident causes within our own experience, rather than on the highest-priority incident causes. We are more comfortable interacting with others like ourselves, so we organise ourselves into single-discipline silos, or "communities of practice", and then hope that mashing together that approach with the other disciplines will give us the best integrated solution.

    A human tendency is also to underestimate the likelihood of an event if it hasn't happened to us personally (which may well affect a HAZOP team's assessment of likelihood) and to overestimate its frequency if it has. In HAZOPs (and to a lesser extent LOPAs) we tend to use the team's view of likelihood rather than going out and gathering evidence that might tell a different story (and which would also take time and money). We've seen many designs where group estimates of likelihood form the basis for a lot of SIL-rated equipment, but where there is little post-start-up review to check that those estimates were correct.
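    To see why that matters, here is a rough sketch of the arithmetic, with purely illustrative numbers rather than figures from any of the designs mentioned. In a simple LOPA, the required probability of failure on demand (PFD) of the safety instrumented function is backed out from the team's estimates:

\[
PFD_{req} = \frac{f_{target}}{f_{IE} \times \prod_j PFD_{IPL,j}}
\]

    With a target frequency of \(10^{-5}\)/yr, an estimated initiating event frequency \(f_{IE}\) of 0.1/yr, and one independent protection layer with a PFD of 0.1, the required PFD comes out at \(10^{-3}\). If the true initiating frequency turns out to be nearer 1/yr, the requirement tightens to \(10^{-4}\), roughly a full SIL level more onerous, which is exactly the sort of gap a post-start-up review of the original estimates would pick up.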

admit it, you're biased

Humans are complex beings, the product of experience, education, talent, and interactions. We can be amazingly good at doing complex things, understanding how to process raw materials into the basis for our life across the planet. And while a lot of things are being computerised, a recent study3 predicted a 99% chance that the job of telemarketer would be replaced by computers within 20 years, and even a 55% chance that commercial pilots would no longer be needed, but only a 1.7% chance that chemical engineers wouldn't exist.

     So we are difficult to replace with a computer, but still flawed. The more you read in this area, the more you learn about what we can get wrong. Rather than tackling every bias ever, I'll leave you with some homework: some opportunities to learn more.

 

OTHER KEY BIASES

 Loss aversion. We feel losses more acutely than gains of the same amount, which makes us more risk-averse than a rational calculation would recommend. 

 Sunk-cost fallacy. We pay attention to historical costs that are not recoverable when considering future courses of action. 

 Escalation of commitment. We invest additional resources in an apparently losing proposition because of the effort, money, and time already invested. 

 Controllability bias. We believe we can control outcomes more than is actually the case, causing us to misjudge the riskiness of a course of action. 

 Status quo bias. We prefer the status quo in the absence of pressure to change it. 

 Present bias. We value immediate rewards very highly and undervalue long-term gains. 
