When used correctly, criticality analysis gives companies a very powerful tool for ranking their assets, prioritizing their workloads, and managing their capital spending.
Unfortunately, in their drive to achieve these sorts of results, many practitioners regularly misapply criticality analysis. In fact, one could say there is a cult of criticality out there, trying to use some form of matrix approach to solve every part of their maintenance problems.
In some cases the results are relatively harmless, and the only negative impact is a tremendous waste of time. On other occasions, however, misapplication of criticality analysis produces results that are counterproductive, dangerous, and give asset owners a false sense of security.
I can't tackle all of the reasons why criticality analysis leads to these sorts of problems here; that would take a full chapter of a book. But there are some clear guidelines that may help you avoid them in the future.
1. Always and only at the level of the failure mode.
It is not uncommon to see "practitioners" applying criticality analysis at the level of the equipment, the assembly, or even at the level of the "principal functions" (whatever those are).
This practice is not only uninformed, it is extremely dangerous.
You cannot know the relative importance of an asset unless you know what happens when it fails.
This means understanding all of the functions, all of the functional failures and failure modes, and all of their consequences.
Any criticality analysis done without going to this level is destined to produce results that are lightweight, inaccurate, and potentially misleading.
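To make the idea concrete, here is a minimal sketch of the hierarchy involved, written in Python. The class names, fields, and the severity-times-likelihood scoring are my own illustration, not taken from any standard.

```python
# A sketch of the hierarchy criticality analysis must reach down to.
# Names and the scoring scheme are illustrative only.
from dataclasses import dataclass, field

@dataclass
class FailureMode:
    description: str   # e.g. "bearing seizes due to lack of lubrication"
    consequence: str   # "safety", "environmental", or "operational"
    severity: int      # consequence severity, say 1 (minor) to 5 (severe)
    likelihood: int    # likelihood of this failure mode, say 1 to 5

    def criticality(self) -> int:
        # Criticality is only meaningful here, at the failure mode level,
        # because only here do we know what actually happens on failure.
        return self.severity * self.likelihood

@dataclass
class FunctionalFailure:
    description: str   # e.g. "unable to pump at 500 l/min"
    failure_modes: list[FailureMode] = field(default_factory=list)

@dataclass
class Function:
    description: str   # e.g. "pump water from tank A at 500 l/min"
    functional_failures: list[FunctionalFailure] = field(default_factory=list)

@dataclass
class Asset:
    tag: str           # e.g. "P-101"
    functions: list[Function] = field(default_factory=list)
    # Deliberately no criticality() method at this level: an asset-level
    # score would hide the very information the analysis exists to expose.
```

Note that only FailureMode carries a score; rolling scores up to the asset level is exactly the shortcut this section warns against.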
Some great examples of criticality analysis at work:
1. Prioritization of corrective work orders (works arising from...)
2. The criticality matrix in RBI (risk-based inspection), which is always at the failure mode level.
3. Criticality analyses prior to performing a Safety Instrumented Systems project. This is relatively easy to do: most safety instrumented systems have only one function, so the failure modes are relatively straightforward.
2. Never sum the answers
Comparing operational risks to safety risks is where this sort of thinking always comes unstuck. There seems to be a belief that we should always go for the next-highest criticality action or activity, when this is actually not true.
It is also impossible to produce anything (and I have seen a heck of a lot of these now) that truly gives you the capability to compare operational/economic risks with safety/environmental risks.
The tactic that is often used (erroneously) is to quantify the scores in every area of criticality, sum up all the criticality scores, and then choose the highest, the next highest, and so on. Sounds logical, right? It has always been a very intoxicating argument.
But it is wrong... The result is often that low safety risks get treated before high safety risks, because they also carry high operational costs, which catapult them to the front of the line.
The result? High safety risks being left unmanaged.
The alternative...
a) Score only the highest consequence: the first one you come to.
As with an RCM analysis, if you decide that a failure mode carries an intolerable risk of a safety event, then that is how it needs to be managed. Its other environmental or operational consequences do not matter. Safety wins, every time.
b) Treat each failure mode according to its consequences.
So what do I do? I have one failure mode with an intolerable risk of a safety incident, and another with $10,000,000 attached to its failure. Which do I manage first?
Always the intolerable safety items. Then the intolerable environmental integrity elements. No need to debate, compare, or work through a cost/benefit calculation.
Safety wins: get it to a tolerable level. Then environment: get it to a tolerable level also. Then deal with the economic issues. Do not overcomplicate things.
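To see why summing comes unstuck, here is a minimal sketch of both approaches side by side. The failure modes, scores, and tolerability thresholds are invented purely for illustration.

```python
# Contrast: summed criticality scores vs. consequence-ordered treatment.
# All scores and thresholds below are invented for illustration.

failure_modes = [
    # (name, safety, environmental, operational) on a 1-10 scale
    ("seal leak, toxic release", 9, 1, 1),   # intolerable safety risk
    ("gearbox failure",          2, 2, 9),   # low safety risk, very costly
    ("pump cavitation",          1, 1, 6),
]

# The erroneous tactic: sum the scores and work from the top.
by_sum = sorted(failure_modes, key=lambda fm: sum(fm[1:]), reverse=True)
print([fm[0] for fm in by_sum])
# -> gearbox failure (13) outranks the toxic release (11): a low safety
#    risk jumps the queue on the strength of its operational cost.

# The alternative: never sum across consequence types. Clear intolerable
# safety risks first, then environmental, and only then economic.
SAFETY_TOLERABLE = 4   # illustrative thresholds
ENV_TOLERABLE = 4

by_consequence = sorted(
    failure_modes,
    key=lambda fm: (
        fm[1] <= SAFETY_TOLERABLE,   # intolerable safety first (False sorts first)
        fm[2] <= ENV_TOLERABLE,      # then intolerable environmental
        -fm[3],                      # only then by economic impact
    ),
)
print([fm[0] for fm in by_consequence])
# -> the toxic release is handled first, every time.
```

The second sort never trades safety off against money: the consequence types are compared in strict order, which is exactly the "safety wins" rule above.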
Even the HSE out of Great Britain has come out against this practice.
3. Never as a filter!!! (Ever)
I have seen this applied two or three times now: once in the infrastructure industry in the UK, a second time in the electricity industry in North America, and a third time in a software application in the mining industry.
The thinking goes something like this.....
Now that I have all my strategies (from RCM), all my functional tests (from SIS), and all my replacement options (from, say, availability modeling), I want to reduce all of the activities to only those that are critical and require further attention.
This is idiot engineering at its best. Don't fall for it.
The methodologies and approaches mentioned above will, for the assets they are applied to, produce a safe minimum level of maintenance intervention. There is no room for a further layer of "optimization".
These types of approaches are usually developed and applied by people with only a scant understanding of what asset management is about, and they are fundamentally dangerous. In fact, they are more likely to cause safety-related incidents than an approach that does not use this foolish application of criticality analysis.
4. Prioritize wherever you can.
I have ranted about this many times. But essentially it is unwise to use criticality analysis to determine which assets should be analysed, or where capital should be spent. Where you can, it is far, far better to use prioritization methods such as bad actor reviews and AHP (which is fantastic, by the way).
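For anyone who hasn't met it, here is a minimal sketch of AHP's core mechanics, using the standard column-normalization approximation of the priority vector. The three candidate assets and the pairwise judgements are invented for illustration.

```python
# AHP sketch: derive priorities from pairwise comparisons.
# Candidates and judgements are invented for illustration.
import numpy as np

candidates = ["Pump A", "Compressor B", "Conveyor C"]

# Pairwise comparison matrix: entry [i][j] says how much more important
# candidate i is than candidate j (Saaty's 1-9 scale, reciprocals below
# the diagonal).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Approximate the principal eigenvector: normalize each column,
# then average across each row.
priorities = (A / A.sum(axis=0)).mean(axis=1)
for name, p in sorted(zip(candidates, priorities), key=lambda x: -x[1]):
    print(f"{name}: {p:.3f}")   # highest priority gets attention first

# Consistency check (Saaty): a ratio below roughly 0.1 means the
# judgements are coherent enough to use.
n = len(A)
lambda_max = (A @ priorities / priorities).mean()
CI = (lambda_max - n) / (n - 1)
RI = 0.58   # Saaty's random index for n = 3
print(f"consistency ratio = {CI / RI:.3f}")
```

The point of the exercise is the ranking, not the arithmetic: pairwise judgements from a bad actor review drop out as a defensible priority order for analysis effort or capital.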
Good luck.