Saturday, 13 July 2013

Changing the way we measure performance

A few years ago I became very disillusioned with the prevailing views on metrics, performance measurement and KPIs. 

Sure, I had all sorts of issues relating to the type of metrics being used and how they were being built. But it went further than that. 

I started to question what value we could possibly get from figures representing a fixed point in time, and exactly how many opportunities we were missing by relying on them.

Not only that, but with the technology available today, why are we settling for second best in how we view our metrics and how we display them?

The measures below are different. They take full advantage of existing technology, and allow us to see a fuller picture of asset performance, all with the goals of eliminating waste and improving performance. 


Availability and Utilisation


It seems pretty clear to me now that one of these measures is totally useless without the other. So much so that they should never even be included in a report without being presented together.

Why? Because 95% availability is useless if utilisation is only 43%. On its own it is just a club that maintenance can use to beat the operations people with. 

Better yet, they should be presented as independent lines on the same line graph, so that managers can see the impact of one on the other and where the money is truly being lost. 


Availability and Utilisation combine to form a new metric
Secondly, they should be multiplied together to form a new metric: Plant Effectiveness. I know this is two thirds of the OEE metric, and I know OEE is useful for many of you in manufacturing plants. 

But it has limited use for mobile haul truck operators, process refineries, electrical transmission and distribution, or even gravity circuits in a gold plant. 

New metrics:

  • Trended Availability and Utilisation on the same graph
  • Trended Plant Effectiveness (Av x Ut)
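
As a rough illustration of the idea, here is a minimal sketch of trending Availability and Utilisation on the same graph and deriving Plant Effectiveness from them. The file and column names are assumptions for the example only, not from any particular system.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed input: one row per week, with availability and utilisation
# already expressed as fractions of calendar time (0 to 1).
df = pd.read_csv("weekly_performance.csv", parse_dates=["week"])

# Plant Effectiveness = Availability x Utilisation
df["plant_effectiveness"] = df["availability"] * df["utilisation"]

fig, ax = plt.subplots(figsize=(10, 5))
ax.plot(df["week"], df["availability"], label="Availability")
ax.plot(df["week"], df["utilisation"], label="Utilisation")
ax.plot(df["week"], df["plant_effectiveness"],
        label="Plant Effectiveness (Av x Ut)", linewidth=2)
ax.set_ylim(0, 1)
ax.set_ylabel("Proportion of calendar time")
ax.set_title("Availability, Utilisation and Plant Effectiveness, trended weekly")
ax.legend()
plt.tight_layout()
plt.show()
```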

Bad Actors

This one always brings out the best in people. Normally because there is almost always disagreement on what to measure a bad actor by. Is it cost? Downtime? Frequency of occurrence? What exactly?

With today's technology we do not have to choose! We can have many aspects of performance presented to us in a single graphic. 

The graph below was taken from an average of five analyses of Longwall installations performed over a two-year period.

All were brownfield, operating assets, and all of them were working in very similar operating contexts. 

There are three areas of performance expressed here:
  • The X axis is the number of events, or the frequency when viewed over a specific time period.
  • The Y axis is the average time per event, and
  • The size of each bubble is the total time lost to that asset or failure over the period.
When taken together they tell you far more than any single metric or measure could tell you in one report. 
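
For anyone wanting to build a chart like this, here is a minimal sketch using made-up failure data; the asset names and numbers are purely illustrative and are not the Longwall figures referred to above.

```python
import matplotlib.pyplot as plt

# Illustrative data only: name -> (event count, average hours per event)
bad_actors = {
    "Roof supports": (120, 0.4),
    "Shearer": (35, 1.5),
    "AFC": (20, 2.0),
    "Drift conveyor": (3, 18.0),
}

fig, ax = plt.subplots(figsize=(8, 6))
for name, (count, avg_hours) in bad_actors.items():
    total_hours = count * avg_hours          # total downtime drives bubble size
    ax.scatter(count, avg_hours, s=total_hours * 20, alpha=0.5)
    ax.annotate(name, (count, avg_hours))

ax.set_xlabel("Number of events (frequency)")
ax.set_ylabel("Average time per event (hours)")
ax.set_title("Bad actors: frequency vs duration, sized by total downtime")
plt.tight_layout()
plt.show()
```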

For example, the roof supports in these analyses were never going to hit the big time in terms of major shutdown reasons. But because there were so many events, even though each was small in duration, it all adds up. 

At the other end of the spectrum are the infrequent but massively disruptive failures on the drift conveyor. 

A maintenance or operations manager armed with information like this every week or month has a very powerful tool to pass to the Reliability Engineers with the attached note "Fix this!".

Time Utilisation Analysis

Time utilisation models are a very powerful tool for seeing where the money is being lost. But you have to see them in a trended fashion, and you have to act on what they are telling you! (I know, big problem...)
Time utilisation is a powerful view of where the money is being lost
TUM graphs tell you how the plant has been used over the past week, month, quarter, year - whatever period you choose. They are very powerful because at a glance you can see the ratio between maintenance delays and non-maintenance delays, as well as the time an asset is being actively used versus sitting idle. 

However, what most TUM graphs tell you immediately is that you are not recording information at a level of detail that is of any use. For example, failures that occur when returning from shutdowns or turnarounds are rarely recorded as events, which is a major omission.
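
As a rough sketch of how a TUM roll-up might be produced from an event log, here is one way to do it; the event categories and column names are assumptions for illustration, not a standard.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed event log: one row per event with a start timestamp, a duration in
# hours, and a category such as "operating", "maintenance delay",
# "non-maintenance delay" or "idle".
events = pd.read_csv("asset_events.csv", parse_dates=["start"])
events["month"] = events["start"].dt.to_period("M").astype(str)

# Hours per category per month, then each month expressed as shares of its total
monthly = events.pivot_table(index="month", columns="category",
                             values="duration_hours", aggfunc="sum").fillna(0)
shares = monthly.div(monthly.sum(axis=1), axis=0)

shares.plot(kind="bar", stacked=True, figsize=(10, 5))
plt.ylabel("Share of recorded time")
plt.title("Time utilisation by month")
plt.tight_layout()
plt.show()
```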

Planned / Scheduled Ratios

Work order analysis is rarely done well. Again, it is based almost solely on one indicator instead of several, or represents one point in time instead of a trend over a period. 

We regularly see measures like compliance reported without understanding the detail behind them, why they are what they are, or what we are supposed to do in response.
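
By way of illustration, here is a minimal sketch of trending two work order measures together rather than quoting a single point-in-time figure; the field names are assumed, not taken from any specific CMMS export.

```python
import pandas as pd

# Assumed work order export: one row per completed work order with a completion
# date and two boolean flags - was the work planned, and was it done in the
# week it was scheduled for.
wo = pd.read_csv("work_orders.csv", parse_dates=["completed_date"])
wo["week"] = wo["completed_date"].dt.to_period("W")

weekly = wo.groupby("week").agg(
    planned_ratio=("is_planned", "mean"),               # share of completed work that was planned
    schedule_compliance=("done_as_scheduled", "mean"),   # share done in the week it was scheduled
)

# A trend over the last quarter, rather than a single snapshot
print(weekly.tail(13))
```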

The next post in this series will include a range of metrics and measures that can be used to put a rocket under your team's planning and scheduling efforts.
