
Wednesday, 28 September 2016

New ways of measuring maintenance

I have long been disillusioned by the way we measure the maintenance function.
  • Despite being a key beneficiary of technological advancement, we fail to apply it to our metrics and KPIs.
  • We seem to be permanently stuck on static figures and trending common metrics like availability and utilization.
  • Our reported metrics rarely, if ever, contain anything diagnostic at either the strategic or tactical level.
For this reason I have continually tried to push the boundaries of performance management in several areas. First, by making much better use of modern technologies. Sometimes this means only more advanced use of Excel; sometimes it incorporates Business Intelligence tools such as Business Objects, QlikView and Tableau, or legacy tools like Cognos and CorVu.

Second, by representing and using metrics in a vastly different way. This is a pretty big theme: it crosses a range of areas, from chart types, to synergistic and antagonistic measures, through to leading and lagging indicators.

Tuesday, 29 October 2013

Asset Replacement Value as a benchmark

Benchmarking yourself against the leading companies in asset management globally is a tricky business. It is very easy to adopt something you read in a book without really understanding what the measure is actually showing.

For example, take the very common Asset Replacement Value (ARV) measure. This expresses total maintenance costs as a percentage of the ARV.

My own benchmarks for this metric are:

Best in class - 1.75%

Average - 2.3%

Worst in class - 5.3%
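To make the arithmetic concrete, here is a minimal sketch of how the figure is derived (the cost and ARV values are invented for illustration):

```python
# Annual maintenance cost as a percentage of Asset Replacement Value (ARV).
# All figures below are invented for illustration.
annual_maintenance_cost = 4_200_000    # total maintenance spend for the year ($)
asset_replacement_value = 240_000_000  # estimated cost to replace the asset base ($)

arv_percent = annual_maintenance_cost / asset_replacement_value * 100
print(f"Maintenance cost as % of ARV: {arv_percent:.2f}%")  # 1.75% - best in class
```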

But on further analysis... is this measure really of any use?

Saturday, 3 August 2013

Changing the way we measure performance - Planning and Scheduling (Pt1)

Planning and scheduling metrics are the most common set of metrics within the maintenance department, yet like many others we find ourselves stuck in the same cycle of measuring things that either provide us with low value, or are easily manipulated.

As with every other type of indicator, the goal is not to produce outstanding figures every time, which is how scheduling metrics generally end up being used in company dashboards.

The real goal is to be able to highlight areas of poor performance that we can improve, or to measure parts of our process to see how we are managing our workload. Unfortunately, nobody has informed the army of SAP consultants circling the globe forcing unworkable systems on companies... but that's another story.

The metrics included in this post are not run of the mill, and are designed to produce a specific result.

Thursday, 1 August 2013

Changing the way we measure maintenance - Mobile Assets

Recently we spoke about how our current approach to KPIs and performance reporting leaves a lot to be desired: in particular, the fact that we still represent performance as one point in time (one figure), that we often misuse the metrics we do have, and that we do not take advantage of advances in technology.

Although there were a range of suggested metrics included in that post, I wanted to post some actual (anonymized) data to try to show how powerful it is to use modern technologies for asset improvement.

The graph presented in this post is called a Motion Graph and it is developed using Google Drive. 

Monday, 22 July 2013

Failure reduction - Targeting Analysis

I became disillusioned with Root Cause Analysis many years ago when I first bumped into Mr Bob Latino from the Reliability Center in Virginia. (A real class character and all-round nice guy.)

Since then I have worked a lot in this area and my continued work has sharpened my appreciation of the problems in this field.

Initiatives end up being long lists of redesigns waiting for capital/approval/time/labour, and all the while the problem persists. Or we dedicate too much of our time to things that are going to take weeks to analyse correctly. Or, worse still, we end up blaming the fallible human and sending them off for unneeded retraining.

This post is the first in a series of posts on ReflexRCA: a method for embedding root cause analysis across the organization using straightforward principles and methods, based on tools you probably have at your disposal today.

The series will include:
  • Targeting Analysis
  • Causal Analysis (Depth of analysis)
  • Fixing failure (Resolving problems)
  • Case Studies (From a range of different industries)
We won't be talking here about issues such as implementation or justification of these programs. I am pretty sure that most companies see these as self-evident.

Saturday, 13 July 2013

Changing the way we measure performance

A few years ago I became very disillusioned with the prevailing views on metrics, performance measurement and KPIs.

Sure, I had all sorts of issues relating to the type of metrics being used and how they were being built. But it went further than that. 

I started to question what value we could possibly get from figures representing a fixed point in time, and exactly how many opportunities we were missing by doing so.

Not only that, but with the technology available why are we settling for second best in relation to how we view our metrics and how we display them?

The measures below are different. They take full advantage of existing technology, and allow us to see a fuller picture of asset performance, all with the goal of eliminating waste and improving performance.

Saturday, 12 January 2013

The 8 task types of RCM


About 7 years ago we started to introduce the concept of 8 Maintenance Types in the RCM training we deliver.

This was in response to my involvement in the evolving whole-of-life cost and management models throughout the infrastructure sectors of the UK at that time (circa 2003-2006).


The results have been so impressive that this has become part of the story of RCM that we tell during the RCM 101 course, and is developed further with additional techniques and applications during the RCM Analyst training course. 


Some of the things to come out of this over the years have amazed even me, and the impact tends to change depending on the application within each sector, company, or even asset. 

One of these has been the total uselessness of commonly used metrics in most companies, particularly the planned vs corrective work order graphs.


Another impact has been on the approach that many companies take to forecasting whole-of-life costs, change-out dates, and CAPEX spending way off into the future.

Sunday, 23 January 2011

The "key" thing about performance indicators

KPI is a term that gets thrown around pretty freely these days; normally it gets used to refer to every single metric that an organization uses or can think of.

In reality it means what it says: the KEY measure that tells you what you need to know about performance, costs, process accuracy or whatever else you may need to review.

This begs the question, what would actually be the KEY performance indicator of your plant or process?

Sunday, 5 July 2009

One metric to rule them all!!


I get asked pretty regularly about metrics. Being the author of a book on the subject, it doesn't surprise me... but the most common question is generally something like...
"Which metric do you recommend to give an overview of reliability improvement?"
This is almost always followed by the questioner's own preference for Availability, OEE, MTBF or something similar.

The only answer that I can give, from my own experience, is that no such thing exists.

Tuesday, 27 May 2008

Uses of MTBF

The goal of this article is to take a slightly different view of MTBF, and to look at it in an RCM context as well as a proactive context. It is drawn from a book that was two years in the writing, a project that has traveled with me across several continents, a couple of roles, an operation and a range of projects. It has been one of the great learning journeys of my life.

Mean Time Between Failure, or MTBF, is one of the most widely used metrics in physical asset management. Generally, companies use it as a guide to the performance of their physical assets, helping them to identify assets or processes that are causing lost revenue or cost-related issues.
However, although widely applied, MTBF is still the subject of some confusion. Moreover, MTBF is useful for a range of different purposes, giving organizations greater ability to increase the net present value of their physical asset base.
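At its simplest, the calculation is just total operating time divided by the number of failures over the same period. A minimal sketch (the figures are invented):

```python
# MTBF = total operating time / number of failures in the same period.
operating_hours = 8_000  # hours the asset was actually in service this period
failures = 5             # functional failures recorded in the same period

mtbf = operating_hours / failures
print(f"MTBF: {mtbf:.0f} hours")  # on average, 1600 hours between failures
```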


Monday, 8 October 2007

User Adoption Metrics

Technology and physical asset management are two areas that go hand in hand. From mobile working solutions, to ERP systems, to niche reliability programs, we find almost every aspect of what we do is becoming more and more dependent on technological solutions.

Yet why is it that we often feel as if we are treading water? Our companies spend a fortune, often in the millions, to implement the latest gadgets, gizmos and fads - but for some reason they don't make a difference.

I'm sure there are many reasons for this, but one of the most dramatic is a failure to take into account the vital area of user adoption. Without making sure that users are entering data AND using the system, it is never going to do what we want it to do.

These are some widely applicable metrics that I have been using over the years. Note: they do not replace a comprehensive data scorecard for ongoing management, but they do give a good idea of how effective the implementation and take-up strategies have been.

All of these are monthly metrics, but they could also be generated with greater or lesser frequency depending on your specific requirements. Again, just some basic ideas designed to generate thought at your company. These measures are aimed at a standard CMMS style implementation.

The image above is the Eason Matrix. This shows the different levels of intervention required to get user adoption up to an effective level during the implementation of any corporate software.

It clearly shows that the difficulty of user adoption grows with the scale of the implementation program. A program focused on a big-bang style rollout will often find itself fighting to get users to embrace the new technology, while a longer, more managed implementation path will meet with less resistance and is more likely to result in effective implementation.

The selection of which method to use in adoption will depend on a range of variables, and the type of metrics and dashboards that can be used will also change along with this.

System Usage

Always the first port of call. If your users are not logging in then there is no way they are going to be good adopters of the technology. However, just because they are logging in does not mean that they are using the system well, so I use a few additional measures (a sketch of the first two follows the list).

  • Users logged in each week (Out of total users with access) - Weekly
  • Users not logged in each week (Out of total users with access) - Weekly
  • Number of work orders / requests generated (Out of total users with the ability to generate work orders or requests) - Weekly
  • Routine Maintenance regimes added (Out of users authorized to add routine maintenance regimes) - Weekly (This is only useful at the beginning of an implementation)
  • Open actions per role versus past period - Weekly (Good where there are authorization processes in place)
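As a rough illustration, here is a minimal sketch of the first two measures. It assumes you can extract login events from the CMMS audit log; the field layout and figures are invented:

```python
# Weekly "users logged in / not logged in" measures from a login event extract.
from datetime import date, timedelta

total_users_with_access = 120
login_events = [  # (user_id, login_date) pulled from the CMMS audit log
    ("u001", date(2007, 10, 1)),
    ("u002", date(2007, 10, 2)),
    ("u001", date(2007, 10, 3)),
]

week_start = date(2007, 10, 1)
week_end = week_start + timedelta(days=7)

active_users = {uid for uid, d in login_events if week_start <= d < week_end}
print(f"Users logged in this week: {len(active_users)} of {total_users_with_access}")
print(f"Users not logged in this week: {total_users_with_access - len(active_users)}")
```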
Data Quality
This is a vital element of any adoption framework and will help tell you whether the correct fields are filled out, and whether they are filled out with the correct data (a sketch of the first measure follows the list).

  • Data integrity of issued work orders / requests (All fields filled that are expected to be filled)
  • Data quality of work orders (Fields filled out with incorrect data) This is an interesting metric because it assumes that there has been some forethought when implementing the CMMS. (See CMMS: A Time Saving Implementation, available on Amazon)
  • Key non required fields filled out (measurement of the quality of data above the minimum requirements)
  • Work order versus HR comparative reports (To make sure that the figures within the work orders match the figures from HR management)
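Here is a minimal sketch of the first of these, the data integrity measure. The field names are invented; map them onto your own CMMS schema:

```python
# Share of work orders with every expected field filled out.
expected_fields = ["equipment_id", "failure_code", "labour_hours", "completion_date"]

work_orders = [  # invented records standing in for a CMMS extract
    {"equipment_id": "P-101", "failure_code": "BRG", "labour_hours": 4,
     "completion_date": "2007-10-01"},
    {"equipment_id": "P-102", "failure_code": None, "labour_hours": 2,
     "completion_date": "2007-10-02"},
]

complete = sum(
    all(wo.get(field) not in (None, "") for field in expected_fields)
    for wo in work_orders
)
print(f"Data integrity: {complete}/{len(work_orders)} "
      f"({complete / len(work_orders):.0%}) of work orders fully filled out")
```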
Business Performance

If the implementation has been carried out for all the right reasons then the ultimate test of effectiveness will be the changes to performance.

For a CMMS there are many areas of this, but the key one is efficiency. So measures such as the following would be adequate (a sketch of the first two follows the list):
  • Schedule compliance
  • Schedule confidence - Percentage of "planned" work orders within the finalized schedule.
  • Delay reductions
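A minimal sketch of the first two, assuming a simple extract of the finalized schedule (the records and field names are invented):

```python
# Schedule compliance and schedule confidence for one finalized weekly schedule.
schedule = [  # invented records standing in for a schedule extract
    {"wo": "1001", "origin": "planned",   "completed_as_scheduled": True},
    {"wo": "1002", "origin": "planned",   "completed_as_scheduled": False},
    {"wo": "1003", "origin": "breakdown", "completed_as_scheduled": True},
]

compliance = sum(wo["completed_as_scheduled"] for wo in schedule) / len(schedule)
confidence = sum(wo["origin"] == "planned" for wo in schedule) / len(schedule)
print(f"Schedule compliance: {compliance:.0%}")  # work completed as scheduled
print(f"Schedule confidence: {confidence:.0%}")  # 'planned' share of the schedule
```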
There are of course many more and each implementation is different, but hopefully these provide at least some food for thought when you are looking to try and determine whether your newest technology investment is being used as you would like it to be.

Saturday, 17 March 2007

A Different Point of View: Aligning your Maintenance Scorecard for Maximum Impact

Introduction

Maintainers are used to change; after all that has happened in the past 15 years, who wouldn’t be? But few could have foreseen the incredible surge of attention that physical asset management has started to attract over the last 5 years in particular.

Events such as the swift adoption of PAS55 by a range of global infrastructure companies, the Energy Act of 2005 in the USA, and the recent resignation of Lord Browne of BP following the Alaska pipeline and Houston refinery incidents, all point to one undeniable fact: physical asset management is now a very serious business!

So much so that today we are seeing the convergence between physical asset management and financial asset management in many areas of human endeavor.

  • Financial regulators in the United Kingdom are placing an increasing focus on the rigor that is put into plans for capital spending, a multi-billion dollar area of activity in each 5-year period.
  • Private equity firms the world over are acquiring infrastructure companies such as Thames Water in the UK, BAA (the UK airport owner), AltaLink in Canada, and practically everywhere else where there is an opportunity. Interestingly, these companies are not being stripped of their assets. Rather, they are being restructured to maximize the income from those assets over the long term.
  • Many European governments now use their infrastructure assets, and the contracts for their management, as a means of financing public spending in these areas. A modern and innovative spin on asset management.
  • Many large estate owners now use comprehensive asset management contracts designed to shift risk of asset failure away from themselves and onto those providing the service. Another unique feature of modern asset management.
In response to this many companies find themselves wrestling with how to extract maximum economic value from their assets while ensuring their continued environmental integrity and safe operation. This is no small feat given the scale and complexity of modern asset management, and the limited funds available to it. It is a challenge that has caused problems for even the most expert of asset-intensive companies.

Without an organizational compass many find themselves implementing one initiative after another. Many of these initiatives by themselves are sound and will provide some of the benefits sought, however often they are in conflict with each other and sometimes they are actively counter productive.

The Maintenance Scorecard (MSC) provides such a compass for the development, implementation and monitoring of strategy in a unified and consistent manner, ensuring that all of the company’s resources are focused on the corporate goals.

One of the first questions to arise when learning about the MSC process regards the creation of perspectives. What are they? Why are they necessary? And which perspectives are recommended? All are valid questions, and they can be difficult to gain agreement on.

A balanced point of view

In the past we were always stereotyped as a center where cost needed to be controlled. Often maintainers were involved in decision making only when things were already going wrong, and when there were costs to be reduced. As such we were viewed from two very simple perspectives by senior management, those of failure reduction and cost control, and normally in a reactive manner only.

Today, if we were to run our asset management departments with only these two overriding priorities we would definitely be limiting the potential of our assets, and potentially be managing things in an almost unethical manner.

So this is the essence of the perspectives. They provide a framework for viewing the asset base, and modern asset managers need more than one or two simplistic views of how their assets are performing.

Even today many efforts at building a scorecard end up focusing on only the one or two areas related to direct performance measurement: normally some form of measurement of uptime, and some form of measurement of direct cost spending. They forget the focus on value for money, as opposed to reduced spending, and often leave safety measurement to the stock-standard incident frequency indicators safety departments have been using for decades.

CEOs of many asset-intensive companies today realize this. Recent events such as the Baker report into the Houston BP refinery explosion, the fallout from the Hatfield and Potters Bar train crashes, and rafts of new legislation globally have ensured that every senior asset manager takes a more rounded view of how assets perform.

If you look back over all of the requests for additional information from senior management, they generally fall into one of the following four categories (in no particular order):
  1. How much are they producing?
  2. How much is it costing me?
  3. Am I getting good quality output on a consistent basis?
  4. Are we hurting anybody or damaging the environment in the process?
If we analyze these questions we can see that they provide the basis for all areas where physical asset management can provide a substantial impact on the companies responsible for managing them.

Often attempts to define perspectives go off the rails because people focus on how we manage the assets, rather than the impact the assets create for us. For example, a common misconception is to try to generate indicators in perspectives that align to functions within the company. Maintenance, operations and reliability are common groupings when this is done.

There are many issues with this approach. As always we need to be conscious of the behaviors we are driving. In this scenario we have a clear-cut case of pitting existing silos of activity against one another, when what we need is the realization that all areas of activity contribute to asset performance.

Another frequent approach is to break the perspectives down into the different resources required to achieve success. A focus on human resources, equipment performance, systems, knowledge and other areas is often included. Like the approach above, this has one classic flaw: all of these perspectives cut across all areas where physical asset management can have an impact.

So any measurement in one area can often be used in another area, confusing the scorecard and watering down its ultimate message. So what is the ultimate message of the Maintenance Scorecard?

Basically, the message is to manage the performance of the assets in a balanced fashion, so that our goals can be weighted in each of the areas where assets have an impact; in other words, to provide the company with an organizational compass for managing its physical assets.

Recommended Perspectives of a Maintenance Scorecard

We have already seen how the executive branch of many companies sees its physical asset base. We can begin to classify these views into perspectives, as below:
  1. How much are they producing? (Productivity (1))
  2. How much is it costing me? (Cost-Effectiveness (2))
  3. Am I getting good quality output on a consistent basis? (Quality (3))
  4. Are we hurting anybody or damaging the environment in the process? (Safety (4) and Environmental Integrity (5))
These four ways to view the asset base provide a pretty comprehensive picture of overall performance. However, there is still one point of view missing, and it is an area that is often forgotten or dramatically overdone.

The perspective is that of Learning (6), and it measures how well we are developing our corporate information to power future improvements. Of all areas of managerial activity, asset management is possibly the most reliant on information to sustain good performance over the medium and long term.

In modern asset management we suffer at both ends with regard to managing information. On one hand companies either collect far too much data on their assets, often wasting a lot of effort and time in doing so, or they collect next to nothing. (And what is collected is limited in its ability to help) Much has been written on data in asset management, the fact that it is vital for high confidence decisions and of the complexities of obtaining it. But what is not often written about is that successful asset managers will actually reduce failure data, not increase it, making this area even more difficult.

On the other hand, maintenance departments all over the world are leaking knowledge at a dramatic rate. Many of the experienced workers we have relied on for decades are retiring, in fact many were cut during the cost cutting of the eighties and nineties, and many young people are opting for careers other than engineering and asset maintenance.

So the learning perspective covers a wide range of areas, but it all revolves around the management of information (data + knowledge). For example, quality and integrity of data, effectiveness of training, and codification of knowledge into usable data all represent areas that we need to focus on in order to ensure that our successes are not short lived.

So in summary, a Maintenance Scorecard that provides a company with a balanced view of how its assets are performing will need to cover the following 6 areas of performance:
  • Productivity – How well are our assets performing? How well is our workforce performing? Is there any hidden productivity that we can unlock?
  • Cost-Effectiveness – Are we getting the best value for each dollar spent on the maintenance effort? If not, how can we lever even further value out of it? Note: this is not the same as low cost, a concept not supported by the author; low cost has a tendency to become high cost in the mid term, either directly or indirectly.
  • Quality – Are we delivering the level of quality required in terms of production? Are we delivering the level of quality required in terms of asset maintenance performance?
  • Safety and the Environment – Instead of reactively counting incidents, how can we proactively measure our exposure to the risk of asset failure in these areas?
  • Learning – How well are we managing the information (data + knowledge) we are learning from today’s activities, in order to fuel tomorrow’s improvements?
If you have previously considered a more thorough and sophisticated method for developing, implementing and monitoring strategy in your physical asset base then I hope that this article has provided you with something useful or at least something to ponder.

If you are still developing long lists of unconnected indicators, which bear only a passing resemblance to any objectives that your organization currently has, then I hope it has opened up additional areas where you may be able to continue to improve the performance of your physical asset base.

Wednesday, 14 March 2007

Problems uncovered in risk-based asset management

This article was a joint effort with Dr Paul Davies, Head of Global Risk Management for Lloyd's Register.

As companies move towards risk-based asset management, they need to be confident that their decisions will increase the profitability and productivity of their asset bases while minimizing the exposure to the risk of catastrophic events. According to Knowledge Based Management's Daryl Mather and Lloyd's Register's Paul Davies, this is particularly important for the chemical industry where the integrity of physical assets is a source of competitive advantage.

For chemical and process plant engineers the integrity and performance of assets is a key requirement to the safe and efficient manufacture, storage and distribution of products.

In simple terms: “It's looking after the plant, the kit, the tools we need to do the job – it's common sense, it ain’t rocket science, and we've been doing it for years.”

All true, but as we know, common sense isn't that common and rocket science certainly isn’t beyond the wit of the chemical engineer. So why is asset integrity management (AIM) moving up the process industry's agenda?


Tuesday, 13 March 2007

Leading and Lagging indicators

Performance measurement is one of the methods at the heart of propelling an organization towards breakthrough performance. This generally takes the form of performance indicators, key performance indicators, and measurement programs all designed to focus the attention on various areas of performance.

Within the Maintenance Scorecard (MSC), the approach taken is to create metrics based on desired performance levels, rather than employing some form of measurement-by-pick-list approach to building a metrics program.
The old adage is “if you can measure it you can manage it”. The Maintenance Scorecard takes a slight turn from this: before you think about how to measure it, first work out what it is you want to manage!

Regardless of the approach taken, at some stage the organization finds itself considering some of the advanced techniques within performance measurement. These include strategic theme key performance indicators, leading and lagging indicators, opposing indicators, risk-based indicators, and modern display techniques.

Within this short article I am going to try to clarify how leading and lagging indicators are treated within the Maintenance Scorecard, and how they can add immediate value to your company’s performance management efforts.

What exactly are Leading and Lagging Indicators?

It pays to remember that we are talking about measuring and managing performance within this area of the discipline. So we need to directly relate these titles back to the measurement of performance.

Quite simply, leading indicators lead performance and lagging indicators lag performance. In other words, one tells you where the performance of your assets, teams, processes or other resources is going, and allows you to act in a proactive manner; the other tells you where it has been, and allows you to take reactive action!


At first glance this seems counter-intuitive, doesn’t it? How can we measure things that have happened, and think we are going to be able to predict future performance levels? The trick is to fully understand the processes you have in place, and how they fit into the rest of your day-to-day management of the physical asset base.

Some examples of Leading Indicators

Leading indicators allow you to take action proactively. To truly be a leading indicator, a measure needs to predict, or provide some indication of, future performance levels and/or issues.

For example, most work order systems are managed through some form of priority rating of the corrective, or reactive, work orders in progress. This rating is often related to time, and is used to determine how soon after creation the work order should be done.

It is used in capacity scheduling, ad-hoc work order execution and a range of other business processes that have to do with work management. The basis of this process is a link to time. This is normally done using a combination of the consequences of the failure mode if it is left unattended, and the importance of the equipment to the company.


Within this process a performance indicator, or report, would be the Age vs. Priority Report. This report displays the number of work orders, in their respective priority groupings, that have not been completed on time. Some of these also display how late the work order is.


Figure 1: Age versus priority example



The graph in figure 1 clearly shows that a number of Priority 1 work orders are between 5 days and 1 week late. In this case we don’t know what the time horizon is for Priority 1 work orders, but it is probably less than one week! If you look at the 3-week mark on this graph, one or two have made it out this far. Not good!

So, what is this telling us? It really depends on the underlying work order prioritization method being used. But basically it is indicating that we are faced with a higher level of risk than our system is supposed to manage. This probably means that something is about to fall apart within the very near future.
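As a rough illustration of how such a report might be assembled from a work order extract (the data, field layout and bucket boundaries are all invented):

```python
# Bucket open work orders by priority and by how far past due they are.
from collections import Counter

open_work_orders = [  # (priority, days_late) from the CMMS
    (1, 6), (1, 21), (2, 3), (2, 10), (3, 0),
]

def age_bucket(days_late: int) -> str:
    if days_late <= 0:
        return "on time"
    if days_late <= 7:
        return "up to 1 week late"
    if days_late <= 14:
        return "1-2 weeks late"
    return "over 2 weeks late"

report = Counter((prio, age_bucket(late)) for prio, late in open_work_orders)
for (prio, bucket), count in sorted(report.items()):
    print(f"Priority {prio}: {count} work order(s) {bucket}")
```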


This metric, as with any other, should be produced in such a way as to be able to drill down into the data that produced it. This would take us to the late work orders, the equipment they were raised on, the failure mode, or potential failure, that has triggered them and possibly even the consequences of them going horribly wrong.


This is the essence of leading indicators; they tell you where performance is likely to go. Things aren’t bad in the priority example yet, but it looks like they soon will be! If used correctly, leading indicators can add a proactive element to what is normally a reactive activity.

Leading performance indicators are few; the best proactive measures come from a specific need within a specific company, rather than selecting from a range of “off-the-shelf” measures.
Schedule compliance (yup, that one) is a good example of another leading indicator, but with a twist. Normally this metric is used to evaluate how the scheduling and execution functions are working together, how the workload is being managed, and as an indicator of how much unexpected work occurred and pushed the schedule out.

For instance: from RCM we learned that an On-condition task is scheduled to occur at a frequency less than the P-F Interval. I won’t go into why as that is a whole different area, but for the sake of this article we will take this as the principle.

Therefore there is only a limited timeframe for the on-condition or predictive task to be carried out. If the P-F interval is 4 weeks, the planned frequency of inspection is, say, two weeks, and the actual inspection frequency is 6 weeks, then we can see immediately that we are only going to predict this failure mode occurring by dumb luck!

Once is okay, we can react to that, but if the task is regularly done at periods longer than the P-F interval then the most likely outcome is that we will have an unpredicted failure on a failure mode that our analysis told us needed to be predicted.
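Using the figures from this example, the check itself is simple to automate. A minimal sketch:

```python
# P-F interval compliance check, using the figures from the example above.
pf_interval_weeks = 4       # time from detectable potential failure to failure
planned_interval_weeks = 2  # inspection interval the analysis called for
actual_interval_weeks = 6   # interval the inspection is actually done at

if actual_interval_weeks > pf_interval_weeks:
    print("Warning: inspections are further apart than the P-F interval - "
          "detecting this failure mode is now down to dumb luck.")
elif actual_interval_weeks > planned_interval_weeks:
    print("Inspections are running late, but still inside the P-F interval.")
else:
    print("Inspection interval is compliant.")
```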

Again, the underlying concept is a deep understanding of what it is that your processes and regimes are trying to accomplish, and the effects of these on other areas of performance. And again a standard metric can be used to give a vastly different viewpoint.
Some examples of Lagging Indicators

Lagging indicators are just about all of the rest. These are indicators that tell you when something has gone wrong or is in the process of going wrong. MTBF, Availability, Planned versus reactive ratios (if these are still used) are all examples of lagging indicators.

Although we have spent most of this paper on leading metrics, lagging metrics are also very important. Without lagging indicators we have no idea of the impact, good or bad, of the work we are doing on a daily basis, of our improvement initiatives, of recent modifications and so on.


I hope this has cleared up some issues regarding leading and lagging measurement of performance in asset management. The intention of this article was to enable you to apply these principles to your workplace immediately, so if you do, or if you can see how they would be applied, please send me an email and let me know! daryl.mather@gmail.com

Monday, 5 March 2007

Unlocking the Hidden Workforce

After decades of cutting costs through traditional methods, further efficiency gains are either limited or physically impossible. 

This article looks at sophisticated new metrics and methods to unlock the hidden maintenance workforce in your plant.

The Maintenance Productivity Factor (MPF) was created by Daryl Mather in 2002 and is used widely in productivity audits, shutdown efficiency reviews and developing ongoing plans to improve efficiency. For information on Reliability Success services in this area please send an email.


Increasing challenges for maintainers


After decades of evolving in virtual isolation, physical asset management now attracts interest from corporate management, institutions, regulators and government bodies. 

Asset managers feel the impact of this attention in two areas:
  • First, in increasingly sophisticated expectations: high-confidence, defensible budgetary submissions to regulators, accurate whole-of-life cost forecasts for shareholders, and confident management of asset risk to tolerable levels.
  • Second, in the increased pressure to raise the return on capital through the traditional areas of efficiency and cost savings (labor and materials).
On one hand this has invigorated the interest in techniques and issues related to reliability, which is a welcome change to what maintainers have been used to in the recent past. 

On the other hand it has also created a significant issue for asset managers.


Tuesday, 20 February 2007

Leading us into better performance

Developing and implementing a measurement program for managing performance at your site is always a good idea. Regardless of how well you think you are doing, or how little downtime you actually have, measurement programs can always highlight areas of improvement and areas where you could take a closer look.

However, one of the common problems of relying on measurement systems is that it is often like driving down the highway looking in the rear view mirror. Instead of seeing where you are going you are looking at where you have been. How many times have you looked at the metrics for that month and thought “It would have been good to know about that before it happened!”
This is the fundamental problem of performance indicators as a guide to plant performance: because they work by displaying historical transactional data, everything you see in a metric has already occurred. This is why, for many years, metrics have been seen as lagging indicators of performance.

Even though it is reactive it still gives us an insight into the causes of problems, their frequency and a range of other information that we can use for improvement. But if we wish to use metrics and measurement systems as part of a corporate asset management approach, then there is a need to use them to tell us about problems before they happen.
These types of indicators are termed leading metrics and, as the name suggests, they lead performance, or tell us what is likely to happen with some aspect of performance in the future. Defining leading indicators is often challenging, and requires a totally different view of measurement and how it can assist you.

Here are some tips for developing your own leading metrics to manage your physical asset base; if you think of any additional ones I would like to hear about them, so please send me an email. The underlying principle of all of these techniques and areas is that they all lead performance.

  • A new look at old metrics – Often companies are employing leading metrics without even knowing it. A metric that I often quote is schedule compliance. At first glance this metric is telling us how we did in completing last week’s schedule as planned.
But if we look at it another way it can also be telling us about the level of risk that we are faced with. In the area of routine maintenance, all of the tasks we do are set at a certain frequency for a reason.
At that frequency we can be sure of capturing the early signs of failure, reducing the likelihood of an in-service failure, or keeping the risk of a multiple failure to a tolerable level. So if tasks are done late, or deferred, then you know that an element of the risk facing the plant has increased.

It doesn’t tell you exactly what and when, but it is leading because it indicates that things could start to go wrong. As the number of missed schedules increases, so too does the risk. A report like a “Missed Schedules Report” or something similar is often of use when the metric is applied in this way.
  • Tying in with predictive technologies – Every single application of predictive technologies is leading in nature. All of them are looking for the warning signs of failure - the signs that something very specific is about to go tragically wrong.

Try to tie in with any online condition monitoring information sources that are available and display them in a way that warns of potential dangers, or use captured data from visual inspections to grade how likely a failure is.
  • Using predictive techniques – Weibull analysis, RBI and RCM are all methodologies that contain an element of predictive thinking. Aside from the condition monitoring element of each of these, there is often an attempt to gauge remaining life and calculate risk accordingly.
There are often doubts about the accuracy of these due to the nature of failure data in asset management; however, in this instance they do not need to be 100% accurate, they just need to warn of potential dangers.
Setting up some form of standard Weibull calculator on failure data from critical assets, with a view to predicting end of life, is not as difficult as it once was (see the sketch after this list). Like other methods it could be done through modern reporting tools and a CMMS, or by dropping data into a local spreadsheet or database.
  • Look at the process – This is probably the area from which the most proactive measures can be gleaned. By measuring elements of the work processes in place we can get a view of when they are going wrong, and use that to infer future asset performance.
For instance, most companies categorize corrective and reactive work orders with priorities. These are often linked to the severity of the consequences over a period of time. An example could be a high level of vibration on a pump calling for a replacement item. If the work order is, say, a priority 2, then it could mean that if no action is taken within two weeks the risk of failure rises considerably.

Again this is not 100% accurate, but it doesn’t have to be. A late corrective work order tells us that performance could take a nose dive soon, so it provides us with an early warning system.
Another example could be a growing percentage of delay codes of some sort or other in work order reports. This could be warning us that there is a growing bottleneck in the process that is going to impact on our time to return to service.
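As flagged above, here is a minimal sketch of a standard Weibull calculator. It assumes scipy is available and that the recorded times are complete failures with no suspensions (real data usually needs more care than this); the figures are invented:

```python
# Fit a two-parameter Weibull distribution to times-to-failure (hours).
from scipy.stats import weibull_min

times_to_failure = [410, 620, 380, 870, 555, 720, 490]  # operating hours

# Fixing the location parameter at zero makes shape (beta) and scale (eta)
# directly interpretable.
beta, _, eta = weibull_min.fit(times_to_failure, floc=0)
print(f"Shape (beta): {beta:.2f}, scale (eta): {eta:.0f} hours")

# beta > 1 suggests wear-out (failure rate rising with age), the situation
# where end-of-life forecasting of this kind is meaningful.
```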

Conclusions: Although some of these are difficult at first, all of them are achievable using even modest systems available in today’s information marketplace. I hope this has been useful for you to shake up your thinking about how metrics can be used to predict performance.

However, another point I wanted to make is the vital importance of asset data to modern asset managers.

Our area produces reams and reams of data, and if we are going to manage physical assets effectively and efficiently then we need to tie into that resource to make high-confidence decisions regarding asset performance.