Doing more with the same or fewer resources is the mantra for everyone these days, and supply chain is in the same boat. The world over, there is enormous pressure to improve the supply chain performance of organizations, and large investments have been earmarked for deploying state-of-the-art, or so-called best-of-breed, systems to measure supply chain performance and efficiency in real time. These investments continue in spite of the ongoing recession, a low-demand phase in which revenues have shrunk and margins have taken a nosedive. The only reason is that organizations believe there is enough room for improvement in this area and that these investments will help them realize long-term benefits.
All these big-ticket IT systems largely work on the principle of improving processes, automating transactions, reducing cycle times and improving collaboration between departments of the organization as well as with suppliers and customers. Improvement targets are fixed after measuring the as-is values of the chosen SCM metrics, and then a time-bound plan with a well-defined strategy is put in place to achieve best-in-class supply chain performance.
This post focuses on the last part, where these systems measure the SCM metrics and a plan is then made to improve them. We spend huge sums on deploying these systems, but the focus on the quality of data is rarely on a similar scale. It is often seen that an SCM metric had a wrong as-is value because the underlying data quality was poor. What happens as a result is obvious: the targets end up either too conservative or too aggressive. Both situations defeat the purpose of the whole exercise.
Many companies tend to overestimate the quality of their data; very few have any metrics in place to measure critical data elements and, even when they do measure, they seldom look at how well the data meets the needs of the user. It is time that data quality, across the entire extended supply chain, gets the respect it deserves. Understanding data quality through performance metrics is critical to developing a comprehensive plan to improve data quality and then maintain it. The saying "you can't improve what you don't measure" is as true for data quality as for any other process or operation.
Asking users about the quality of the data is a good first step towards measuring it, and it will surely give us some pointers. But against what factors should data quality be measured? Are there metrics that help us benchmark our data quality? To answer this, let me offer my opinion: to judge whether data is correct, and how correct, we need to focus on the following factors:
- Data accuracy
- Data consistency
- Completeness of data
- Validity of data
Data accuracy is the percentage of correct records, in other words, records without any errors. Data consistency is the percentage of records whose data attributes are identical across databases, without duplication. Completeness of data is the percentage of records that have values in all required data attributes, and validity of data is the percentage of records whose data attributes meet the field requirements and have not yet expired in terms of age.
Each of these simple ratio metrics can be calculated from a sample of records in a database, for example the part master or vendor master, as the sketch below illustrates.
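As a rough illustration, here is a minimal Python sketch of how the four ratios might be computed over a small sample of part master records. The field names (part_id, description, uom, valid_until), the error flag, and the allowed unit-of-measure list are assumptions made up for this example, not a prescribed schema, and consistency is simplified here to deduplication within the sample rather than a full cross-database comparison.

```python
from datetime import date

# Hypothetical sample of part master records, for illustration only
records = [
    {"part_id": "P-100", "description": "Bearing", "uom": "EA", "valid_until": date(2030, 1, 1), "error": False},
    {"part_id": "P-101", "description": "",        "uom": "EA", "valid_until": date(2030, 1, 1), "error": False},
    {"part_id": "P-100", "description": "Bearing", "uom": "EA", "valid_until": date(2030, 1, 1), "error": False},  # duplicate
    {"part_id": "P-102", "description": "Gasket",  "uom": "XX", "valid_until": date(2020, 1, 1), "error": True},
]

required_fields = ["part_id", "description", "uom"]   # assumed mandatory attributes
allowed_uoms = {"EA", "KG", "M"}                      # assumed field requirement

total = len(records)

# Accuracy: share of records flagged as error-free
accuracy = sum(not r["error"] for r in records) / total

# Consistency (simplified): share of unique records, i.e. no duplication in the sample
consistency = len({r["part_id"] for r in records}) / total

# Completeness: share of records with all required attributes populated
completeness = sum(all(r[f] for f in required_fields) for r in records) / total

# Validity: share of records meeting field requirements and not expired
validity = sum(r["uom"] in allowed_uoms and r["valid_until"] >= date.today()
               for r in records) / total

print(f"Accuracy {accuracy:.0%}, Consistency {consistency:.0%}, "
      f"Completeness {completeness:.0%}, Validity {validity:.0%}")
```

In practice the sample would be drawn from the live master data and the validation rules would come from the business's own field definitions, but the structure of the calculation stays the same.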
To make the metrics more meaningful, the four key measures can be combined into a Data Quality Index (DQI) that gives a realistic view of how a company's data quality affects the user. The approach is similar to the way manufacturers have traditionally measured First Pass Yield in a production factory, where each production process is measured and the total "yield", or fallout, of the entire process is calculated as an index. Some of you will also recognize the similarity to the Perfect Order Index; applying the same concept to data quality is reasonable.
Since all four factors used to calculate the Data Quality Index are percentages, the index is the product of all of them and will therefore be lower than any of the individual values. Even if each factor independently stands at 90%, the DQI will still be only 0.90 × 0.90 × 0.90 × 0.90 = 65.61%, which tells us how much improvement we need in data quality before going in for large SCM system investments.
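Continuing the sketch above, the DQI calculation itself is just a product of the four factor scores. The 0.90 values below mirror the 90% example in the text; in practice they would come from the measured ratios.

```python
# Minimal sketch: Data Quality Index as the product of the four factor scores.
factors = {"accuracy": 0.90, "consistency": 0.90, "completeness": 0.90, "validity": 0.90}

dqi = 1.0
for score in factors.values():
    dqi *= score

print(f"DQI = {dqi:.2%}")   # prints: DQI = 65.61%
```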
Data quality metrics remove the emotion, guesswork, and politics from the equation, providing a factual basis on which to justify, focus, and monitor improvement efforts. As an added benefit, a commitment to a measurement program sends a clear signal to the organization that data quality is important to the company.
The goal is to track enough metrics to clearly understand the true condition of data quality relative to the business requirements, while ensuring that measuring and reporting can be done in a timely and cost-effective manner. The volumes and types of data, as well as the availability of suitable tools, will dictate how a company executes its data quality metrics.
Once the data quality exercise is done and we are sure that our DQI is at an acceptable level, it is time to take the next big step of deploying the SCM system to improve the performance of the supply chain.