The most common misconception when measuring the success of a data project is to say that it should be developed/delivered according to specification. All too often the specification is at best incomplete and at worst simply wrong.
Listed below are five key criteria to consider when measuring the success or failure of a data project:
- Fit for Purpose – what must the final deliverable look like for the “fit for purpose” flag to show green? This is clearly the most important criterion. If the thing doesn’t work, it’s of no use. This is where we disagree (strongly) with some of the definitions that talk about the project being delivered according to the “spec”. If the requirements specification was incorrectly defined, the usefulness of the final deliverable may be compromised. This in itself is a vast topic, but suffice it to say that the final deliverable must be something that is fit for purpose.
- Durability – will users continue to use and trust the output for the period defined? Will they like using the system? You can’t judge a project purely at the start of the “business as usual” phase. It must be enduring. A big factor affecting durability is trust, and trust is in turn affected by data quality (sometimes referred to as Data Integrity). Although trust overlaps strongly with the first criterion (fit for purpose), it’s important enough to hold its own space on this blog. In my opinion, trust is one of the biggest issues affecting the durability of a system.
- Scalability – as the number of users and/or quantity of data increases, will the system be scalable? If new components are added, will the system still function according to expectations?
- According to budget – this is something that must be agreed at the start and cannot be viewed in isolation. Budgetary issues may need to be revisited during the implementation phase. Clearly, any change in the specification will have an impact on the budget and should be estimated and discussed before the change is implemented. Return on investment is important but sometimes difficult to measure.
- On time – again, this must be defined at the start. Many projects whose final deliverable functions effectively still experienced delays, so this criterion must be applied with wisdom. Sometimes there are impacts on timescales that are outside the direct control of the parties involved.
I’m not saying these are the ONLY criteria to include in your evaluation. They are, however, pretty important in shattering the reliance on specification alone.