How to turn around failed data projects

Leopard have successfully taken on several data projects that were abandoned by a previous supplier or considered failures.

The first thing to ask is: What is a failed project? 

If you search the web for “definition of a failed project” you’ll hit thousands of opinions. These can generally be distilled down to a few criteria for that undesirable red RAG label:

  1. The project was not delivered according to plan, expectations or what was required
  2. Benefits were not realised (return on investment, for example)
  3. The project was terminated early by the customer due to a variety of factors such as untenable client-supplier relationship, bankruptcy, etc.
  4. Budget overrun
  5. Time overrun

Whilst we partially agree with all of the above, the budget and time criteria are a bit iffy. Although a budget and/or time overrun can have devastating effects, it generally changes the RAG status (red, amber or green) of how the project was implemented but may not have a negative impact on what was finally implemented. In fact, an increase in budget or time will often increase the likelihood of delivering something that is fit for purpose; there can be a direct relationship. Time and budget issues may affect return on investment if the overrun is very high, in which case the project could be a failure. The truth is that the same pass/fail criteria cannot be used for all projects. Each project is unique, and the pass/fail criteria should be established prior to commencement. The requirements should be clearly defined and should drive the commercial engagement terms.

Prior to commencement the pass/fail criteria should be established

We at Leopard believe there are five possible criteria that can be used to define, at the start, how the project will be judged as a success or as a failure:

  1. Fit for Purpose - what must the final deliverable look like for the “fit for purpose” flag to show green? This is clearly the most important criterion. If the thing doesn’t work, it’s of no use. This is where we disagree (strongly) with definitions that talk about the project being delivered according to the “requirements”: if the requirements were incorrectly defined, the final deliverable may have compromised usefulness. This is in itself a vast topic, but suffice it to say that the final deliverable must be fit for purpose.
  2. Durability - will users continue to use and trust the output for the period defined? Will they like using the system? You can’t judge a project purely at the start of the “business as usual” phase. It must be enduring.
  3. Scalability - as the number of users and/or quantity of data increases, will the system be scalable? If new components are added, will the system still function according to expectations?
  4. On budget - perhaps this is something that must be agreed at the start and cannot be viewed in isolation. Budgetary issues may need to be revisited during the implementation phase. Clearly any changes in the specification will have an impact on the budget and should be estimated and discussed before the change is implemented. Return on investment is important but sometimes difficult to measure.
  5. On time - again, this must be defined at the start. Many projects whose final deliverable functions effectively experienced some delays, so this criterion must be used with wisdom. Sometimes there are impacts on timescales that are outside the direct control of the parties involved.

Note: The criteria above overlap and can pull against each other. A project could be delivered exactly according to the specification, on time and to budget, even if the stakeholders know it will not be fit for purpose. Likewise, the team could overrun both time and budget to deliver something that serves its purpose well but with decreasing return on investment.
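
The five criteria above can be expressed as a simple checklist. As an illustration only - the field names and the red/amber/green rules below are our own assumptions for this sketch, and in practice would be agreed per project - a review might look like this:

```python
from dataclasses import dataclass

# Illustrative only: the criterion names and RAG rules here are
# assumptions for this sketch; real thresholds are agreed per project.
@dataclass
class ProjectReview:
    fit_for_purpose: bool
    durable: bool
    scalable: bool
    on_budget: bool
    on_time: bool

    def rag_status(self) -> str:
        # Fit for purpose is the overriding criterion: without it,
        # the project is red regardless of time and budget.
        if not self.fit_for_purpose:
            return "red"
        misses = [self.durable, self.scalable,
                  self.on_budget, self.on_time].count(False)
        if misses == 0:
            return "green"
        return "amber" if misses <= 2 else "red"

review = ProjectReview(fit_for_purpose=True, durable=True,
                       scalable=True, on_budget=False, on_time=False)
print(review.rag_status())  # amber: late and over budget, but usable
```

The point of the sketch is the precedence, not the code: a late, over-budget project that is fit for purpose can still land amber, while an on-time, on-budget one that isn’t fit for purpose is red.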

During the building phase the pass/fail criteria should be re-visited

During the building phase additional requirements or issues may arise, and the plan should not be so rigid that it allows no flexibility. Examples could be:

    • An additional benefit may have an impact on the specification that increases time and cost by 20% (pick a figure from the air) but increases the return on investment by, say, 50%
    • Data integrity issues may cause severe delays. The original estimate of their severity may have been wrong - simply a guess, because at the time the problem could not be measured
    • Additional requirements may be needed for the system to be fit for purpose - hmmm … controversial, but there is no point in delivering a system that isn’t fit for purpose

Why do data or business intelligence projects fail?

There are generally four things that derail a data or business intelligence project. First, the wrong technology was chosen for the needs of the business. Second, stakeholders and users were not effectively engaged in the development of the solution. Third, the solution design did not work for the business. Lastly, people don’t trust the output (which relates to both the second and third points). There are strong overlaps between these four, and the overlaps are areas that also require careful monitoring.

*The explanations that follow include vague references to actual projects. For obvious reasons we have chosen to keep the identities of the clients and the projects anonymous.

1) Technology Choices

1.1 - Making the right choice

When it comes to technology there is no one-size-fits-all. Every organisation is different, and virtually every organisation has already invested in some technology, which may drive choices. One of our projects was for Microsoft itself; clearly we didn’t recommend an Oracle product. BP, on the other hand, uses a variety of technology, and each project will differ in terms of the technology chosen. More recently the big data drive has caused more angst when it comes to software choice. Why the angst? Simple: fear of backing the wrong horse. Unlike business intelligence, which is in a pretty mature state, big data still has many new faces appearing, and the big names are almost frantic to get ahead of the pack.

One of the failed projects we inherited was amazing: the vendor built an early prototype using Access 2000 with a VB 6 front end (this was in 2011). At the time, both products were already out of support. The intention was to deploy this solution to several hundred users in a global organisation - it was never going to happen.

Some projects have failed over the long haul when the technology used was discontinued. In the “good old days” a database compiler called “Clipper” was outstanding. Who could have known it would receive the kiss of death when it was bought by a larger company?

When it comes down to it, technology is the easiest thing to get right. Although choices can be confusing, the modern approach of separating the layers makes the “wrong choice” less of an issue. Any respectable company will allow potential customers to trial their product. This process, while time- and resource-consuming, helps thin out the list of ‘correct’ technologies for a business fairly quickly. Of course, some elements of the solution should use tried and trusted players (e.g. Microsoft SQL Server or Oracle as the back end for relational data, Hadoop for big data).

1.2 - How do you turn a bad choice around?

In general it’s important to get the basic foundation right. In the case of a data project this includes things like your data engine of choice. If you get this wrong, the cost to get it right will depend on a number of factors. For example, let’s say you chose some “unknown” data engine and bought a bunch of ETL tools, DI tools etc. You may find the cost of moving to one of the popular engines to be acceptable given availability of support, skills etc.

The bottom line: it depends on the cost and your budget. In general, you can’t get refunds or sell your licenses.

In the case where Access 2000 and VB6 were chosen in 2011, it was a no-brainer: change to MSSQL with a variety of front ends. The organisation already had MSSQL installed, and its people were comfortable and skilled with the technology.

2) People and Teams

2.1 - Effective engagement and teamwork

This is where it gets much more interesting. Effective engagement has an impact on so many things, a few being:

    • Solution design
    • Data trust
    • System ownership
    • Data ownership
    • Evangelism of the system

We can hear what you’re thinking: surely this is the business analyst’s and the project manager’s job? Of course it is. It’s also the technical delivery manager’s job. It’s each team leader’s job. Only through thorough engagement with the future users will you guarantee their trust in the solution and thus avoid roadblocks further down the line.

2.2 - Why do we need people to have ownership?

    • To build the right solution
    • To help with the change management
    • To trust the output
    • To own the input
    • To trust the input
    • To evangelise the solution
    • To use the system
    • To become SuperUsers

The bottom line is that when people-engagement is done badly or not at all, sheer luck will be required to build the right solution. Even then, essential people may not feel they have ownership.

2.3 - Project Management

Something must be said about project management tools and methodologies. They can assist, but they won’t turn a bad project manager into a good one. Getting this right is a skill you’re either born with or have acquired. We’ve seen project managers with all sorts of PRINCE2 credentials get it wrong.

2.4 - The Engagement Plan

Potentially the biggest mistake made when drawing up the engagement plan is the lack of engagement planning itself. The second mistake is considering only end users of the system in the plan. This means that critical people and teams (e.g. contributors such as long-term data owners or providers) are not part of the solution, so the solution design will have holes (e.g. it will not earn trust). This is why there must be an engagement plan in the first place!

2.5 - How do you turn a sub-optimal engagement plan around?

Due to politics this may have to be worded carefully, but the bottom line is that a “phase 2” engagement plan should be drawn up and executed. Learn from the mistakes, include all relevant stakeholders in the plan, make the plan available.

2.6 - Training and Rollout Plan

Another key element for success. This must fit in with the organisation’s doctrine or “ways of working”.

3) Solution Design

3.1 - Bad Solution Design

This one is simple: design it badly and suffer the consequences:

    • People don’t like to use it, usability is important
    • They don’t trust it
    • The don’t look after the data (i.e. garbage in garbage out)
    • Even if it worked pretty well at the start, it soon becomes shelved
    • The solution might not work at all
    • “Fit for purpose” = fail

The problem with solution design is that it requires experience, gut feel and savvy. The myriad tools that help design something may be useful, but they don’t turn a bad solution architect into a good one. You either have it or you don’t. Experience of lots of projects also helps, both successful and failed ones.

Note: the architect may not be at fault for the original design; there could be a number of factors contributing to the problem (lack of knowledge at the time, budgetary constraints, etc.)

3.2 - How do you turn a bad solution design around? 

The answer is simple: get someone who knows what they are doing. Engage experts to check the solution design, a bit like getting a second opinion. Hopefully the architect will be able to work with the bad solution and shape it into something better suited to the business’s requirements. In the worst case, they have to start again from the ground up - or the project is scrapped because the resources are no longer available.

4) Trust in the Data/System - DATA QUALITY

4.1 - Why is trust so important?

This is, in our experience, one of the biggest factors contributing to the success or failure of a project. It’s also seldom spoken about. Sometimes people talk about “data integrity”, but in reality this is only one of several facets of overall trust in the system. Every single BI system we have implemented over many years has been affected by trust. There are two areas of concern:

    • Integrity of the inputs and mapping (linkages)
    • The black-box area

It goes without saying that if the inputs are bad, the integrity of the outputs will be adversely affected and people will soon learn to distrust the information. The problem is that data follows the usual path to disorder. If “left to itself”, it tends to a state of chaos. In our experience, it is essential for people to be responsible for data. In essence:

    • Data must be owned by someone
    • Their job should depend on the quality and availability of the data
    • They should be provided with the right tools to monitor and care for their data
    • Data integrity should, where possible, be measurable and trended over time to clearly show whether it is improving or degrading
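
As a minimal sketch of what “measurable and trended” could look like - the completeness metric, field names and comparison windows here are illustrative assumptions, not a prescription - a data owner might track field completeness per load and flag degradation:

```python
# Illustrative sketch: completeness (share of non-missing values) is just
# one possible integrity metric; thresholds and windows are assumptions.
def completeness(records, field):
    """Fraction of records where `field` is present and non-empty."""
    if not records:
        return 0.0
    filled = sum(1 for r in records if r.get(field) not in (None, ""))
    return filled / len(records)

def trend(history):
    """Compare the average of the last two measurements against the
    two before them to call the trend."""
    if len(history) < 4:
        return "stable"
    earlier = sum(history[-4:-2]) / 2
    recent = sum(history[-2:]) / 2
    if recent > earlier + 0.01:
        return "improving"
    if recent < earlier - 0.01:
        return "degrading"
    return "stable"

# Four successive data loads, checked for a hypothetical cost_centre field
loads = [
    [{"cost_centre": "CC1"}, {"cost_centre": "CC2"}],
    [{"cost_centre": "CC1"}, {"cost_centre": ""}],
    [{"cost_centre": ""}, {"cost_centre": ""}],
    [{"cost_centre": "CC9"}, {"cost_centre": ""}],
]
history = [completeness(batch, "cost_centre") for batch in loads]
print(history)         # [1.0, 0.5, 0.0, 0.5]
print(trend(history))  # degrading
```

A chart of exactly this kind of series, owned by a named person, is what makes “is our data getting better or worse?” an answerable question rather than a matter of opinion.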

4.2 - What is the black-box?

The black-box issue arises when data is transformed into useful information and the end user is not able to verify or understand the conversion process. This becomes particularly acute when:

    • The user doesn't like what he/she sees (especially if the picture painted differs from what was hoped or expected)
    • Data changes without effective explanation
    • Errors are corrected and the final output changes

Clearly both the inputs (data integrity) and the transformation and aggregation of data (the black box) are affected by the solution design, the engagement with people and teams, and the technologies used to mitigate these risks.

For example, we were required to design a solution that performed very complex calculations on data and then aggregated it before it could be deemed useful information. The organisation in question used Excel extensively, so we designed a report that would take one of the many data components and show how it all hung together. In this particular system there was a series of data integrity reports that built on each other; together with the Excel output, users could open up the black box. They could challenge the calculations and aggregations themselves, which promoted healthy dialogue and increased trust in the information being presented. Even when they didn’t like what they saw, they still accepted the output. In another system we designed, there was neither time nor budget to create similar “trust tools”. The result: the system is no longer in use - killed, in this case, by both data integrity and black-box issues.
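
A minimal sketch of one such “trust tool” - the regions and figures are invented for illustration - is a reconciliation check that ties the aggregated output back to the raw inputs, so users can see for themselves that the black box balances:

```python
# Illustrative reconciliation sketch: the aggregation is a simple sum per
# region; a real system would publish a check like this per calculation step.
raw_rows = [
    {"region": "North", "value": 120.0},
    {"region": "North", "value": 80.0},
    {"region": "South", "value": 200.0},
]

def aggregate(rows):
    """Sum values per region - stands in for the 'black box'."""
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["value"]
    return totals

def reconcile(rows, totals):
    """Confirm the aggregated output accounts for every input value."""
    input_total = sum(r["value"] for r in rows)
    output_total = sum(totals.values())
    return abs(input_total - output_total) < 1e-9

totals = aggregate(raw_rows)
print(totals)                       # {'North': 200.0, 'South': 200.0}
print(reconcile(raw_rows, totals))  # True
```

Publishing the reconciliation alongside the output is what turns “trust us” into “check us”.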

4.3 - How do you turn a trust issue around?

The good news is this is generally easily achievable if budget is available. The approach is three-fold:

    • Engage with data owners and ensure they have the tools to look after their data
    • Ensure all systems are in place to make their job depend on the data (if possible)
    • Mitigate black-box effect by providing users with the ability to follow data end-to-end and to understand how the aggregations and transformations are performed

Clearly each of the points above include aspects of technology, people and teams and solution design.
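
As a sketch of the third point - the step names and rates below are invented for illustration - a pipeline can record each transformation alongside its result, so a user can follow a figure end-to-end instead of facing a black box:

```python
# Illustrative lineage sketch: each pipeline step records what it did,
# so the final figure can be explained rather than merely presented.
def run_pipeline(value, steps):
    """Apply each (description, function) step, keeping an audit trail."""
    trail = [("input", value)]
    for description, fn in steps:
        value = fn(value)
        trail.append((description, value))
    return value, trail

steps = [
    ("convert to GBP at 0.8", lambda v: v * 0.8),
    ("deduct 5% handling fee", lambda v: v * 0.95),
    ("round to whole pounds", lambda v: round(v)),
]
result, trail = run_pipeline(1000.0, steps)
for description, value in trail:
    print(f"{description}: {value}")
print("final:", result)  # final: 760
```

When a user asks “where did 760 come from?”, the trail answers the question, and the black-box objection never gets a foothold.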

Final thoughts and comments

When all is said and done, there are a number of key people who help to make a data project a success. A few of these deserve a mention:

    • The Chief Solution Architect is key to all four pillars (technology, people and teams, solution and trust). The person/s filling this role will require experience and savvy. There is no course for this, and there really is no shortcut. They either have it or they don’t.
    • The Project Manager is also key, and in the case of most data projects the project manager needs to have strong technical skills. A project manager who openly admits to not understanding the technical side of things might not be appropriate for a data project.

Project management methods, if effective, help to ensure a smooth project delivery. However, they must be considered a tool only. This statement may be controversial: in our experience, very few project managers are actually effective. Many of them have risen through the ranks in development teams and have eventually moved or been promoted to a position for which they are not suited.

Finally, there are other issues and facets of a project to be considered such as:

    • Doctrine (ways of working)
    • Personnel
    • Finances
    • SOP (Standard Operating Procedures)
    • etc

These will all need to be considered, but they can form part of the four pillars covered in this article.
