As of June last year there were 14 large projects being monitored by the SSC, all of which have an indecent possibility of failure. This figure is likely to have risen since.
But the SSC’s monitoring duties seem poorly founded and poorly designed. They are the direct result of the recommendations of the ministerial inquiry into the failure of the INCIS project. The entire inquiry was highly controversial, and its findings no less so — it bucked the industry trend that says technology is rarely to blame in project failure.
The projects at highest risk of becoming trainwrecks are the big ones. The industry in general now shuns monolithic IT projects, because they’re too dangerous. INCIS was just one recent example, costing the taxpayer around $83 million, and Landonline’s coughing fit last year nearly made it another.
But, you argue, sometimes it’s necessary to undertake a large project — not all projects are actually small. To this I would say bullshit, I’m afraid. I can’t think of any IT projects which couldn’t be broken down into a number of smaller projects, each with a lower associated risk. All that is required is creativity, and the knowledge that to do otherwise is to invite failure, shame and dishonour.
Rather than recommend some research into the causes of project failure, a bunch of obviously unqualified people sat around and decided that big projects were okay as long as you watched them. Why do I think that the members of the review were unqualified? Because they didn’t have as their first, and possibly only, recommendation, “Don’t do big projects, the risk is far too high.”
Of course, the SSC publishes a set of guidelines for managing major IT projects. But because these are guidelines, not rules, departments are free to ignore any part of them they don’t like. The fact that at least 14 projects are being monitored suggests the guidelines are being taken seriously, but the first thing they should state is that no project bigger than a couple of million dollars will be considered, and that anything costing over a million dollars will be monitored. Monitoring should check that the project actively rejects anything without proven value, and that it is run to the best of the team’s ability at all times.
Doing its best will include getting a customer group onto the project team. Yes, it’s hard, but if your project is worth $15 million it’s a small price to pay to reduce that industry-standard 75% risk of failure. Rather than describe all the low-level requirements in detail up front, gather an overview from the on-site customer and simply ask for more information as it’s needed.
Up-to-date information enables us to start building the software that much sooner and reduce the chances of miscommunication. The high-level overview would still allow us to plan and suggest costs. We can leave decisions until the last possible minute. If I design something now, in six months’ time when I implement it I’ll probably find that it’s no longer relevant. Design in extreme programming is done in small, granular steps, mostly involving the documentation of design constraints and interface details, and is the most rigorous approach to design in the industry today.
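To make that concrete, here’s a minimal sketch of what “design in small, granular steps” can look like in practice. All the names here (TitleStore, InMemoryTitleStore) are hypothetical, invented for illustration: the only design decided up front is the interface and its constraints, while the storage backend is deliberately deferred until it actually matters.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch: the "design" is just the interface and its
# documented constraints; implementation detail is deferred.
class TitleStore(ABC):
    """Stores land-title records.

    Design constraints (all that's decided up front):
    - records are looked up by a unique title number
    - the storage backend is left open until we need to choose one
    """

    @abstractmethod
    def save(self, title_number: str, record: dict) -> None: ...

    @abstractmethod
    def load(self, title_number: str) -> dict: ...


# A throwaway in-memory implementation keeps work moving; the real
# backend decision is left until the last responsible moment.
class InMemoryTitleStore(TitleStore):
    def __init__(self) -> None:
        self._records: dict[str, dict] = {}

    def save(self, title_number: str, record: dict) -> None:
        self._records[title_number] = record

    def load(self, title_number: str) -> dict:
        return self._records[title_number]
```

If, six months on, the backend needs to be a database rather than memory, only the implementation class changes; everything written against the interface keeps working.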
Yet another good thing would be 100% automated acceptance testing for all software. I’ve already run into one situation where this simply isn’t practical, at the Met Service. There may be others, but they’re very rare.
These steps would remove a lot of risk from big projects. Why aren’t they being taken? State Services Commissioner Michael Wintringham stated in the SSC’s 2001 annual report that one of the important values for a public servant is “careful stewardship of resources”. I’m at a loss to see how investing in very large IT projects demonstrates this value.