"Write that down," the King said to the jury, and the jury eagerly wrote down all three dates on their slates, and then added them up, and reduced the answer to shillings and pence. Lewis Carroll - Alice's Adventures in Wonderland As CEOs strive to align business and IT strategies to make IT a competitive advantage, executives everywhere are starting to ask probing questions about their IT shops' performance. But with the need to deliver a business case for every IT acquisition and prove value for money at every turn has come a growing realisation that when it comes to performance metrics, all numerics are far from equal.
Like the hapless jurors in Alice in Wonderland, some IT managers are working with business measures that end up applying numbers and inappropriate units to entirely the wrong concepts. And it matters. As US InfoWorld columnist Bob Lewis points out, as soon as you establish a set of performance and quality measures, you can bet your bottom dollar employees will pull out all the stops to move those measures in the indicated direction.
And like well-meaning lemmings, in doing so they may well end up driving your organisation off a cliff. Lewis, a Minneapolis-based consultant with Perot Systems, says all too many business measures simply apply precise numbers and inappropriate units to the wrong concepts. That can be at least as bad as not measuring at all. "Imagine a lawn-mower manufacturer that has decided to reduce the number of defects.
Up goes a chart showing the percentage of defect-free mowers shipped each week. Week after week, the percentage edges higher as employees drive down the number of defects. That's good, isn't it?" Lewis asks. "Not really. As it turns out, the mowers exhibit two different kinds of defects: poor paint jobs and defective blades. Whereas the paint jobs are relatively harmless, the blades shatter, resulting in the amputation of customers' feet. Employees can fix the paint problem easily, so they pay close attention to it and ignore the other problem." Measurement can be a powerful tool in the IS management toolbox. Use it well and performance will improve. Use it poorly and only the measure will improve.
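Lewis's lawn-mower story can be made concrete with a toy calculation. The sketch below uses entirely invented weekly figures and severity weights to show how a raw "percent defect-free" chart improves while a severity-weighted score, which penalises the dangerous blade defects far more heavily than the cosmetic paint ones, barely moves:

```python
# Hypothetical illustration of the lawn-mower example: two defect types,
# one harmless (paint), one dangerous (blades). All figures are invented.

# (units_shipped, paint_defects, blade_defects) per week
weeks = [
    (1000, 80, 10),  # week 1
    (1000, 20, 10),  # week 2: paint problem fixed, blade problem ignored
]

SEVERITY = {"paint": 1, "blade": 100}  # assumed severity weights

for i, (shipped, paint, blade) in enumerate(weeks, start=1):
    defect_free_pct = 100 * (shipped - paint - blade) / shipped
    weighted_score = paint * SEVERITY["paint"] + blade * SEVERITY["blade"]
    print(f"week {i}: {defect_free_pct:.1f}% defect-free, "
          f"weighted defect score {weighted_score}")
```

The defect-free percentage climbs from 91% to 97%, yet the weighted score falls only from 1080 to 1020: the measure improved, the mowers did not.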
Equally wasteful, if somewhat less dangerous, is the tendency to expend so many resources on doing the metrics that there are none left for follow-up.
According to Penny Bannister, senior manager and Asia Pacific co-ordinator for World Class IT with Nolan, Norton & Co, too many organisations fall into the trap of benchmarking for the sake of the exercise alone, leaving the results gathering dust on a shelf.
Bannister says if you want your performance measures to translate into performance improvements, you should start with an assessment of why you are measuring. "We would say that before you even think about [doing] it you should start talking to the business, finding out what the IT objectives are, why you are measuring and which bits you should measure. And make sure you set goals," she says. Nor is it enough to measure only the technical side of performance: for instance, productivity measures like the number of hours it takes to produce or develop a system, or delivery measures such as the effort spent and the amount of elapsed time taken to produce a system.
In today's environment the information systems manager certainly needs the ability to explain the impact of new tools and techniques in terms of delivering software more efficiently and with higher quality. Function Point Analysis undeniably furnishes the basis for providing this information. But your IT shop cannot live by function points alone. Function points will only tell you how efficient your IT shop is. It is like asking: "How fast is your car?" You must also measure effectiveness: "Can you drive your car?" "Effectiveness is best judged by the business users in combination with IT staff, and for that we use a questionnaire-based technique," Bannister says. To measure effectiveness, Bannister says, you should poll business users about the functional quality being delivered by applications, and IT staff about their technical quality. It is in comparing the two, she says, that significant issues can be highlighted. Such effectiveness measurement is very difficult, Bannister says, but even combined with efficiency measures it is not enough.
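The comparison Bannister describes, business users rating functional quality against IT staff rating technical quality, can be sketched as a simple gap analysis. The application names, scores and threshold below are all invented for illustration:

```python
# Sketch of the perception-gap comparison: business users rate each
# application's functional quality, IT staff rate its technical quality,
# and a large gap in either direction flags a significant issue.
# All names, scores (1-5 scale) and the threshold are assumptions.

user_scores = {"payroll": 4.5, "inventory": 2.1, "ordering": 3.8}  # business users
it_scores   = {"payroll": 4.4, "inventory": 4.2, "ordering": 2.0}  # IT staff

GAP_THRESHOLD = 1.5  # assumed cut-off for "worth investigating"

for app in user_scores:
    gap = it_scores[app] - user_scores[app]
    if abs(gap) >= GAP_THRESHOLD:
        side = "users rate it worse" if gap > 0 else "IT rates it worse"
        print(f"{app}: gap {gap:+.1f} ({side})")
```

Here "inventory" would surface as technically sound but failing its users, while "ordering" carries technical problems the business has not yet felt, exactly the kind of issue the comparison is meant to highlight.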
"The third element is business value," Bannister says. "An IT organisation may discover many opportunities for improvements in efficiency and effectiveness.
However they need to concentrate on improving those areas that are going to deliver the most business value. [To assess that] we would assess the strategic alignment of the business and IT, the organisational alignment and the prioritisation process within IT. "Assessing business value, the business mapping, is like asking: 'Are you driving your car in the right direction?'" Bannister says.
"Many times people will just look at efficiency, which I don't think really helps an awful lot. When you put the three together then you are taking into account your business objectives and which bits need to be efficient and which bits don't need to be efficient. It is very expensive to up the performance of a whole IT department, so you should make sure you are upping the performance of the bits that are actually adding value to your business." Nolan Norton & Co insists benchmarking and performance measurement are always most useful when business-focused. "With many of the effectiveness and efficiency assessments we carry out we like to have a business as well as an IT sponsor, as effectiveness is best judged by the business users," says Bannister.
Or as Tony Talbot, an IT manager with extensive experience in this area, puts it: "Perception is reality. If the perception of people out there is that IT is not giving them a good service, then it will be outsourced, it will be censured, or it won't be seen in the light that it needs to be seen in, as a good business partner within the company." Talbot has just moved on from a position as South Pacific IT manager for a leading manufacturer.
During his time with the company he completed a series of customer service surveys designed to help IT "understand the voice of the customer". "We need to take the time - and it doesn't take too much time if it's set up properly - to really understand what the customers think of us," says Talbot. "We judge ourselves on four quadrants: financially, operationally, projects and internal customer service. The other ones we can do very well, and that's pretty much the nature of things in most IT shops. But the fourth one is usually handled very badly, if at all, and in my perception it's the fourth one that will kill you every time." The internal survey series took the form of a detailed questionnaire distributed to all staff with access to IT. The results were input into an Excel spreadsheet to provide some averages, then entered into a Quality Function Deployment model.
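The first spreadsheet step Talbot describes, averaging questionnaire responses per question before they go into the Quality Function Deployment model, is easy to sketch. The question names and scores below are invented:

```python
# Minimal sketch of averaging internal customer survey responses per
# question (the "Excel spreadsheet" step) before QFD analysis.
# Question names and 1-5 scores are invented for illustration.

responses = [
    {"helpdesk_speed": 3, "training": 1, "system_uptime": 4},
    {"helpdesk_speed": 4, "training": 2, "system_uptime": 5},
    {"helpdesk_speed": 2, "training": 1, "system_uptime": 4},
]

averages = {}
for survey in responses:
    for question, score in survey.items():
        averages.setdefault(question, []).append(score)

for question, scores in sorted(averages.items()):
    print(f"{question}: {sum(scores) / len(scores):.2f}")
```

Even this crude tally would surface the kind of outlier Talbot's surveys found: a "training" score far below the others, pointing at a problem worth investigating, whoever owns it.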
A think tank internal focus group was set up to examine the results. Talbot says the process, begun in 1994 and continued over the two years since, has not only helped IT find ways to improve life for internal users, but also uncovered a major discrepancy between reality and perception which was lowering the status of IT in the eyes of many users. It seems the IT department was being blamed for poor IT training, despite the fact that the training function actually reported not to IT, but to Human Resources.
"So the perception of our internal customers downgraded our overall IT perception score because of another department," says Talbot. The manufacturer is now considering setting up a National Training Mechanism to overcome the difficulty, and Talbot is a firm advocate of the customer survey process. "It's amazing the number of simple things we can do to make life easier for our customers. If you have 400 internal customers, and you can do something simple that makes life easier for them, that effect is multiplied 400 times. It becomes a remarkable effect," he says.
Analytical assets

To Dean Bond, information services development manager with F.H. Faulding & Co, the only meaningful metrics assess value for money to the business. Hence all his department's measures are concentrated on Service Delivery Levels according to business unit specifications.
"They nominate what they wish to do, and their strategy, then the only thing we can do is match that strategy with ours. If they don't provide a strategy, then we might as well not be here," Bond says.
"We try and get the business companies to come up with that list of projects.
If they don't want it, we don't do it, and if it can be done cheaper by somebody else then we would seriously look at outsourcing it." In Bond's eyes his analysts, not all of them from an IT background, are his best asset. "So far as performance goes, if we do a service deal with our customer and they agree to that and sign off on it, then the maximum we can do after that is do it cheaper, or faster. If they sign off on a spec, we do exactly that spec." The other critical factor for Bond is risk assessment. Risk assessment starts with agreed objectives for the project, so that the expectations are the same from both sides. "We have to know all the constraints, whether they are political or not, and we have to then do a risk assessment. The time line is not so important. We can set a time line, and as long as it is reasonable it will be met, because every time there is a problem it will be handled by the risk assessment," says Bond. "In our experience, if you have got those three things, the rest of the project is easy."

Getting Started

Many organisations prefer to call on consultants to help with IT performance measures, and many of those consultants agree that while business value is the true measure, it is perfectly possible to start small. Mining and manufacturing company CRA, a relative newcomer to IT performance measures, has never before had the ability to measure the costs, effectiveness and efficiency of the IT function in general.
Information systems and technology general manager Pat Fean says as the organisation begins to assess IT performance, it is tying the work very closely into the organisational culture and performance reward system.
"We are running a pilot with Compass Research at the moment with four of our operating sites. It is a broad Compass Analysis project which is really based on effectiveness and efficiency and which covers mostly the quantitative rather than the qualitative thing - the easier to measure elements rather than the fuzzy bits," he says.
The next step will be to link those measures with some "value components" which would not necessarily be available from such a project. Fean says the project is already giving indications that have made it possible to compare business units internally. "They also give us a chance to look at the broader Best Practice from a wider perspective and an industry segment at the same time." Although the project is very much in its early stages, Fean is confident it will eventually return real business value.
"So far the view from the pilot project sites is that their record keeping isn't very flash, so it has forced them to look at that more closely, and our aim is to propagate this system across the groups as a whole.
"Because it's been done in conjunction with the business units I guess its got some credibility in its own right, and while it may not be comprehensive enough to say we've got a complete benchmark, it's certainly a fair step for an organisation like ours." Like CRA, many organisations find value in calling on the resources of a large international consultancy with a massive database of comparative information against which they can measure themselves. But as Rawdon Simon, a director with Compass Analysis points out, such IT efficiency studies are reasonably simple because they are reasonably industry independent, since they effectively measure one IT shop against another IT shop.
Much trickier are the highly involved effectiveness studies that consider business value. Such studies are necessarily industry dependent, Simon says, since the only meaningful comparisons that can be made are against other businesses in the same line. "These studies are, I would have to say, about five to 10-fold more involved than our normal IT efficiency studies, because we measure every component of the business. And one example of the sorts of interesting things which come out of these studies is how organisations which have put a lot of emphasis on IT are able to minimise the people resources within the business components." Simon says such studies are incredibly complicated, involving as they do an extremely detailed examination of every cost ratio across the business, but extremely worthwhile. "Using our methodology, which is really a comparative analysis of the ratios with like organisations, noting all their differences and analysing why the differences, we believe we would dramatically reduce the operating costs of an organisation," says Simon.
"We would be able to advise, for instance, which processes they are performing which need boosting and into which they could introduce efficiencies or eliminate altogether. Just as when we do an IT efficiency study, where we typically find opportunities for organisations to reduce their costs by 10 to 20 per cent of their IT budget, when it comes to business we find we are able to find similar opportunities except that the 10 to 20 per cent is across the entire operating cost of the organisation." And that may be just what IT shops may have to help their organisations achieve in future, if they are not to be accused of being as ineffective as the jurors in Alice in Wonderland at applying numerical analyses to the right business concepts. v