The holy grail of requirements-gathering is getting the functional requirements right first time – something almost never fully achieved and the subject of many other articles.
The other “ities” – quality attributes such as usability, reliability and maintainability – are often nowhere near as straightforward to tie down, and inevitably a series of trade-offs must be made between them. How should these trade-offs be approached, and what is their impact on the software engineering project?
The biggest risk with these so-called non-functional requirements (probably better referred to as quality requirements) is the frequent situation where they are simply not stated, and often not even thought about. End user stakeholders have a set of expectations that are probably not even articulated in their own minds, let alone explained to the poor analyst or designer condemned to extracting them. These will often include security, performance, usability and other visible and invisible features of the required system.
Key stakeholders in any system, and some of their expectations, include:
Stakeholder: Development management
Expectation: Low cost to develop, utilise manpower efficiently, keep people employed
Stakeholder: Maintenance management
Expectation: Modifiability, ease of enhancement
Stakeholder: Marketing
Expectation: Short time to market, rich feature set, low cost, matching the competition
Stakeholder: Legal and regulatory bodies
Expectation: Meet OSH, IRD, licensing and other legal constraints
Stakeholder: End users
Expectation: Behaviour, performance, security, reliability
Stakeholder: Customer
Expectation: Low cost, on-time delivery, stability, user satisfaction, return on investment
Notice the distinction between the customer and the end user. The customer pays for the system and has overriding decision-making authority on all aspects of the project; the end users are just that – the eventual users of the system who will be at the coalface.
There are a number of different end users in any system, from management who will receive reports in various formats, to the operational staff who hit the keys, to the operational teams who will install and distribute or configure the software.
Realistically the customer will often be more concerned with the cost of the system than any other attribute, and will be prepared to sacrifice other attributes to achieve a target cost figure. This will often happen despite cries of “foul” from the users, sometimes without their knowledge, and can result in the system being blamed for ongoing operational problems.
In many projects, the customer will take the delivery of functional requirements for granted, and will often identify quality requirements as the key success criteria in a vague and loose manner. For example, “Information out of the system must be fast, accurate, and reliable”, or “Information must be easy to access”. This sort of statement is both indefinable and immeasurable, and very likely to cause problems when evaluating the delivered system.
Stakeholders in most projects have different targets for what they want out of the system, and often these goals are contradictory. The analyst must find a common middle ground that all the stakeholders can at least accept – you’ll never get them all to agree.
Even within the end user group there are conflicting requirements that must be tracked down and compromises reached.
When mining for requirements it is important not to leave these unstated assumptions undetected – question, question and question again. A powerful tool for identifying unstated assumptions is to have the stakeholders prepare a set of scenarios, each in the form “when this happens, I expect that to follow”. At every point in the scenario ask what and why, and then examine the answers again – challenge everything. Any statement in a scenario that can possibly be quantified must be: where a scenario says a screen must be displayed with some details, ask what details, and how quickly is acceptable. A “fast, user-friendly” system is undefined.
When undertaking a requirements investigation it is important to ensure that each of these individual “ities” is considered and addressed. Using the attributes already mentioned – security, performance, usability, reliability and modifiability – as a minimum (there are others which may or may not be applicable to your organisation or project), make sure you get a value or measure for each attribute from the key stakeholders.
A good way to approach this is to look for three values: acceptable, desirable, and ideal. For instance, the minimum acceptable performance measurement for filling a screen with details from the database may be five seconds. Anything slower will be considered a failure; the desirable time may be two seconds and the ideal is sub-second response. If the designers target the ideal they are likely to achieve the desirable, and they have an absolute bottom-line measure in the acceptable. Having these values quantified provides the testers with a set of measures that can be used when testing the system, removing the fuzzy element so often found in software testing.
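To show how quantified targets remove the fuzziness from testing, here is a minimal sketch in Python (not from the original article; the five-, two- and one-second figures follow the screen-fill example above, and all names are illustrative):

```python
import time

# Quantified performance targets from the requirements, in seconds.
# Figures follow the screen-fill example above; names are hypothetical.
ACCEPTABLE = 5.0   # slower than this is a failure
DESIRABLE = 2.0
IDEAL = 1.0

def classify_response(elapsed_seconds: float) -> str:
    """Map a measured response time onto the three requirement levels."""
    if elapsed_seconds <= IDEAL:
        return "ideal"
    if elapsed_seconds <= DESIRABLE:
        return "desirable"
    if elapsed_seconds <= ACCEPTABLE:
        return "acceptable"
    return "failure"

def timed_screen_fill(load_screen) -> str:
    """Time an operation (e.g. filling a screen from the database)
    and report which requirement level it met."""
    start = time.perf_counter()
    load_screen()
    return classify_response(time.perf_counter() - start)

# A tester can now assert against hard numbers instead of "fast":
assert classify_response(0.4) == "ideal"
assert classify_response(3.0) == "acceptable"
assert classify_response(6.2) == "failure"
```

The point is not the code itself but that “acceptable”, “desirable” and “ideal” become concrete pass/fail thresholds a test can check automatically.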
Keep in mind that, from a user perspective, these non-functional attributes are often the elements of the system they will be most directly affected by, especially those related to usability and reliability. In many cases achieving good quality for these elements will result in a perception of success for the system, even if the functional requirements are not fully met, whereas a system that meets all the functional requirements but does not feel good to the users will frequently be considered a failure.
Hastie is a Wellington-based trainer in systems analysis and design and a practising software developer with over 20 years in the industry. Send letters for publication in Computerworld to Computerworld Letters.