Testing, testing ...

Most developers don’t write tests. Testing is often left to the quality assurance (QA) department (if you’re lucky enough to have one) or to the customer’s acceptance tests (CAT), if you’re developing that kind of software. Walk amongst your developers right now and ask to see their unit tests. If you find any then you’re in better shape than most companies.

In XP (extreme programming) we use slightly different terms to those you may be familiar with. We use the term "programmer test" where you may use "unit test", and we use the term "customer test" where you may use "acceptance test" or "system test".

Customer tests verify compliance with the customer’s requirements. In an XP project it is the job of QA to build the CATs (with suitable feedback from the customer). Sometimes your own developers fill the QA role, which is fine as long as the task is clearly separated from system development; sometimes the customers fill it themselves, which is also fine, as long as they can keep up with the XP team.

Programmer tests are written by the team’s developers and exist to confirm to the developers that their code works the way they expect it to. They also act as an enabling technology for the all-important refactoring, and function as a safety net for evolutionary change. These are the lowest level of feedback on an XP project.

If you were lucky enough to find programmer tests in your organisation, take a good look at them. Do they require the developer to check the results against something? It isn’t a test if the developer says something like “see, if the output is a decimal then it’s passed the test”. If they say something like “green line, passed, red line failed”, then this is a test. A test either passes or fails; it’s the test software, not the developer, that makes this decision. Also, tests should run without human interaction, to remove the possibility of mistakes, and to allow the test to run rapidly. That is, a test should be non-interactive, and binary.
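To make “non-interactive and binary” concrete, here is a minimal sketch using Python’s unittest, a member of the xUnit family; the `format_price` function and the test names are invented for illustration, not taken from any real project.

```python
import unittest


def format_price(amount):
    """Hypothetical production code under test."""
    return f"{amount:.2f}"


class PriceFormatterTest(unittest.TestCase):
    """A binary test: it passes or it fails; no human judges the output."""

    def test_price_is_formatted_to_two_decimals(self):
        # The test software, not the developer, makes the pass/fail decision.
        self.assertEqual(format_price(3.5), "3.50")


if __name__ == "__main__":
    unittest.main()  # runs without human interaction; reports pass or fail
```

Nowhere does a developer eyeball the output and declare it good: the assertion either holds or it doesn’t.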

There should be a high volume of programmer tests. Typically an XP team will create as much test code as production code. Paradoxically, this allows us to create production code faster than any other method. Why?

Part of our speed comes from our lack of fear when evolving the system. Most teams practice some form of iterative development these days. This means that for most of a project we have to change what exists in order to add something new. This is evolution, and it occurs on all systems as requirements change.

When evolving a system that is saturated with tests we know rapidly when a change breaks something, and we know where to go to fix the problem. So, we can make changes with impunity. Making these changes often requires changing the tests (if we change an interface then we have to change the associated tests). The customer tests protect us from breaking the system while we are changing the programmer tests.

Another part of our speed comes from ensuring that everything in our code is expressed just once. We call this Once and Only Once (O&OO). This saves us a massive amount of time when evolving a system. If something needs to be changed then there is at most one place to make that change, and our tests tell us if we broke anything.
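A small sketch of Once and Only Once; the GST rate and function names here are invented for illustration. Before the refactoring, changing the rate means hunting down every copy; afterwards there is exactly one place to change, and the tests confirm nothing else broke.

```python
# Before: the rate is duplicated across functions.
#   def invoice_total(net): return net + net * 0.125
#   def quote_total(net):   return net + net * 0.125

# After: the rate, and the rule for applying it, live in exactly one place.
GST_RATE = 0.125  # hypothetical rate, for illustration only


def with_gst(net):
    """The one and only place that knows how to apply GST."""
    return net + net * GST_RATE


def invoice_total(net):
    return with_gst(net)


def quote_total(net):
    return with_gst(net)
```

If the rate changes, only `GST_RATE` changes, and a saturating test suite tells us immediately whether any caller was relying on the old value.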

The xUnit frameworks support programmer testing of this sort. The first of these frameworks was JUnit, an open-source testing framework for Java, written by Kent Beck and Erich Gamma (www.junit.org). Now we have C++Unit, VBUnit, HttpUnit and many more. Whatever you’re writing, there is a testing framework available. And they’re all open-source, which means low costs and great support (and I promise that they won’t give you cancer).

The xUnit suites provide a simple framework for writing non-interactive, binary tests. On GUI systems they produce a green bar when all the tests pass and a red bar, with a problem report, if they don’t.

We write our programmer tests before we write any production code. We start by writing a single test that won’t even compile, and then the production class that allows it to compile. Once it compiles we write the code that allows the test to pass. Once it has passed we add another test, watch it fail, and again write the code that makes it pass. We continue like this throughout the project. It’s called Test First Development, and is one of the core practices of an XP team.
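The rhythm described above can be sketched as follows, again using Python’s unittest; the `Stack` class is an invented example, not from the article. In Python the first test fails with a `NameError` rather than a compile error, but the cycle is the same: write a failing test, then write just enough production code to make it pass, then repeat.

```python
import unittest


# Step 1: write the test first. Until Stack exists, this test fails,
# just as the article's first test won't even compile.
class StackTest(unittest.TestCase):
    def test_new_stack_is_empty(self):
        self.assertTrue(Stack().is_empty())

    def test_push_then_pop_returns_the_pushed_item(self):
        s = Stack()
        s.push(42)
        self.assertEqual(s.pop(), 42)


# Step 2: write just enough production code to make the tests pass.
class Stack:
    def __init__(self):
        self._items = []

    def is_empty(self):
        return not self._items

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()


if __name__ == "__main__":
    unittest.main()
```

Each new behaviour starts life as a failing test; the production code exists only because a test demanded it.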

Programmer tests make the programmer focus on results. If a programmer’s current task is to make a test pass, then they’ll probably not waste time creating a pretty icon for a button, unless that helps. That is, programmers tend not to gold-plate software if they’re doing test-first development.

Programmers want their tests to pass, and passing tests give them instant feedback on the quality of their work. Each passed test creates a little jolt of satisfaction for the programmer. This is addictive; after a couple of weeks of doing this most programmers feel lost without it.

If you’re serious about quality you should be writing both programmer and customer tests. If you’re serious about quality you should also be doing refactoring, but we’ll discuss that in depth in the next column.

Dollery is a Wellington IT consultant.
