The concept was simple: two teams of two people would be given the same set of requirements, the same spec of PC, and the same amount of time. Each team would try their best to implement the requirements in the allotted time. To help eliminate variables we ran the experiment twice, once in Wellington and once in Melbourne. We used a different set of requirements in each city, but both were web-based applications for which the database was provided. The same two team leaders competed in both cities, but each found a different person to pair with.
In Wellington I paired with Richard Hawkes, an experienced Java architect and developer from Stratex Networks, and Stuart Baird paired with Derek Watson, an experienced Microsoft developer from Intergen. The task was a simple one -- we had to create a course registration application suitable for a small college or high school, and had the morning to do it in.
Development was done in a public portion of the conference where we could be interrupted by the delegates, who were good enough to do so constantly during the morning break and at the start of lunch. Both teams used slight variations of extreme programming as a development process, with some agile modelling tacked on for good measure.
The day started badly for the J2EE team. We had problems with our development environment that took us about half an hour to resolve. This led to us not completing the requirements, and losing the Wellington round. The .Net team didn't have these problems because they had a fully integrated development environment. Still, given that it only cost us an extra half-hour of setup, the impact of a non-integrated environment on a real-world project is trivial.
At the end of that morning’s competition, respected testing authority James Bach suggested that he look at the applications and give feedback to the project manager to allow him to choose a winner. Obviously, with the .Net team completing more of the application (but not all of it), they had the lion’s share of the points, and won the competition.
Things were a little different in Melbourne. I arrived with nobody to pair with. I was going to ask Scott Ambler, but we all considered that to be cheating, so instead I paired with a colleague from Software Education, Colin Garlick. Colin, an experienced C++ and OO man, had very little knowledge of Java. Stuart paired with Peter Stanski, an experienced MS developer and an MS regional director, and with one of Peter's colleagues, another experienced MS developer. So, Stuart's team had three experienced MS developers, while I had only one experienced Java developer. The competition was already looking slanted towards the .Net team.
This time we had to build a defect tracking system, and we had a full day to do it in (by everyone's agreement). We had to log defects, have them allocated to a staff member for resolution, and produce graphs showing defect rates for various time periods.
We had no environment problems this time so we got off to a roaring start. I noticed that during lunch the .Net team were busy beavering away at their system, while Colin and I took time off to have lunch with Scott Ambler, James Bach, Erik Peterson, Steve Hayes, Sanjiv Augustine and other industry notables. It was a great lunch, though we didn’t eat too much lunch because I’d snaffled a basketful of jam-filled doughnuts during morning break.
At the end of the day both teams demonstrated their software to the delegates in the conference theatre using the projectors, so that everyone could clearly see what we had accomplished. The .Net team went first, and after a nice description of the benefits of their solution strategy they brought up the first screen, the log-on. They typed in their username and password and, boom, the system crashed, had to be taken down and brought back up again. The room was in shock and silence -- except for Colin and me, who were cheering raucously.
Eventually they got it running again, and it lurched from one 404 error screen to the next. When they got to the graphing section they told us that they couldn't achieve it on the web (although they assured us that it was possible), so they had implemented a fat-client application to do it for them -- one that would have to be installed on any client system that wanted to use it. And they graphed the wrong data. Their hollow excuses could hardly be heard over the sound of Colin and me splitting our spleens from laughing so hard. They kept telling us how they could access the application via a Palm Pilot, or via WAP. The project manager kept asking them to demonstrate the registering of a defect, but they couldn't.
Of course, our application ran without a hitch, and we had completed all of the requirements -- even the graphing. It was as ugly as sin, but it was fully functional. Our graph was a horizontal histogram drawn with ASCII characters (primarily asterisks), but it conveyed the right information.
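A horizontal histogram of asterisks takes only a few lines of Java. This is a hypothetical sketch of the idea, not the code we wrote on the day -- the method and data names are my own invention:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AsciiHistogram {
    // Render each label followed by a row of asterisks, one per defect.
    static String render(Map<String, Integer> defectCounts) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Integer> e : defectCounts.entrySet()) {
            sb.append(String.format("%-10s ", e.getKey()));  // left-pad label to a fixed width
            sb.append("*".repeat(e.getValue()));             // the "bar"
            sb.append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new LinkedHashMap<>();  // preserves insertion order
        counts.put("Week 1", 3);
        counts.put("Week 2", 7);
        System.out.print(render(counts));
    }
}
```

Ugly as sin, but it conveys the right information -- which was the point.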
Unfortunately, within the role-playing of the project both Colin and I were fired by the CEO of Software Education, Martyn Jones. One requirement was that we should have a screen where each defect could be assigned to an employee for actioning. We realised that we could automatically assign the work to the employee with the lowest number of jobs, thus load balancing the work and speeding the workflow. Our onsite customer, Shane Hastie, another Software Education tutor, was unavailable at the time we had this idea, so we couldn’t get permission to make the change. So we asked his boss, Martyn, if it would be okay. He told us that it wasn’t, because there was no way in the system of indicating that a staff member was off sick or on holiday, and so our scheme would have jobs allocated to people who weren’t there to action them.
Fair enough, we thought, so when Shane arrived back we asked him the same question but slightly weighted the discussion towards the benefits of the automated solution. He agreed that it was a good idea. However, Martyn had overheard what we were doing, and went wild. He told us that as soon as the system was completed we were out of a job.
It’s lucky that we were only role-playing.
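The auto-assignment scheme we proposed amounts to picking whoever has the fewest open defects. A minimal sketch, with invented class and method names (and Martyn's objection noted where it bites):

```java
import java.util.HashMap;
import java.util.Map;

public class DefectAssigner {
    // Open-defect count per employee.
    private final Map<String, Integer> openDefects = new HashMap<>();

    void addEmployee(String name) {
        openDefects.put(name, 0);
    }

    // Assign a new defect to the employee with the fewest open defects.
    // Martyn's objection applies here: with no notion of sick leave or
    // holidays, the "least loaded" employee may not be at work at all.
    String assign() {
        String least = openDefects.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .orElseThrow()
                .getKey();
        openDefects.merge(least, 1, Integer::sum);
        return least;
    }

    public static void main(String[] args) {
        DefectAssigner assigner = new DefectAssigner();
        assigner.addEmployee("alice");
        assigner.addEmployee("bob");
        System.out.println(assigner.assign());
        System.out.println(assigner.assign());
    }
}
```

With two employees and no prior load, the two assignments go to different people, which is the whole load-balancing argument in miniature.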
To keep things fair the .Net team had the same requirements change we had, so they didn’t have to develop the extra form either.
When the systems were tested by James Bach he found that the .Net system was unstable and insecure (no surprise there), and that the J2EE solution was very stable, produced really useful diagnostic information and was quite secure (again, no surprise there). The J2EE team won Melbourne by a landslide.
What was the conclusion? The .Net team took the Wellington round, and the J2EE team took the only round available in Melbourne. We all agreed that the Wellington experiment was too short, so I consider the Melbourne experience to be the more valid.
So, which is best, .Net or J2EE? I think that in the more reasonable Melbourne competition the two sides showed their real abilities. Wellington showed us that the experiment was too short and that a little hitch would be enough to completely throw the team out of the competition. Melbourne remedied that and allowed the platforms to show through. Not only did my J2EE team win, but we won against a bigger team of significantly more experienced developers.
At the Agile Development Conference in September we'll be holding another bake-off. This time we're pitting a team consisting of a single pair of developers against a team of two developers working separately -- pair programming against individual programming. I plan to win that one too.