Case studies in user testing

The final article from a series of four introducing user-centred web design

The real challenge of web design today is to design websites with their end users in mind. This involves designing from the point of view of “user experience”. When someone visits a website for the first time, and finds all the information they need without confusion, and can then carry out all their tasks without failing: that’s when we can say that the site has a “great user experience”.

This user experience, as we explained in the second of these four articles, is central to web design that is driven by the principles of User-Centred Design, or UCD. Your customers are your website users – and your website “works” only if they judge that it does. Experience tells us that users seldom give poor sites a second chance – so designing sites with customers’ needs as the number-one priority makes good sense.


See also: Part 3 - User experience in action

Part 2 - What it is and how to get some

Part 1 - Introducing user-centred web design


In this article we will look at some of the methods that can be used to implement UCD in real websites. In the previous article we looked at User experience assessment and Enquiry analysis: two methods that employ the “expert review”, or best practice evaluation, as the basis of their approach.

With this article we’re looking at two more methods of implementing UCD: Card sorting and Usability testing. These involve working with actual web users in test environments, observing their actions, and drawing conclusions about how to rectify any “black spots” that appear in the site’s user experience.

Once again we’ll consider some case studies drawn from the experience of the Wired Internet Group, a Christchurch-based company that promotes the principles of UCD in the local web industry. These case studies will reveal some of the kinds of real-world usability issues that may be encountered, along with the sometimes simple steps that you can take to remedy them and give your site the user experience your customers deserve.

Card sorting

Card sorting is a specific form of user testing and a flexible tool for determining structure (or ‘information architecture’) in websites and intranets. In this kind of test, the participants are asked to think about the categories and headings under which information is organised in a website or intranet, and to show what makes sense in terms of their own tasks and goals.

Information architecture (IA) is the skeleton of the site, on which hangs the muscle – the content – that does the work. Card sorting is a powerful way of discovering what kind of IA makes sense to site users.

Participants either come up with subject headings themselves, or sort headings they are given into groups or categories. This is done by writing the headings on postcard-sized cards, and then arranging them on a table or pin-board. The method has the advantage of not using computers, so even “technologically-challenged” users can easily take part.

A card sorting test typically involves:

Planning – this includes interviews with site owners, expert reviews of existing designs and the development of user profiles or personas.

Test design – devising the sorting exercises and questionnaires.

Sorting – individual sessions with a number of participants. Often the finished card arrangements are photographed for analysis and inclusion in the report.

Analysis and reporting – card sorting requires careful analysis of all the information architectures users have proposed, in order to discover the trends and commonalities. Reports may include “wire frames” or prototype outlines of suggested structures.
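The article does not describe the mechanics of that analysis, but one common way of surfacing commonalities across participants’ sorts is a simple co-occurrence count: how often each pair of cards ends up in the same group. The minimal Python sketch below illustrates the idea; the card names and results are invented purely for illustration.

from itertools import combinations
from collections import Counter

# Hypothetical card-sort results: each participant's groups of cards.
# The card names below are invented for illustration only.
sorts = [
    [{"Staff directory", "Contacts"}, {"Leave forms", "Payroll"}],
    [{"Staff directory", "Contacts", "Payroll"}, {"Leave forms"}],
    [{"Contacts", "Staff directory"}, {"Payroll", "Leave forms"}],
]

# Count how often each pair of cards is placed in the same group.
pair_counts = Counter()
for participant in sorts:
    for group in participant:
        for pair in combinations(sorted(group), 2):
            pair_counts[pair] += 1

# Pairs grouped together by most participants point to categories
# that make sense to users.
for (a, b), n in pair_counts.most_common():
    print(f"{a} + {b}: grouped together by {n} of {len(sorts)} participants")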

We were asked to help a major business consultancy design a new intranet to serve 850 staff in five local offices. There was intense competition for “screen space” between different service lines and support sections. The project managers felt that many of the proposed structures were counter-intuitive and would integrate poorly with staff workflow. We were brought in to act as a “circuit-breaker”.

In the end we did the card sorting with nearly 20 staff, though the clients quickly realised that the results were clear after about six sessions (just as we had warned them!). We followed this with another round in which we made on-screen mock-ups of two competing models: one was our proposal, the other the model favoured by the management team.

The final result showed that management wanted far too many top-level categories (between 12 and 15 in all). Our tests showed staff found this confusing: they wanted only seven categories, corresponding to the classes of information they really used in their jobs – rather than what their managers thought they should use.

Most importantly, we found every staff member ranked the staff contact directory as one of the three most important intranet resources. The original idea was for this to be placed in the second level, hidden in a dropdown menu from the navigation bar. In the end we placed this above the navigation, in the “banner” at the top, where it was constantly visible no matter what page of the intranet was in use.

We calculated this would save every staff member two seconds every time they used the directory. Staff estimated they would use it at least once an hour. Calculated across 850 staff, this one change stood to save at least 900 hours of wasted staff time per year – and this was only one example of time saved by making the intranet interface match user needs!
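As a rough check on that figure, here is the arithmetic as a small Python sketch; the eight-hour working day and 250 working days per year are our assumptions for illustration, not figures stated in the case study.

# Rough check on the claimed saving. The working-day and working-year
# figures are assumptions for illustration, not from the case study.
seconds_saved_per_use = 2
uses_per_hour = 1            # "at least once an hour"
hours_per_day = 8            # assumed working day
days_per_year = 250          # assumed working year
staff = 850

total_seconds = (seconds_saved_per_use * uses_per_hour
                 * hours_per_day * days_per_year * staff)
print(total_seconds / 3600)  # roughly 944 hours – consistent with "at least 900 hours"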

Usability testing

The most effective tool in the UCD toolbox is full usability testing with real users. In this process, representative tasks – or scenarios – are devised for users to carry out with an existing site or a prototype design. Their interactions with the site are observed and often recorded on digital video for later analysis.

An example of a scenario might be: “Find the page on the site that tells you the procedure for making a warranty claim”. Test participants are recruited who match the profile of the site’s intended audience – this profile might take the form of a “persona”, or description of an imaginary representative user. For example, this might be as broad as “people interested in cooking” or as specialised as “optometrists”.

Individual participants will sit at a computer and be observed carrying out the scenario tasks – usually they are asked to think aloud about the site and their reactions to it. They will also be asked some questions to gather data, such as “what are the three most important things you want to learn from this site?”

Usability testing usually includes the following steps:

Planning – this includes interviews with site owners, expert reviews of existing designs and the development of user profiles or personas.

Test design – devising scenarios and questionnaires.

Actual testing – a succession of one-on-one user test sessions, with anything between three and 10 participants, though six is optimal.

Reporting – reports may be written, verbal, or presented as a combination, such as a PowerPoint.

A major distribution company developed a B2B site to enable its clients and sales reps to place and track orders, and pay invoices online. Yet uptake of the site fell below expectations. So Wired undertook a round of testing to gather hard data about actual users’ experience of the site. A total of 15 participants (both sales reps and clients) were observed performing key tasks, such as placing an order. They were also asked questions about how they perceived the site. This process took three days.

We found users valued the intended functions of the site – but thought the site was complicated and slow to use. Observing their actual use of the site showed the problem lay in a mismatch between the site’s design and the way users expected it to work. Many of the perceived problems had actually been addressed as the site was built – but the design of the navigation and main pages hid these solutions.

For instance, several participants asked for “short cut” tools to make it quicker and easier to make large orders. They wanted to be able to “pre-select” favourite catalogue items so they could be added to an order cart with one click. In fact, these tools had been provided, but they were “hidden” under headings that users did not find intuitive.

The tests also showed clients had no incentive to use the site. For instance, the company often made special price offers – advertised within the site – that could only be accessed by calling the 0800 sales number. So people who used the site to place orders could not obtain these savings. Test participants made comments like: “They say they want us to use the site – but they keep giving us reasons not to”. Our clients often tell us that actual user comments of this sort can radically change their ideas about the design of their website.

The test report listed all the findings, grouped by category, accompanied by action-oriented recommendations and “next steps”. Clear guidelines were given for improving the screen design to reveal the “short cuts” in the ordering process. The recommendations also stressed the need to incentivise the uptake of new sales tools.

Before the testing process, the company had no evidence of why the site wasn’t being adopted as hoped, so it couldn’t even begin to address the issues. Armed with the test report, it could see that every dollar spent on changing the site would lead directly to increased user satisfaction – which is the key to online success.

• This series of articles has introduced user-centred web design and the field of user experience studies.

Russell lectures at Christchurch Polytechnic, where he is the programme leader in the Graduate Diploma of Information Design. He consults with Wired Internet Group. Contact him at russellb@cpit.ac.nz. Blog: http://www.wired.co.nz/blog/default.asp
