Task-Centered User Interface Design: A Practical Introduction
by Clayton Lewis and John Rieman
Copyright ©1993, 1994: Please see the "shareware notice" at the front of the book.
4.1 Cognitive Walkthroughs
The cognitive walkthrough is a formalized way of imagining people's thoughts and actions when they use an interface for the first time. Briefly, a walkthrough goes like this: You have a prototype or a detailed design description of the interface, and you know who the users will be. You select one of the tasks that the design is intended to support. Then you try to tell a believable story about each action a user has to take to do the task. To make the story believable you have to motivate each of the user's actions, relying on the user's general knowledge and on the prompts and feedback provided by the interface. If you can't tell a believable story about an action, then you've located a problem with the interface.
Here's a brief example of a cognitive walkthrough, just to get the feel of the process. We're evaluating the interface to a personal desktop photocopier. A design sketch of the machine shows a numeric keypad, a "Copy" button, and a push button on the back to turn on the power. The machine automatically turns itself off after 5 minutes of inactivity. The task is to copy a single page, and the user could be any office worker. The actions the user needs to perform are to turn on the power, put the original on the machine, and press the "Copy" button.
In the walkthrough, we try to tell a believable story about the user's motivation and interaction with the machine for each action. A first cut at the story for action number one goes something like this: The user wants to make a copy and knows that the machine has to be turned on. So she pushes the power button. Then she goes on to the next action.
But this story isn't very believable. We can agree that the user's general knowledge of office machines will make her think the machine needs to be turned on, just as she will know it should be plugged in. But why shouldn't she assume that the machine is already on? The interface description didn't specify a "power on" indicator. And the user's background knowledge is likely to suggest that the machine is normally on, like it is in most offices. Even if the user figures out that the machine is off, can she find the power switch? It's on the back, and if the machine is on the user's desk, she can't see it without getting up. The switch doesn't have any label, and it's not the kind of switch that usually turns on office equipment (a rocker switch is more common). The conclusion of this single-action story leaves something to be desired as well. Once the button is pushed, how does the user know the machine is on? Does a fan start up that she can hear? If nothing happens, she may decide this isn't the power switch and look for one somewhere else.
That's how the walkthrough goes. When problems with the first action are identified, we pretend that everything has been fixed and we go on to evaluate the next action (putting the document on the machine).
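The procedure just illustrated is easy to capture as structured notes: one record per action, holding the evaluator's story and any places where it breaks down. Here is a minimal sketch in Python (not from the book; the class and field names are invented for illustration) of one way such a session might be recorded:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ActionStory:
        """The evaluator's story for a single user action."""
        action: str                                        # e.g. "press the power button"
        believable: bool                                   # did the story hold together?
        problems: List[str] = field(default_factory=list)  # notes on where it broke down

    @dataclass
    class Walkthrough:
        """Context and results for one walkthrough of one task."""
        interface: str   # the prototype or design description being evaluated
        users: str       # who the users are assumed to be
        task: str        # the task the design is intended to support
        stories: List[ActionStory] = field(default_factory=list)

        def problems_found(self) -> List[str]:
            """Every place where a believable story could not be told."""
            return [f"{s.action}: {p}"
                    for s in self.stories if not s.believable
                    for p in s.problems]

    # Recording the first action of the photocopier example:
    wt = Walkthrough(interface="desktop photocopier sketch",
                     users="any office worker",
                     task="copy a single page")
    wt.stories.append(ActionStory(
        action="turn on the power",
        believable=False,
        problems=["nothing indicates whether the machine is already on",
                  "the power switch is on the back, out of sight from the desk"]))
    print(wt.problems_found())

Nothing in the method depends on this particular bookkeeping; a paper form or a whiteboard works just as well, as long as each action and each breakdown gets noted.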
You can see from the brief example that the walkthrough can uncover several kinds of problems. It can question assumptions about what the users will be thinking ("why would a user think the machine needs to be switched on?"). It can identify controls that are obvious to the design engineer but may be hidden from the user's point of view ("the user wants to turn the machine on, but can she find the switch?"). It can suggest difficulties with labels and prompts ("the user wants to turn the machine on, but which is the power switch and which way is on?"). And it can note inadequate feedback, which may make the users balk and retrace their steps even after they've done the right thing ("how does the user know it's turned on?").
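Those four kinds of problems can also serve as an informal checklist to run against each action. The sketch below (again Python, with the questions paraphrased from the paragraph above; none of this code comes from the book) applies such a checklist to the photocopier's first action:

    # Four questions distilled from the kinds of problems listed above,
    # asked about each action in turn (the wording is a paraphrase).
    CHECKLIST = [
        "Will the user know this action is needed at all?",
        "Can the user find the control that performs it?",
        "Do the labels and prompts point to the right control?",
        "Will the user get feedback that the action worked?",
    ]

    def review_action(action, answers):
        """Report the checklist questions for which no believable story exists."""
        return [f"{action}: FAILS - {q}"
                for q, ok in zip(CHECKLIST, answers) if not ok]

    # The photocopier's first action fails on all four counts:
    for line in review_action("turn on the power", [False, False, False, False]):
        print(line)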
The walkthrough can also uncover shortcomings in the current specification, that is, not in the interface but in the way it is described. Perhaps the copier design really was "intended" to have a power-on indicator, but it just didn't get written into the specs. The walkthrough helps make the specs more complete. On occasion the walkthrough will also send the designer back to the users to discuss the task. Is it reasonable to expect the users to turn on the copier before they make a copy? Perhaps it should be on by default, or turn itself on when the "Copy" button is pressed.
Walkthroughs focus most clearly on problems that users will have when they first use an interface, without training. For some systems, such as "walk-up-and-use" banking machines, this is obviously critical. But the same considerations are also important for sophisticated computer programs that users might work with for several years. Users often learn these complex programs incrementally, starting with little or no training, and learning new features as they need them. If each task-oriented group of features can pass muster under the cognitive walkthrough, then the user will be able to progress smoothly from novice behavior to productive expertise.
One other point from the example: Notice that we used some details that were implicitly pulled from a situated understanding of the task: the user is sitting at a desk, so she can't see the power switch. It would be impossible to include all relevant details like this in a written specification of the task. The most successful walkthroughs will be done by designers who have been working closely with real users, so they can create a mental picture of those users in their actual environments.
Now here are some details on performing walkthroughs and interpreting their results.