Task-Centered User Interface Design
A Practical Introduction
by Clayton Lewis and John Rieman
Copyright ©1993, 1994: Please see the "shareware notice" at the front of the book.
Contents | Foreword | Process | Users & Tasks | Design | Inspections | User-testing | Tools | Documentation

5.1 Choosing Users to Test
5.2 Selecting Tasks for Testing
5.3 Providing a System for Test Users to Use
5.4 Deciding What Data to Collect
5.5 The Thinking Aloud Method
        5.5.1 Instructions
        5.5.2 The Role of the Observer
        5.5.3 Recording
        5.5.4 Summarizing the Data
        5.5.5 Using the Results
5.6 Measuring Bottom-Line Usability
        5.6.1 Analyzing the Bottom-Line Numbers
        5.6.2 Comparing Two Design Alternatives
5.7 Details of Setting Up a Usability Study
        5.7.1 Choosing the Order of Test Tasks
        5.7.2 Training Test Users
        5.7.3 The Pilot Study
        5.7.4 What If Someone Doesn't Complete a Task?
        5.7.5 Keeping Variability Down
        5.7.6 Debriefing Test Users


5.7.6 Debriefing Test Users


We've stressed that it's unwise to ask specific questions during a thinking aloud test, and during a bottom-line study you wouldn't be asking questions anyway. But what about asking questions in a debriefing session after test users have finished their tasks? There's no reason not to do this, but don't expect too much. People often don't remember very much about the problems they've faced, even after a short time. Clayton vividly remembers watching a test user battle with a text-processing system for hours and then asking afterwards what problems they had encountered. "That wasn't too bad; I don't remember any particular problems," was the answer. He also interviewed a real user who had come within one day of quitting a good job because of failure to master a new system; that user, too, was unable to remember any specific problem they had had. Part of what appears to be happening is that if you work through a problem and eventually solve it, even with considerable difficulty, you remember the solution but not the problem.


There's an analogy here to the hidden-picture puzzles you see on kids' menus at restaurants: three rabbits are hidden in this picture, can you find them? When you first look at the picture you can't see them; after you find them, you can't help seeing them. In somewhat the same way, once you figure out how something works, it can be hard to see why it was ever confusing.


Something that might help you get more information out of questioning at the end of a test is recording the session on video, so you can show the test user the particular part of the task you want to ask about. But even if you do this, don't expect too much: the user may not have any better guess than you do about what they were doing.


Another, less problematic form of debriefing is asking for comments on specific features of the interface. People may offer suggestions, or have reactions, positive or negative, that might not otherwise be reflected in your data. This works better if you can take the user back through the various screens they saw during the test.




