Task-Centered User Interface Design
A Practical Introduction
by Clayton Lewis and John Rieman
Copyright ©1993, 1994: Please see the "shareware notice" at the front of the book.

5.1 Choosing Users to Test
5.2 Selecting Tasks for Testing
5.3 Providing a System for Test Users to Use
5.4 Deciding What Data to Collect
5.5 The Thinking Aloud Method
        5.5.1 Instructions
        5.5.2 The Role of the Observer
        5.5.3 Recording
        5.5.4 Summarizing the Data
        5.5.5 Using the Results
5.6 Measuring Bottom-Line Usability
        5.6.1 Analyzing the Bottom-Line Numbers
        5.6.2 Comparing Two Design Alternatives
5.7 Details of Setting Up a Usability Study
        5.7.1 Choosing the Order of Test Tasks
        5.7.2 Training Test Users
        5.7.3 The Pilot Study
        5.7.4 What If Someone Doesn't Complete a Task?
        5.7.5 Keeping Variability Down
        5.7.6 Debriefing Test Users


5.1 Choosing Users to Test


The point of testing is to anticipate what will happen when real users start using your system. So the best test users will be people who are representative of the people you expect to have as users. If the real users are supposed to be doctors, get doctors as test users. If you don't, you can be badly misled about crucial things like the right vocabulary to use for the actions your system supports. Yes, we know it isn't easy to get doctors, as we noted when we talked about getting input from users early in design. But that doesn't mean it isn't important to do. And, as we asked before, if you can't get any doctors to be test users, why do you think you'll get them as real users?


If it's hard to find really appropriate test users, you may want to do some testing with people who are some approximation to what you really want: medical students instead of doctors, say, or maybe even premeds, or college-educated adults. This may help you flush out some of the big problems (the ones you overlooked in your cognitive walkthrough because you knew too much about your design and assumed some things were obvious that aren't). But you have to be careful not to let the reactions and comments of people who aren't really the users you are targeting drive your design. Do as much testing with the right kind of test users as you can.

HyperTopic: Ethical Concerns in Working with Test Users

Serving as a test user can be very distressing, and you have definite responsibilities to protect the people you work with from distress. We have heard of test users who left the test in tears, and of a person in a psychological study of problem solving who was taken away in an ambulance under sedation after being unable to solve what appeared to be simple logic puzzles. This is no joke.


Another issue, which you also have to take seriously, is embarrassment. Someone might well feel bad if a video of them fumbling with your system were shown to someone who knew them, or even if just numerical measures of a less-than-stellar performance were linked with their name.


The first line of defense against these kinds of problems is voluntary, informed consent. This means you put no pressure on people to participate in your test, and you make sure they are fully informed about what you are going to do if they do participate. You also make clear to test users that they are free to stop participating at any time, and you avoid putting any pressure on them to continue, even though it may be a big pain for you if they quit. You don't ask them for a reason: if they want to stop, you stop.


Be very careful about getting friends, co-workers, or (especially) subordinates to participate in tests. Will these people really feel free to decline if they want to? If such people are genuinely eager to participate, fine, but don't press the matter if they hesitate, even (or especially) if they give no reason.


During the test, monitor the attitude of your test users carefully. You will have stressed that it is your system, not the users, that is being tested, but they may still get upset with themselves if things don't go well. Watch for any sign of this, remind them that they aren't the focus of the test, and stop the test if they continue to be distressed. We are opposed to any deception in test procedures, but we make an exception in this case: an "equipment failure" is a good excuse to end a test without the test user feeling that it is his or her reaction that is to blame.


Plan carefully how you are going to deal with privacy issues. The best approach is to avoid collecting information that could be used to identify someone. We make it a practice not to include users' faces in videos we make, for example, and we don't record users' names with their data (we just assign user numbers and use those for identification). If you will collect material that could be identified, such as an audio recording of comments in the test user's voice, explain clearly up front whether there are any conditions under which anyone but you will have access to this material. Let users tell you if they have any objections, and abide by what they say.
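
As a concrete illustration of the user-number practice, here is a minimal sketch in Python of one way to keep the name-to-number key separate from the session data from the start. The file names and details are our own illustrative assumptions, not something prescribed by the book or by any regulation; the point is simply that the key file stays with you, while the data files carry only the numbers.

    import csv
    import itertools

    _counter = itertools.count(1)

    def assign_user_number(name, key_file="participant_key.csv"):
        # Record the name -> number mapping in a separate, restricted
        # key file and return the number; only the number goes into
        # the data files.
        user_number = "U%03d" % next(_counter)
        with open(key_file, "a", newline="") as f:
            csv.writer(f).writerow([user_number, name])
        return user_number

    def log_observation(user_number, task, note, data_file="session_log.csv"):
        # Append an observation identified only by the user number.
        with open(data_file, "a", newline="") as f:
            csv.writer(f).writerow([user_number, task, note])

    # Example use: the key file stays with the experimenter; the
    # session log can be summarized or shared without exposing who
    # took part.
    uid = assign_user_number("(participant name)")
    log_observation(uid, "task 1", "hesitated at the Save dialog")
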


A final note: taking these matters seriously may be more than just a matter of doing the right thing. If you are working in an organization that receives federal research funds, you are obligated to comply with formal rules and regulations that govern the conduct of tests, including getting approval from a review committee for any study that involves human participants.

