Task-Centered User Interface Design
A Practical Introduction
by
Clayton Lewis
and
John Rieman
Copyright ©1993, 1994: Please see the "shareware notice" at the front of the book.
5.5.5 Using the Results
Now you want to consider what changes you need to make to your design based on data from the tests. Look at your data from two points of view. First, what do the data tell you about how you THOUGHT the interface would work? Are the results consistent with your cognitive walkthrough or are they telling you that you are missing something? For example, did test users take the approaches you expected, or were they working a different way? Try to update your analysis of the tasks and how the system should support them based on what you see in the data. Then use this improved analysis to rethink your design to make it better support what users are doing.
Second, look at all of the errors and difficulties you saw. For each one make a judgement of how important it is and how difficult it would be to fix. Factors to consider in judging importance are the costs of the problem to users (in time, aggravation, and possible wrong results) and what proportion of users you can expect to have similar trouble. Difficulty of fixes will depend on how sweeping the changes required by the fix are: changing the wording of a prompt will be easy, changing the organization of options in a menu structure will be a bit harder, and so on. Now decide to fix all the important problems, and all the easy ones.
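To make the triage concrete, here is a small illustrative sketch (not from the book) of the rule above: score each problem's importance as its cost to users weighted by the proportion of users likely to hit it, note how hard each fix is, then fix everything that is important and everything that is easy. The field names, scales, and thresholds are all assumptions chosen for the example.

```python
# Illustrative triage of usability problems found in user testing.
# Scales and cutoffs below are assumed, not from the book.

# Each problem: (description, cost to users 1-5, proportion of users
# affected 0-1, fix difficulty 1-5).
problems = [
    ("Prompt wording unclear",      2, 0.9, 1),  # cheap fix
    ("Menu options poorly grouped", 4, 0.6, 3),  # important, harder
    ("Rare crash on undo",          5, 0.1, 5),  # costly but rare, hard fix
]

IMPORTANCE_THRESHOLD = 2.0  # assumed cutoff for "important"
EASY_FIX = 2                # assumed cutoff for "easy"

def importance(cost, proportion):
    """Cost of the problem weighted by how many users will hit it."""
    return cost * proportion

# The book's rule: fix all the important problems, and all the easy ones.
to_fix = [
    desc
    for desc, cost, prop, difficulty in problems
    if importance(cost, prop) >= IMPORTANCE_THRESHOLD or difficulty <= EASY_FIX
]

print(to_fix)  # → ['Prompt wording unclear', 'Menu options poorly grouped']
```

The point of the sketch is only that both criteria are applied independently: a cheap fix gets made even if few users are affected, and an important problem gets fixed no matter how sweeping the change.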
Thinking aloud is widely used in the computer industry nowadays, and you can be confident you'll get useful results if you use it. But it's important to understand that test users can't tell you everything you might like to know, and that some of what they will tell you is bogus. Psychologists have done some interesting studies that make these points.
Maier had people try to solve the problem of tying together two strings that hung down from the ceiling too far apart to be grabbed at the same time (Maier, N.R.F. "Reasoning in humans: II. The solution of a problem and its appearance in consciousness." Journal of Comparative Psychology, 12 (1931), pp. 181-194). One solution is to tie some kind of weight to one of the strings, set it swinging, grab the other string, and then wait for the swinging string to come close enough to reach. It's a hard problem, and few people come up with this or any other solution. Sometimes, when people were working, Maier would "accidentally" brush against one of the strings and set it in motion. The data showed that when he did this people were much more likely to find the solution. The point of interest for us is, what did these people say when Maier asked them how they solved the problem? They did NOT say, "When you brushed against the string that gave me the idea of making the string swing and solving the problem that way," even though Maier knew that's what really happened. So they could not and did not tell him what feature of the situation really helped them solve the problem.
Nisbett and Wilson set up a market survey table outside a big shopping center and asked people to say which of three pairs of panty hose they preferred, and why (Nisbett, R.E., and Wilson, T.D. "Telling more than we can know: Verbal reports on mental processes." Psychological Review, 84 (1977), pp. 231-259). Most people picked the rightmost pair of the three, giving the kinds of reasons you'd expect: "I think this pair is sheerer" or "I think this pair is better made." The trick is that the three pairs of panty hose were IDENTICAL. Nisbett and Wilson knew that given a choice among three closely-matched alternatives there is a bias to pick the last one, and that that bias was the real basis for people's choices. But (of course) nobody SAID that's why they chose the pair they chose. It's not just that people couldn't report their real reasons: when asked, they made up reasons that seemed plausible but were wrong.
What do these studies say about the thinking-aloud data you collect? You won't always hear why people did what they did, or didn't do what they didn't do. Some portion of what you do hear will be wrong. And you're taking a special risk if you ask people specific questions: they'll give you some kind of an answer, but it may have nothing to do with the facts. So don't treat the comments you get as some kind of gospel. Instead, use them as input to your own judgment processes.