2.3 Using the Tasks in Design
Back to the traffic modelling system and our sample tasks. What did we do with them after we got them? Taking a look at their fate may clarify what the tasks should be like, as well as helping to persuade you that it's worth defining them.
Our first step was to write up descriptions of all the tasks and circulate them to the users (remember, we're back in us-versus-them mode, with designers and users on clearly different teams). We included queries for more information where we felt the original discussion had left some details out. We got back corrections, clarifications, and suggestions, which were incorporated into the written descriptions.
We then roughed out an interface design and produced a SCENARIO for each of the sample tasks. A scenario spells out what a user would have to do and what he or she would see step-by-step in performing a task using a given system. The key distinction between a scenario and a task is that a scenario is design-specific, in that it shows how a task would be performed if you adopt a particular design, while the task itself is design-independent: it's something the user wants to do regardless of what design is chosen. Developing the scenarios forced us to get specific about our design, and it forced us to consider how the various features of the system would work together to accomplish real work. We could settle arguments about different ways of doing things in the interface by seeing how they played out for our example tasks.
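If it helps to see the distinction in programmers' terms, here is a minimal sketch in Python (the names and structure are our own invention for illustration; nothing in the traffic modelling system actually looked like this):

    from dataclasses import dataclass, field

    @dataclass
    class Task:
        # Design-independent: what the user needs to do, however the
        # eventual system chooses to support it.
        description: str

    @dataclass
    class Step:
        user_action: str      # what the user does in this particular design
        system_response: str  # what the screen shows as a result

    @dataclass
    class Scenario:
        # Design-specific: the same task, walked through one proposed
        # design step by step. A second candidate design would get a
        # second scenario for the same task.
        task: Task
        steps: list[Step] = field(default_factory=list)

    task = Task("Place a traffic signal at the Fifth-and-Main intersection")
    scenario = Scenario(task, [
        Step("Click the signal icon in the palette", "Icon highlights"),
        Step("Click the intersection on the map", "Signal appears there"),
    ])

The point of the sketch is only that the task stays fixed while each candidate design produces its own scenario.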
Handling design arguments is a key issue, and having specific tasks to work with really helps. Interface design is full of issues that look as if they could be settled in the abstract but really can't. Unfortunately, designers, who often prefer to look at questions in the abstract, waste huge amounts of time on pointless arguments as a result.
For example, in our interface users select graphical objects from a palette and place them on the screen. They do this by clicking on an object in the palette and then clicking where they want to put it. Now, if they want to place another object of the same kind, should they be made to click again on the palette, or can they just click on a new location? You can't settle the matter by arguing about it on general grounds.
You can settle it by looking at the CONTEXT in which this operation actually occurs. If the user wants to adjust the position of an object after placing it, and you decide that clicking again somewhere places a new object, and if it's legal to pile objects up in the same place, then you have trouble. How will you select an object for purposes of adjustment if a click means "put another object down"? On the other hand, if your tasks don't require much adjustment, but do require repeated placement of the same kind of object, you're pushed the other way. Our tasks seemed to us to require adjustment more than repeated placement, so we went the first way.
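As a hedged sketch only (in Python; our actual system was not built this way, and the handler names are hypothetical), here is what the policy we chose looks like in code: a palette click arms a single placement, and any later click on the canvas selects an object for adjustment rather than stamping down another copy.

    class Canvas:
        def __init__(self):
            self.objects = []          # (kind, x, y) tuples placed so far
            self.pending_kind = None   # set when the user clicks the palette
            self.selected = None       # object currently selected for adjustment

        def palette_click(self, kind):
            self.pending_kind = kind   # arm a one-shot placement

        def canvas_click(self, x, y):
            if self.pending_kind is not None:
                obj = (self.pending_kind, x, y)
                self.objects.append(obj)
                self.pending_kind = None   # one placement per palette click
                self.selected = obj        # ready for immediate adjustment
            else:
                # No placement armed: interpret the click as selection,
                # even when several objects are piled up at the same spot.
                hits = [o for o in self.objects if (o[1], o[2]) == (x, y)]
                self.selected = hits[-1] if hits else None

Under the other policy, canvas_click would keep pending_kind armed and place a copy on every click, and selection would need some extra gesture.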
This example brings up an important point about using the example tasks. It's important to remember that they are ONLY EXAMPLES. Often, as in this case, a decision requires you to look beyond the specific examples you have and make a judgement about what will be common and what will be uncommon. You can't do this just by taking an inventory of the specific examples you chose. You can't defend a crummy design by saying that it handles all the examples, any more than you can defend a crummy design by saying it meets any other kind of spec.
We represented our scenarios with STORYBOARDS, which are sequences of sketches showing what the screen would show, and what actions the user would take, at key points in each task. We then showed these to the users, stepping them through the tasks. Here we saw a big gain from the use of the sample tasks. They allowed us to tell the users what they really wanted to know about our proposed design, which was what it would be like to use it to do real work. A traditional design description, showing all the screens, menus, and so forth, out of the context of a real task, is pretty meaningless to users, and so they can't provide any useful reaction to it. Our scenarios let users see what the design would really give them.
"This sample task idea seems crazy. What if you leave something out? And won't your design be distorted by the examples you happen to choose? And how do you know the design will work for anything OTHER than your examples?" There is a risk with any spec technique that you will leave something out. In choosing your sample tasks you do whatever you would do in any other method to be sure the important requirements are reflected. As noted above, you treat the sample tasks as examples. Using them does not relieve you of the responsibility of thinking about how other tasks would be handled. But it's better to be sure that your design can do a good job on at least some real tasks, and that it has a good chance of working on other tasks, because you've tried to design for generality, than to trust exclusively in your ability to design for generality. It's the same as that point about users: if a system is supposed to be good for EVERYBODY you'd better be sure it's good for SOMEBODY.
If you're working for a small company or developing small projects for a few internal users at a large firm, the task-centered design approach may be all you need. But for larger projects, you'll probably have to work within the structure of an established software engineering procedure. How to apply task-centered principles within that procedure will vary depending on the software engineering approach used at your company. But we can give some general guidelines that are especially useful in the early stages of development.
Most large software projects are developed using some version of the "waterfall method." The basic waterfall method assumes that a piece of software is produced through a clearly defined series of steps, or "phases":
* Requirements analysis
* Specification
* Planning
* Design
* Implementation
* Integration
* Maintenance
In its strictest version, this method states that each phase must be completed before the next phase can begin, and that there's no chance (and no reason) to return to an earlier phase to redefine a system as it's being developed.
Most software engineering specialists today realize that this approach is unrealistic. It was developed in the era of punch cards and mainframes, so it doesn't have a real place for considerations of interactive systems. Even in the mainframe era it was less than successful, because the definition of what's required inevitably changes as the system is developed.
Various modifications to the phases of the waterfall method and their interaction have been proposed. However, it's not unusual to find productive software development environments that still incorporate many steps of the method, partly for historical reasons and partly because the approach helps to define responsibilities and costs for various activities within a large software project. With some effort, the task-centered design approach can supplement the early stages of a waterfall environment.
Requirements Analysis
The waterfall method's initial "Requirements Analysis" phase describes the activity of defining the precise needs that the software must meet. These needs are defined in terms of the users and their environment, deliberately making no reference to how the needs will actually be met by the proposed system.
This is exactly the same approach as we suggest for describing representative tasks: define what the user needs to do, not how it will be done. The difference is that the representative tasks in task-centered design are complete, real, detailed examples of things users actually need to do. The requirements produced by traditional software engineering, on the other hand, are abstract descriptions of parts of those representative tasks.
This is an important distinction, and we want to emphasize it most strongly:
Task-centered design focuses on REAL, COMPLETE, REPRESENTATIVE tasks. Traditional requirements analysis looks at ABSTRACT, PARTIAL task elements.
Here's an example. For a document processing system, a representative task might be to produce this book. Not to produce "a book," but to produce "version 1 of Task-Centered Design, by Lewis and Rieman." That representative task supplements the detailed partial tasks collected in traditional requirements analysis, which might include things such as "key in text" and "check spelling" and "print the document."
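In data terms, the contrast is simply this (a sketch in Python; the requirement strings come from the example above, while the detail fields are invented for illustration):

    # Traditional requirements analysis: abstract, partial task elements.
    requirements = [
        "key in text",
        "check spelling",
        "print the document",
    ]

    # Task-centered design: one real, complete, representative task.
    representative_task = {
        "title": "Produce version 1 of Task-Centered Design,"
                 " by Lewis and Rieman",
        # Hypothetical details a real task description would pin down:
        "details": [
            "import the authors' existing draft chapters",
            "handle figures as well as running text",
            "produce camera-ready output for the publisher",
        ],
    }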
So if you're doing a traditional requirements analysis, you need to supplement it by collecting some representative tasks. The two approaches complement each other nicely. The traditional approach helps to ensure that all important functions of the system are recognized, while the representative tasks in the task-centered approach provide an integrated picture of those functions working together.
Specification
In the traditional "Specification" phase of software engineering, the requirements are used to produce a description of the system that includes the details needed by the software designers and implementers. The customers -- the end users -- can then sign off on this document, and the software team can begin to plan and design the actual system. This sounds like great stuff from a management point of view, but in practice it often falls apart. Users aren't experts at reading specification documents, and they have trouble imagining how the system will actually perform. Various alternatives to written specifications have been proposed, including prototypes and a more iterative approach to design, both of which fit nicely into the task-centered design approach.
However, even if you're still doing written specifications, the representative tasks can be of value. Include those tasks, with some details about how they will be performed, in the specification document. The big win here is that the customers will be able to understand this part of the specifications. It will also force the specifications writer to consider a complete task, which may catch problems that could be missed when single functions are considered individually.
Notice that the description of the proposed software hasn't quite reached the stage where you could do a complete "scenario," as we have defined it. Many of the details, such as the names of menu items, the distribution of functionality among dialog boxes, etc., remain to be defined. But a high-level overview of the interactions can be described, and doing this well is a test of your understanding of the users' needs.
Planning, Design, and Beyond
From this point on, the strict waterfall method and the task-centered design approach take very different paths. Many of the principles we describe can be used in doing the first pass at system and interface design, but inherent in the task-centered approach is the need for iteration: it's very rare that the first design of an interface is a complete success. Several iterations of testing and redesign are required, and that may well involve jumping backward through the phases of the waterfall method, something that's traditionally not allowed. Fortunately, the strict forward-moving method is seldom adhered to today. Most development environments recognize the need for some iteration, and that should make it possible to accommodate the general spirit of the task-centered approach.