Category: Software User Interface

  • Software Evolution: User Interface Concepts: Whose Perspective, Who’s in Charge?

    This post dives more deeply into the issue of conceptual UI evolution as introduced in this post. Understanding conceptual UI evolution, which in practice is a spectrum, enables you to build applications whose UIs produce dramatically better results than the competition: getting more done, more quickly, and at lower cost.

    Whose perspective?

    The least evolved UI concept looks at things completely from the point of view of the computer – what do I (the computer, which is really the person writing the application) need to get my job done? In this concept, the UI's job is conceived as the computer getting things from the user, and protecting the computer from the user’s mistakes. This was, at one time, the prevailing concept for user-machine interactions, and it remains surprisingly widespread, although few people would admit to thinking this way today.

    At the other end of the spectrum, the software designer looks at things completely from the point of view of the human user – what do I (the human user, which is really the person using the application) need right now, and what can I do? In this concept, the UI's job is conceived as the human getting things from the computer and directing it to do things, with the computer presenting options, possibilities and help that are as close and immediate as possible to what the user is probably trying to do.

    Obviously, the technical side of UIs has played a role in what’s possible here. In the early days of computers, we were glad to have them, and decks of cards and batch processing were way better than the alternatives. Computer time was rare and valuable; people were cheap by comparison; so it just made sense to look at things from the computer’s point of view.

    The equation reversed long ago. Most computers spend most of their time idling, waiting in anxious anticipation for a user to do something, anything, just GET ME OUT OF THIS IDLE LOOP!! Sorry. That was the computer in me breaking out. Normally under control. Sorry.

    Now, it’s entirely feasible to construct user interfaces entirely from the human’s point of view.

    Who’s in charge?

    There are some cases where the purpose of a program (and its UI) is entirely to be at the service of the user, with essentially no external constraints or advice to be given. In a wide variety of practical cases, however, there are lots of people whose concerns need to be reflected in the way the computer is used. At one end of this spectrum, the user is in charge. If the user is in charge, and we want to make sure the user does a certain thing under certain circumstances (think of a customer service call center environment), we give the user extensive training, and all sorts of analysts study the results, so that certain customers are responded to in certain ways under different circumstances. We monitor the user’s calls, look at what they entered into the computer, and work on changing what they do through group and individual meetings, training sessions, and so on. All our effort is focused on the user, who clearly controls the computer; if we want things to be different, we go to the center of power, the user.

    At the other end of this spectrum, the computer is in charge, in the sense that all major decisions and initiatives originate in the software. Beyond basic how-to-use-a-computer training, the user needs no training – everything you want the user to do is in the software, from what they should work on next to how they should respond to a particular request. Everyone who would have tried to influence the users directly now tries to put their knowledge into the software, which applies it and delivers instructions to users as appropriate. Taken to the extreme, the human operator becomes little more than a complex and expensive media translation device, getting information the computer can’t get directly, and sending information to places the computer can’t reach directly.

    So what does this mean in reality? It varies from application to application, but the net effect is always the same – the computer operator needs little training in how to respond to customers under different circumstances, because that information is all in the software. The operator mostly needs to learn how to take his cues and direction from the software, which provides a constant stream of what you might think of as “just in time training.” The user has no way of knowing if what he’s being asked to do or say has been done by many people for years, or is a new instruction just for this unusual situation.

    This approach enables a revolution in how organizations respond to their customers. It makes complete personalization possible: you can respond one way to a high-value customer in a situation, and another way to a low-value customer in the identical situation. It also enables nearly immediate, widespread changes to the way you respond to customers, because you have a central place to enter the new “just in time” instructions, and don’t have to go through the painful process of building customer service training materials, training the trainers, and getting everyone into classes, only to end up with inconsistent and incomplete execution of your intentions.
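As a concrete illustration, the central “just in time” instructions can be thought of as a lookup table that the software consults on every interaction. A minimal sketch, with entirely hypothetical situation names, customer tiers and wording (none of this is from any real system):

```python
# Hypothetical sketch of centrally maintained "just in time" instructions.
# Situation names, tiers and wording are all illustrative.

# Central instruction table: (situation, customer_tier) -> operator instruction.
INSTRUCTIONS = {
    ("late_payment", "high_value"): "Waive the fee and thank them for their business.",
    ("late_payment", "low_value"):  "Explain the fee and offer a payment plan.",
}

def next_instruction(situation: str, customer_tier: str) -> str:
    """Deliver the current instruction for this situation and customer."""
    return INSTRUCTIONS.get(
        (situation, customer_tier),
        "Escalate to a supervisor.",  # default when no rule matches
    )

# A policy change is one central edit: no retraining, effective immediately
# for every operator on the next matching interaction.
INSTRUCTIONS[("late_payment", "low_value")] = "Offer a one-time 50% fee reduction."

print(next_instruction("late_payment", "high_value"))
print(next_instruction("late_payment", "low_value"))
```

The point of the sketch is the shape, not the content: because every response flows through one central table, changing how the organization responds is an edit to data, not a retraining campaign.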

    While I’ve discussed this concept in terms of a call center application, exactly the same idea applies to the system interacting with people directly.

    The system is really in charge

    Let’s understand this way of building a UI with the “system in charge” a little better, since many people are unfamiliar with it.

    The first step is to put all the real knowledge about how an operator should respond to which situation into the system, and to enable changes to be made at will. The next step is to change operator/user training so that operators understand how to map from the unstructured interactions they have to the choices presented by the system, and how to respond; normally, you have to train them to do this. Finally, you can provide a set of pre-recorded inputs to the operators and capture their responses, giving them practice in applying their training before they are inflicted on actual people.

    Instead of thinking about the UI itself, think about the training that is normally required to get people to use an application, monitor their use of it on an on-going basis, and finally to make changes to the application and how people use it. You can start by thinking of the training as being like a wizard mode of the client, but with a training/case-based spin. Your trainers could build a big branching tree of what people on the other side of the phone can say, and how we should respond. All the content would be supplied by the training/customer service group. This would operate as the default mode of the application, until an operator has “passed,” and optionally beyond.

    On one part of the screen could be a list of things customers can say to us. For each item, there would be one or more variations, not identical to the text, that would be recorded. In “pure” training mode, the application would randomly pick one, and the PC would play the recording. The operator would pick the item on the list of potential customer sayings that he felt was closest, and the system would then provide a suggested reply for the operator to give, and (if appropriate) highlight a field and give the operator a directive to interact with that field. This would continue cycling until the transaction was completed, abandoned or otherwise ended.
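The “pure” training mode cycle described above can be sketched as a loop over a branching tree. Everything here (node names, recording files, field names, the playback and highlight stubs) is hypothetical, just to show the shape of the cycle:

```python
# Illustrative sketch of the "pure" training-mode cycle: play a recorded
# customer saying, let the operator pick the closest list item, then show
# the suggested reply and highlight the relevant field.
import random

def play(recording):        # stand-in for hypothetical audio playback
    print(f"[playing {recording}]")

def highlight(field):       # stand-in for a hypothetical UI directive
    print(f"[highlighting {field}]")

# Each node: recorded variations of what the customer might say, plus the
# choices shown to the operator; each choice carries a suggested reply, a
# field to highlight, and the next node in the branching tree.
TREE = {
    "start": {
        "recordings": ["billing_q_v1.wav", "billing_q_v2.wav"],
        "choices": {
            "Customer asks about a charge": {
                "reply": "I can look that up. May I have your account number?",
                "field": "account_number",
                "next": "lookup",
            },
        },
    },
    "lookup": {"recordings": [], "choices": {}},  # terminal node ends the cycle
}

def training_step(node_id, pick_closest):
    """One cycle of pure training mode; returns None when the transaction
    is completed, abandoned or otherwise ended."""
    node = TREE[node_id]
    if not node["choices"]:
        return None
    play(random.choice(node["recordings"]))       # randomly pick a variation
    choice = pick_closest(list(node["choices"]))  # operator picks closest item
    step = node["choices"][choice]
    highlight(step["field"])
    return step["reply"], step["next"]
```

A driver would simply call `training_step` in a loop, feeding each `next` node back in until it returns None.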

    In “assisted” training mode, the list of customer requests, suggested replies and field highlights would remain, with the customer role being provided by a trainer or by real customers doing real transactions. In this case, a recording could be made of the conversation between operator and customer for additional checking or potentially for dispute resolution.

    Obviously, the application needs to be extended to provide this framework, and then to download the content. But the advantage is completely realistic, integrated training. If we make changes to the application, we can automatically throw clients into this training mode to give them “just in time” training on the new features.

    For what it’s worth, this is not a new idea. For example, operator training is a big issue in large-scale call center environments. Over the years, the best of those environments have evolved from classroom training to videos, to on-screen help, to something like what I've described, which is now the state of the art in large-scale call centers and is supported by major vendors of call center software. It’s the best because it’s completely grounded in reality and is an extension of the actual software operators have to use. The power of the approach shows most clearly in post-training changes and updates. Obviously, with things this integrated, it’s pretty easy to direct a client in training to a training server and check the data he’s entering.

    Now that voice bots are becoming available, this approach to building UIs is all the more important and valuable. In any case, optimizing the work of the human is always in order, as spelled out in detail in this post. That post gives a detailed description of the huge project at Sallie Mae in which I played a part in the 1990s, and describes the 10X gains that can be achieved by taking UI optimization seriously. The main principles of human optimization in the UI are largely ignored by UI designers, making things many whole-number factors less efficient than they could be in many cases. Amazing but typical. I've gone into just how and why this "I'd rather be stupid and get crappy results" approach to building software in general and UIs in particular persists in this post, in which I also describe the personal evolution that led me to these thoughts.

  • Software Evolution: User Interface

    User interfaces have gone through massive evolution since their first appearance in the 1950s. Lots of people talk about this. But not many separate the two main threads of UI evolution: technical and conceptual.

    The technical thread is all about the tools and techniques. Examples of elements in the technical thread are the mouse, function keys, menus, and graphical windowing systems. Advances in the technical thread of UI evolution are created by researchers, systems people and systems makers, both hardware and software. People who build actual UIs generally have to use the tools they’ve been given.

    The conceptual thread of UI evolution is about thoughts in the heads of application builders about what problem they’re trying to solve and how they’re supposed to go about solving it. Application builders are generally taught the base concepts they are supposed to use, and then usually apply those concepts throughout their careers. But not all application builders have the same thoughts in their heads. The thoughts they have exhibit a clear progression from less evolved to more evolved. It is interesting that the way application builders think about what job they are supposed to do is almost completely independent of the tools they have, i.e., the technical thread. Yes, they can and do use the tools available to them, but this conceptual thread of UI evolution rides “above” the level of the technical tools.

    The evolution of UI on the technical side is widely discussed and understood. As hardware has gotten better and less expensive, the richness of the interaction between computer and human has increased, with the computer able to present more information to the user more quickly, and with immediate reaction on the computer’s part to user requests. For the most part, this is a good thing, although people who think only about user interfaces can make serious product design mistakes when they fail to put the user interface in the broader context of product design. For example, generally speaking, pointing at a choice with a mouse is better than entering a code on a keyboard, and giving users lots of control through a rich user interface is better than giving them no control. However, in situations where there are repetitive tasks and efficiency is very important, the keyboard beats the mouse any day of the week, and in situations where tasks performed by humans can be automated, it is far better to have the computer do it — quickly, effectively and optimally — rather than depending on and using the time of a human being, regardless of how wonderful his UI may be. This post goes into detail with examples on this subject.

    Conceptual UI evolution, by contrast to evolution on the technical side, is not widely discussed and not generally understood. Understanding it enables you to build superior software: software that lets tasks be accomplished with less human effort and greater accuracy.

    UI Concepts

    The conceptual level of user interfaces is most easily understood by asking two questions: (1) whose perspective is the primary one in the mind of the application UI builder – the computer’s or the user’s; and (2) to what extent is the user relied upon to operate the software correctly and optimally? The most primitive UIs “look” at things from the computer’s point of view, and, somewhat paradoxically, rely almost entirely on the user to get optimal results from the computer. The most advanced UIs “look” at things from the user’s point of view, while at the same time imposing as little burden of intelligence and decision-making as possible on the user.

    When you state it this way – a UI should be user-centered and should help the user to be successful – you may well assume that building UIs this way would be standard operating procedure, and that building UIs any other way would be considered incompetent. Sadly, this is not the case. Like all the patterns I describe in my series on software evolution, most people, companies and even industries tend to be “at” a particular stage of evolution in the subject areas I describe here; companies gain comparative advantage by taking the “next” step in the pattern evolution earlier than others, and by exploiting it for gain more vigorously than others.

    Some of the patterns I've observed in software evolution just tend to repeat themselves historically with minor variations. Other patterns, of which this is an example, don't seem to be as inevitable or time-based. This pattern is much like the pattern of increasing abstraction in software applications, described in detail here. Competitive pressures and smart, ambitious people tend to drive applications to take the next step on the spectrum of goodness.

    For UI, the spectrum can be measured. The UI that requires the least time and effort by a user to get a given job done is the best. That's it!

    Do UI experts think this way? Is this a foundational part of their training and expertise? Of course not! Just because computers are involved, no one should be under the illusion that we live in a numbers-driven world. For all the talk of numbers, people are more influenced by the culture they're part of, and generally want validation from that culture. Doing something further up the UI optimization curve than is customary in their milieu is nearly always an act of rebellion, and most people just don't do it.
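The spectrum really can be put into numbers. One way is a keystroke-level estimate in the spirit of the Keystroke-Level Model of Card, Moran and Newell; the operator times below are commonly cited approximations, not measurements of any particular system:

```python
# Rough keystroke-level estimate: total task time is the sum of the times of
# the primitive operators the UI forces the user to perform. The values are
# commonly cited approximations (Keystroke-Level Model), not measurements.

OP_SECONDS = {
    "K": 0.28,  # press a key or button (skilled typist)
    "P": 1.10,  # point at a target with the mouse
    "H": 0.40,  # move a hand between keyboard and mouse
    "M": 1.35,  # mental preparation / decision
}

def task_time(ops: str) -> float:
    """Estimated seconds for a task written as a string of operators."""
    return round(sum(OP_SECONDS[op] for op in ops), 2)

# Selecting one of ten on-screen options by typing its digit, vs. reaching
# for the mouse, pointing at it and clicking:
keyboard = task_time("MK")    # decide, press the digit
mouse    = task_time("MHPK")  # decide, reach for mouse, point, click
print(keyboard, mouse)
```

Crude as it is, an estimate like this lets you compare two candidate UIs for the same job with a number instead of an opinion, which is exactly the kind of measurement the milieu tends to skip.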

  • How to Design User Interfaces for Heavily-Used Software

    There is lots of knowledge about software user interfaces — standards, models, experts and all the rest. But there's a problem: as things stand, no distinction is made between designing a UI for someone using a piece of software for the first time and designing for someone who uses it over and over. This results in an astounding waste of time for the heavy users of software. It's long since time to fix this glaring hole in UI theory!

    Most UI’s are built to optimize the initial experience of the person using it. The assumption is made that the person is using the software for the first time and needs to have a good experience. Otherwise the user will reject the software, not use it again, and it might feel bad! So the software needs to be “user-friendly.”

    But what if the people using the software use it for a good part of the day, every day? If we can make them just 10% more efficient, then that’s worth something. So what if they need some training at the beginning? And the fact is, in many cases we’re looking at way more than 10% improvement. In large categories improvements of 50% are available, and in some large-scale cases, substantial whole-number factors of gain.

    Since everyone thinks they know how to build user-friendly software (if only more of it were!), and since most programmers don’t even think about high-use, high-productivity software, here are the main principles for building a user interface for people who will spend a great deal of time using it, and who want to get more done in less time:

    • Arrange the work to minimize movement
      • The first and most important step is to eliminate anything that takes the human’s eyes or hands away from the computer. Paper is a prime example – having an image of the paper on the screen is vastly more productive than handling physical paper.
      • Given that eyes and hands are on the computer, the next step is to eliminate the use of the slow input device – the mouse or touch pad. Everyone loves the mouse. They think requiring its use is the most user-friendly thing you can do. But we’re talking about productivity here, and in any productivity race between mouse and keyboard, the mouse is a sad, distant last. So with the possible exception of logging in and out, just lose the mouse. Don’t compromise or be “nice” about it.
      • Now that all inputs are keyboard inputs (and they are, aren’t they???), reduce the number of keystrokes to the bare minimum. You would be amazed, when you count keystrokes (yes, you should actually count keystrokes; yes, you), how many can be eliminated in the average application.
      • Finally – don’t laugh – minimize eye movement. It’s not the time, it’s the attention.
    • Arrange the work to minimize thought
      • Sometimes, like when people are writing something, they just need to think. That’s OK.
      • But any other kind of thinking is just a waste of time. Think about your own thought-free actions; I hope typing is a good example. When you have to look at the keyboard or think about it at all, instead of just typing, what happens to your rate of typing? Does it improve as a result of the thought? I didn’t think so.
    • If some training is needed at the beginning – OK; if training is needed on an on-going basis, you are probably making the user do things the computer should do – instead of training, automate.
    • Embodying domain knowledge in your users is really tempting. Resist: you should make your system so that the users don’t need domain knowledge. This is made particularly hard because you typically need domain experts at the beginning to make sure your system is sensible, and they want to see the way they think about the problem embodied in the system.
      • A good example is working with health care forms. There is a huge amount of domain knowledge involved in doing this right. But the vast majority of this knowledge can be put into the system, so that the people end up just doing things that only people can do.
    • Arranging the UI and the people so they know what they’re doing and why is really tempting – resist that temptation! The more your users just do their work, focusing just on productivity and accuracy, leaving the rest to the “system,” the better off everyone will be.
      • A prime example of this is QA. In most systems, the users know whether they are doing the work for the first time or checking someone else’s work. This knowledge is built into the user roles and the queuing system. It’s true that in some cases this can’t be avoided, because of the nature of the work. But you’re well advised to avoid it, if you possibly can.
      • What does role hiding look like? An example is in heads-down data entry. In the early days, experienced people would look at what was supposed to be typed, look at what was actually typed, and check for errors. It turns out that it was faster and more accurate to simply have the “checker” enter the data as though for the first time – and it always was the first time they were entering it. Then the system would compare the results, and flag an inspection or a third keying to resolve the conflict. This came to be called “blind double-key entry,” and remains the gold standard for productivity and quality in the industry.
      • Why does role hiding work? When someone knows what someone else thought the outcome or result should have been, it influences them, one way or another, and it takes them time. It’s like when you suspect a teacher of giving biased grades, or you want to know how good a food is. You get the most objective results by giving the tests to a second teacher for grading (telling them the original, anonymous teacher got sick, or something), or by giving out the food for tasting without labels of any kind. Remember the famous tests of Pepsi vs. Coke? Coke drinkers would always prefer Coke when it was labeled as Coke – but when the two colas were unlabeled, most would prefer Pepsi, even the Coke loyalists.
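The heart of blind double-key entry, as described above, is nothing more than an independent comparison of the two keyings, with disagreements routed to a third keying. A minimal sketch, with illustrative field names:

```python
# Minimal sketch of the comparison step in blind double-key entry: two
# operators key the same document independently; the system flags any
# fields where the keyings disagree for a third keying. Field names and
# sample data are illustrative only.

def compare_entries(first: dict, second: dict) -> list:
    """Return the sorted list of fields where the two keyings disagree."""
    return sorted(
        field
        for field in first.keys() | second.keys()  # union: catch missing fields too
        if first.get(field) != second.get(field)
    )

# Two independent keyings of the same (hypothetical) form:
entry_a = {"name": "J. Smith", "dob": "1981-04-12", "amount": "120.00"}
entry_b = {"name": "J. Smith", "dob": "1981-04-21", "amount": "120.00"}

conflicts = compare_entries(entry_a, entry_b)
print(conflicts)  # these fields go to a third keying to resolve the conflict
```

Note that neither operator ever sees the other's keying; only the system does the comparing, which is precisely what keeps the second keying "blind."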

    Building a UI that optimizes the productivity of the people who use it is a new and different way of thinking for many software folks. But it's well worth pursuing — the results speak for themselves, and the professionals who use the UI will appreciate being able to get their work done in less time.
