Tips for Usability Professionals in a Down Economy


The usability profession is experiencing the current economic downturn just like everyone else. This article offers ten tips for usability professionals trying to weather this economic storm:

  1. Be More Efficient with Your Usability Tests
  2. Get More Data with Less Work
  3. Deepen Your Usability Skills
  4. Broaden Your Other Skills
  5. Demonstrate Business Value
  6. Keep up on Technology
  7. Keep Tabs on Competitors
  8. Maximize Your Visibility
  9. Compare Design Alternatives
  10. Don’t Re-invent the Wheel

Specific suggestions and examples are provided for each tip.


The economy stinks. Regardless of where you live in the world, you’ve probably been impacted by the current economic downturn. Most of us know people who have lost their jobs or even their homes. This is probably the worst downturn that most of us have experienced in our lifetimes.

The usability profession certainly isn’t immune to these problems. Many people are looking for jobs in the field, either because they were laid off or they’re just starting their career. And many others working in the field want to make sure they keep their jobs and continue to grow.

I’ve worked in this field long enough that I’ve been through some significant downturns before. I’ve seen strategies that people used in previous downturns to improve their chances of finding or keeping good jobs in this field. Some worked and some didn’t. The purpose of this essay is to share some of the strategies that I think are more successful. I’ve structured them around a “Top 10” list of tips for usability professionals in a down economy.

Tip #1: Be More Efficient with Your Usability Tests

We’re always trying to improve the efficiency of our work, but that’s even more important when resources are tight. Many of us work in companies where staff has been reduced and budgets cut, but we’re still asked to keep up with the same demand that we had before the cuts. Traditional usability testing, whether in a lab or remotely over the phone, tends to be pretty time-consuming. Very useful and important, but not very efficient. It tends to be time-consuming both for the usability people running the test sessions and for any project team members observing them. The following are several ways that you might be able to improve the efficiency of this process:

  • Staggered test sessions. One technique that can help improve efficiency for the observers, if you can do it, is to schedule overlapping and staggered participant sessions. For example, assume that the start-up time for a participant session, including the briefing, informed consent, session instructions, etc., is about ½ hour. And the main part of the session (interacting with a prototype, doing tasks, giving feedback) takes another ½ hour. So the idea would be to start one session say at 9:00 and then another, parallel, session at 9:30. The observers would then “tune in” starting at 9:30 to watch the main part of the Participant 1 session, switching to the Participant 2 session at 10:00. This could continue, with the observers experiencing very little downtime during the day. Of course this requires more than one usability person to facilitate the parallel sessions, and it requires the technology or facilities to support two simultaneous sessions. This isn’t really any more efficient for the usability people, but it does allow you to get the testing done in about half the time, and the entire process appears much faster and more efficient to observers and other business partners.
  • More tasks, not more participants. I’ve lost track of the number of times I’ve been asked how many participants you really need for a usability test. We find that our business partners are usually the ones pushing for more participants in a test. (“What, you want to test with only 6 or 8 participants? How can that possibly be valid!?”) Tight times are when you especially want to push back on the desire to test with 15 or 20 participants when you’re confident that 6 or 8 is sufficient. One piece of evidence you can point to is the analysis by Lindgaard and Chattratichart (2007) of the data from Comparative Usability Evaluation-4 (CUE-4, Molich & Dumas, 2008). They looked at the results from nine teams that conducted independent usability tests of the same Website. They identified the superset of usability issues from all of the tests. Then they looked at the correlations between the percentage of that full set of issues for each test and the number of participants in the test. There was no correlation. But there was a significant (positive) correlation with the number of tasks used in the test. More tasks uncovered more issues, while more participants didn’t. One technique you can use to get greater task coverage is to identify a core set of tasks that every participant will be asked to do and another set of tasks that you would select from for each participant. Selection could be done randomly, by some rotation, or by the participants choosing the ones most relevant for them.
  • Put the observers to work. A final tip related to traditional usability testing is to make the observers do some work. A technique that’s becoming popular is to ask the observers to write down usability issues on sticky notes as they observe them. Then between sessions they post them on a white board, grouping similar issues if they have time. At the end of the sessions, the usability person could facilitate a session where the issues are further grouped, refined, and even prioritized. These can then be easily incorporated into a final report or presentation that includes additional information from the sessions (e.g., task completion rates, subjective ratings, other information that may not have been obvious to the observers). Obviously this technique works best when all the observers are in the same room. (We often provide a room for the observers to gather in even when the sessions are remote.)
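The core-plus-rotating task selection described in the second bullet can be sketched in a few lines of Python. This is a minimal illustration, not from the article; the task names and counts are invented, and it assumes the number of optional tasks per session divides the optional pool evenly:

```python
import random

def assign_tasks(core_tasks, optional_tasks, n_participants, n_optional=3, seed=42):
    """Give every participant all core tasks plus a rotating subset of
    optional tasks, so optional tasks are covered evenly.
    Assumes n_optional divides len(optional_tasks) evenly."""
    rng = random.Random(seed)
    assignments, pool = [], []
    for _ in range(n_participants):
        if not pool:  # refill the rotation pool with a fresh shuffle
            pool = rng.sample(optional_tasks, len(optional_tasks))
        assignments.append(core_tasks + pool[:n_optional])
        pool = pool[n_optional:]
    return assignments

# Hypothetical task lists for a financial-services site
core = ["find fund performance", "open an account"]
optional = ["update address", "set up alerts", "download a statement",
            "compare two funds", "find the fee schedule", "reset a password"]
schedule = assign_tasks(core, optional, n_participants=8)
```

With six optional tasks and three per session, any two consecutive participants cover the whole optional pool, so every optional task gets multiple observations without lengthening any individual session.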

Tip #2: Get More Data with Less Work

That may sound like an oxymoron, but it doesn’t have to be. The key is collecting data from users online, via the Web. This data collection can take on many forms, including various kinds of online surveys, online card-sorting studies, and online usability tests. Using the following techniques it’s possible to get data from hundreds of users in a matter of days.

  • Online surveys have been around for quite a while. If you’re not familiar with the tools for creating, administering, and analyzing online surveys, you should be; lists of the popular tools are easy to find online. Many of these tools provide a free version, although they usually limit the number of respondents. Many of the paid versions are still quite reasonable (e.g., 1,000 respondents per month for $19). What you can do with these tools is limited mainly by your creativity and ingenuity. Online surveys can be useful in just about any phase of the development of a new product. Early on, they can be used to get input from target users about desired features and functions, or about what they find confusing about an existing application. During iterative design and prototyping, online surveys can be used to get quick feedback about design alternatives. And once an application or Website is deployed, you can provide a link to an online survey for feedback.
  • Online card-sorting is one of my favorite techniques for getting input from users about how an application or Website should be organized. The basic idea is to present users with a set of virtual cards that contain brief descriptions of the functions or pages of a new Website or application. The users then sort the cards into what they perceive as logical groups and then name those groups. Most of the online tools provide some analysis techniques, such as hierarchical cluster analysis, to help you get a better understanding of the groups that the users created. A variation on this technique involves presenting the users with the cards as well as the names of the groups to sort them into. This is a great way to compare different candidate information architectures for a Website.
  • Online usability testing has the potential to transform the way we do our work, at least for those of us involved in Web design. Online testing is not that different from traditional lab testing: both involve representative users doing realistic tasks with a prototype or live site. The difference is that with a traditional test you’re directly observing each participant, while with an online test you’re observing indirectly via automated data-collection methods (task timing, clickstream recording, etc.). Automated data collection allows you to collect data from far more participants than you ever could in a lab setting. We routinely do online usability studies where we get data from over 1,000 participants in just two days. Why would you want data from such large numbers of participants? Mainly because it allows you to easily make comparisons between alternative designs. We just finished an online study comparing two subtly different designs for an online account-opening wizard. Participants were randomly assigned to one design or the other. With over 400 participants per condition, we were able to reliably detect that one design resulted in a 2.5% higher completion rate than the other. Even a 2.5% improvement in the completion rate can have a major impact on the bottom line.
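A comparison like the account-opening study in the last bullet typically comes down to a two-proportion z-test on the completion counts. Here is a minimal, self-contained sketch; the counts are invented for illustration, and whether a given 2.5-point difference reaches significance depends on the base rate and the sample sizes:

```python
from math import sqrt, erf

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for the difference between two completion rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pooled = (successes_a + successes_b) / (n_a + n_b)  # rate under H0
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a - p_b, z, p_value

# Hypothetical counts: Design A, 340 of 400 completed; Design B, 330 of 400
diff, z, p = two_proportion_z_test(340, 400, 330, 400)
```

Randomly assigning each participant to one design, as the study above did, is what makes this simple between-subjects comparison valid.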

None of these online data collection techniques can completely replace working directly with users, as in a traditional focus group, interview, contextual observation session, or lab usability test. Some things can’t be learned without directly interacting with users. But online data collection can help complement and even guide those sessions.
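Returning to card sorting for a moment: the hierarchical cluster analysis that the tools perform usually starts from a card-by-card co-occurrence matrix, i.e., how often each pair of cards landed in the same group. A minimal sketch of that first step, using invented sort data (tools then typically cluster on (1 - similarity) as a distance):

```python
from itertools import combinations
from collections import defaultdict

def cooccurrence(sorts):
    """Fraction of participants who placed each pair of cards in the same group.

    `sorts` holds one card sort per participant; each sort is a list of
    groups, and each group is a list of card names."""
    counts = defaultdict(int)
    for groups in sorts:
        for group in groups:
            for a, b in combinations(sorted(group), 2):
                counts[(a, b)] += 1
    n = len(sorts)
    return {pair: c / n for pair, c in counts.items()}

# Three hypothetical participants sorting five cards for a banking site
sorts = [
    [["balance", "statements"], ["transfer", "pay bill"], ["alerts"]],
    [["balance", "statements", "alerts"], ["transfer", "pay bill"]],
    [["balance", "transfer"], ["statements", "alerts"], ["pay bill"]],
]
sim = cooccurrence(sorts)
```

Pairs with high similarity (here, "transfer" and "pay bill" appear together for two of the three participants) are candidates for the same navigation category.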

Tip #3: Deepen Your Usability Skills

Some usability people get “stuck in a rut,” always using the same methods for every project. If the only tool on your tool-belt is a hammer, everything starts looking an awful lot like a nail. Employers value people who have an array of methods at their disposal and who apply the right ones to each situation.

None of us knows everything there is to know about usability. Techniques keep improving, new tools and techniques are constantly being developed, and our knowledge base keeps growing. The following are some ways of enhancing your usability skills:

  • Read a usability book you haven’t read before. New usability books are coming out all the time. Just last year, new books or editions came out on moderating usability tests (Dumas & Loring, 2008), improving the usability of Web forms (Jarrett & Gaffney, 2008), usability testing (Rubin & Chisnell, 2008), and usability metrics (Tullis & Albert, 2008). That’s not even counting a host of new books on design. Start a “usability-book-of-the-month” discussion group at your company or in your local area.
  • Take advantage of free or inexpensive Web-based seminars, such as the virtual seminars from User Interface Engineering or the Webinars from Human Factors International.
  • Attend relevant conferences, especially the UPA conference. Unfortunately, many organizations cut back on conference travel in tight times. If that happens to you, try to make some kind of deal to still attend (e.g., agree to make a presentation summarizing key things you learned at the conference; pay your own travel if they will pay the registration).
  • Consider going back to school. Is it necessary to have an advanced degree to succeed in the usability field? Absolutely not. But in tight times, an employer looking to hire a usability person is probably going to favor a candidate with a Master’s degree over a candidate who doesn’t have one, other factors being similar. There’s an excellent list of graduate programs on the Human Factors and Ergonomics Society Website.
  • Keep up on publications in the field. There’s an ever-growing number of publications in this field. Try to keep up with at least some of them, such as this one, the Journal of Usability Studies. If you can’t attend the annual UPA conference, get the conference proceedings on CD (available from the UPA store). Subscribe to free newsletters that summarize recent usability findings, such as the UI Design Newsletter from Human Factors International, Usability News from the Software Usability Research Lab (SURL) at Wichita State University, UIE Tips from User Interface Engineering, and Jakob Nielsen’s Alertbox.

Even more important than just learning about some new usability techniques is actually applying them. Watch for appropriate opportunities to try out some of the new tools on your tool-belt.

Tip #4: Broaden Your Other Skills

Tight economic times usually cause employers to ask their employees to take on responsibilities they might not have otherwise. Usability people who can step up to broader roles than they had before are likely to be more valued. Similarly, someone looking to hire a new usability person is more likely to be impressed with a candidate who can also do some prototyping, for example. I’m not saying that you need to become a Renaissance person, but that you might want to “spread your wings” a bit. Broadening your skills in one or more of the following areas might be appropriate:

  • Skills closely allied with usability and user experience work. This could include prototyping, graphic design, information architecture, application development, accessibility, or a host of other related areas. Maybe this is the time to take that class in Dreamweaver or Photoshop that you’ve always wanted to take. Then try to find some opportunities to practice some of your new skills.
  • Project management and team leadership skills. People who can effectively oversee and coordinate multiple aspects of a project, including schedules and resources, are highly valued. Closely related, but not the same, is being able to effectively manage people or lead teams. Organizations are becoming more and more “matrixed,” where individuals have potentially several different managers or team leaders that they are responsible to. Someone who can work well in this kind of environment, and can lead teams where the team members may not actually report to them, is highly sought-after.
  • Business or subject-matter skills. For most of us, usability is not our business. It’s not the main thing that our company or organization is all about. In my case, that business is financial services. I’ve known some usability people who believe they can do an effective usability test without knowing much about the business context or subject-matter of the thing they’re testing. I disagree. While you don’t have to be an expert in that business, it helps if you have a basic understanding. And it helps even more if you have something beyond a basic understanding. Similarly, having a more in-depth knowledge of business in general can be very helpful. Maybe it’s time to get started on that MBA degree you’ve thought about.
  • Presentation skills. It’s hard to over-emphasize the importance of presentation skills for a usability person. Most of us have learned that writing a 50-page usability report and tossing it over the wall to the project team is not the way to really have an impact on the project (except, perhaps, on their heads!). So we now do some form of presentation as a way of summarizing and highlighting our key findings and recommendations. Being able to put together an effective presentation and being able to deliver it effectively are both critical skills. Some good books are available to help you make better presentations (e.g., Duarte, 2008; Reynolds, 2008), but the best way I know to get better is to do lots of presentations and to study good presenters. Take every opportunity you can to present and watch others present (e.g., within your company, at local and national UPA meetings). One technique we’ve started using at my company, mainly as a way of honing presentation skills, is to give people exactly 7 ½ minutes to make a presentation on some specific topic.

Tip #5: Demonstrate Business Value

Tight economic times make it even more critical to demonstrate the business value of your usability work. At least two steps need to be taken to do this: (a) measure and (b) convert the measurements to something that senior managers care about. The following are some tips for each of those steps:

  • Measure. Some usability people seem to be allergic to statistics. They think they’re going to break out in a rash if they calculate a 90% confidence interval. One of the reasons my colleague, Bill Albert, and I wrote Measuring the User Experience was to show usability people that statistics aren’t something to be afraid of. You can’t demonstrate true business value without using some kind of measurement. It can be as simple as calculating a task completion rate (with a confidence interval). Take the example I mentioned earlier of an online study comparing two slightly different designs for an account-opening wizard. We were able to demonstrate that one design resulted in a statistically higher rate of task completion (opening an account) than the other—2.5% higher, to be exact (± 1.2%). That’s an example of a usability measurement that senior management will pay attention to.
  • Convert to business value. Many people assume that business value is always expressed in monetary terms, but that isn’t always the case. Find out what metrics matter to the project you’re involved in or to the senior management of the company. Many senior managers have “scorecards” or other sets of measurable objectives against which their performance is assessed (potentially impacting their compensation). Find out what some of those criteria are and see if you can relate your measures to them. For example, many senior managers have a customer satisfaction goal, as measured by a specific survey of randomly selected customers. Try to relate self-reported measures from your usability studies to these customer satisfaction ratings. Similarly, many managers have a prospect-to-customer conversion rate goal for their Website. Usability metrics like the task completion rate mentioned earlier for online account opening are relatively easy to relate to these goals (e.g., “With this design, we estimate that 2.5% more prospects who start the account-opening process will complete it”).
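The "task completion rate (with a confidence interval)" suggested in the first bullet can be computed with a Wilson score interval, which behaves better than the textbook formula at typical lab-test sample sizes. A minimal sketch with invented counts; z = 1.645 gives the 90% interval mentioned above:

```python
from math import sqrt

def wilson_interval(successes, n, z=1.645):  # z = 1.645 -> 90% confidence
    """Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical lab test: 7 of 8 participants completed the task
lo, hi = wilson_interval(7, 8)
```

With only 8 participants the interval is wide (roughly 0.59 to 0.97 here), which is itself worth reporting: it tells sponsors how much certainty the data actually supports.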

The bottom line is that you need to speak in terms that matter to the business sponsors or senior managers involved with a project. For more information about calculating the business value of usability work, see Bias and Mayhew (2005).

Tip #6: Keep up on Technology

It’s never been particularly easy to keep up with the rapid pace of technology change. But have you noticed that the pace of change slows down when the economy takes a downturn? When was the last time you read an article hyping the latest great new “Web 2.0” application? Probably not very recently. So this tip is actually easier to follow now. And employers still value people who keep up on the latest developments, especially any that may have real value to the company.

What kinds of technology changes should you try to keep up on? That depends on the type of work you do. For those of us who work in the Web or software arena, some of the developments you might want to keep up on include the following:

  • Prototyping tools. Many dedicated prototyping tools are available, and a variety of mainstream tools can also be used for prototyping, including PowerPoint, Visio, Photoshop, Illustrator, Fireworks, and Dreamweaver. It’s always helpful for a usability person to be able to represent their ideas in some kind of prototype, even if it’s just a static mockup. For brief reviews of sixteen prototyping tools, see Wilson (2008).

  • Web trends and techniques. This is admittedly a very general category. But if you don’t know a portal from a blog, a wiki from a Facebook app, or RSS from CSS, then you probably haven’t been keeping up. Do these kinds of trends really matter to a usability person? Some do and some don’t. How many people remember the Internet Appliances that were hyped back in the 1990s? (Yes, I actually did a usability test of one of them.) The problem is that you can’t always tell which trends are going to matter and which ones aren’t. And the main way that some of these trends end up mattering to a usability person is that the ones that really catch on end up shaping our users’ expectations. Five years ago, when we conducted usability tests of our Website, rarely would a participant turn to the site search feature. Now, it’s often their first choice. We call that the “Google factor.” In fact, I’ve lost track of how many times I’ve heard a usability test participant say something like “at this point I’d just Google it.” Maybe in a few years (or sooner!) we will be hearing participants say, “at this point I’d just Facebook it.”

Tip #7: Keep Tabs on Competitors

Just as important as keeping up on technology is keeping tabs on your competitors. I don’t mean spying on them, but making use of publicly available information. Your competitors operate in the same space that your company does, which means that you can learn from them—from their mistakes as well as their successes. In my job, our direct competitors are other financial services companies. But we have indirect competitors too, such as banks, credit unions, and insurance companies. Finally, anyone operating on the Web is at some level competing with the big guys: Amazon, Netflix, Facebook, Google, YouTube, Yahoo, Flickr, etc.

Keeping tabs on your competitors may be easier than you think. The following are some of the techniques that you might want to consider:

  • Industry reports. Several organizations, such as Forrester Research, publish regular reports analyzing Websites in various areas, including usability and customer satisfaction. Although some of these reports can be rather expensive, check to see if your company has a subscription that provides access to them. And some reports are free. For example, ForeSee Results freely publishes the results of their customer satisfaction surveys of Websites (using the American Customer Satisfaction Index, or ACSI, methodology). They issue a quarterly update of information about customer satisfaction with U.S. government Websites, and annual reports for eCommerce and online retail Websites.
  • News alerts. It’s amazingly easy (and free) to keep tabs on key competitors using news alerts. These can be set up on major news sites and aggregators, including Google and Yahoo. Just choose the keywords you want to use, and news articles matching those keywords will be emailed to you regularly. You can also use this technique to keep up on the usability field, but be aware that a keyword as general as usability will result in lots of articles.
  • Conferences. Local and national conferences or meetings can be a good opportunity to learn about your competitors. This is true for usability conferences like UPA, but also for conferences specific to your industry. Most of us are justifiably proud of the work we do and like to present it to our colleagues when we can. And the opportunities for informal interaction can be invaluable.
  • Competitors’ Websites. It’s rather obvious that you can learn from what your competitors are doing by watching their Websites. Some usability teams carry this a step further and do regular usability studies of their major competitors’ Websites in comparison to their own. In most cases it’s possible to design a competitive usability study where users are asked to do exactly the same tasks on each site. With appropriate usability metrics, this is a prime opportunity for getting data that provides clear business value, including directions for future work.

Tip #8: Maximize Your Visibility

Some people think that in tough economic times, with companies cutting back on staff, their best strategy to survive staff reductions is to lay low and hope nothing happens to them. I don’t think so. Layoffs are business decisions. Employees who are seen as adding business value to the company are more likely to survive, and thrive, than those who aren’t. If managers don’t know what you’re doing, you’re not likely to be seen as adding business value.

Some ways you might consider for maximizing your visibility include the following:

  • Invite senior managers to observe usability tests. Most managers want to know what their customers are doing, how well the company is meeting their needs, and how satisfied they are. There’s a potential drawback to this approach, however, because some senior managers may only attend one or two sessions, and they might get a biased view. That’s why you also want to invite them to a debriefing meeting or presentation summarizing the findings.
  • Develop a “lessons learned” database. In most usability tests, there are usually some things you learn that apply more broadly than just to that particular product. Try to abstract these more general findings from your tests and organize them in some manner. And then make them available to the rest of your company, probably via your intranet. A wiki might be a reasonable way to get started and then easily evolve it as you learn new things.
  • Keep others informed about what’s going on. Some of the things you might want to let others know about include upcoming usability tests, highlights of recent usability tests, new lessons learned, and pointers to relevant usability findings and news from the outside world. The appropriate mechanism for conveying this kind of information will largely depend on the culture of your company. Printed newsletters have fallen out of favor in many companies because of the perceived or actual cost. They’ve largely been replaced by email newsletters and posting on the intranet. At some companies (including ours), the intranet homepage can be personalized by each employee by adding or deleting “bricklets” of information. For example, we do a weekly update of a “Usability News” bricklet that over 1,500 employees have chosen to put on their homepage.

Tip #9: Compare Design Alternatives

You get much more out of a usability study if you can compare design alternatives. While this may not sound like a way to be more cost-effective, I think it is in the long run. Too many design teams get locked in to one basic design solution early on, then they just fine-tune that, perhaps through iterative usability testing. This is what Bill Buxton (2007) calls “getting the design right,” which he contrasts with “getting the right design.” By starting down one design path too early, you may very well miss other significantly different designs that are much better. But many design teams often will resist pursuing alternatives, perhaps because of the pressure of schedules and resources. So you might have to be the “evangelist” for comparing alternatives.

Some of the kinds of comparisons you might want to evangelize include the following:

  • Comparison to the old design. Rarely are we testing something totally new, for which there isn’t an old version or an old way of doing it. I think any significant new design project should start with a baseline usability study of the current design. Then new versions can be compared to it.
  • Comparison of significantly different designs. Following Buxton’s admonition to “get the right design” usually means convincing design teams to come up with radically different design solutions to the same problem. Low-fidelity versions (e.g., sketches) of these different versions can be used in usability studies to get feedback about what works for users and what doesn’t.
  • Comparison to competitors. As mentioned earlier, sometimes it can be very enlightening to do usability studies of your competitors’ Websites. This can also be true as part of the iterative design of a new product or Website. If you’re designing a new Website for building virtual widgets, try to compare your design to one or more competitors’ sites for building virtual widgets.
  • Comparison of subtly different designs. After some iterations of comparing quite different design alternatives you will eventually reach the point where you start iterating on one main design—fine tuning it. Here too it’s going to be more cost-effective to compare some of these alternatives simultaneously rather than sequentially. Don’t just assume that Method A for entering dates in a reservation system is going to be more effective than Method B. Try them both. (And then document which one worked better, under what conditions, so that future projects will know that.)

One of the best ways to compare alternative designs is through online usability studies. We’ve done online studies where we simultaneously compared as many as 10 different designs in a between-subjects design. With over 1,000 participants in these studies, we got plenty of data on each design to be able to make quite accurate comparisons between them. And we’ve done these studies in just a few days.

Tip #10: Don’t Re-invent the Wheel

Re-inventing the wheel (sometimes over and over again) is something that many large companies and organizations are quite good at. But it’s not something you want to do when times are tight and resources are scarce. A usability person, especially one who is a central resource and works across multiple projects or even business units, can play a critical role in preventing this kind of redundancy. In any reasonably large organization, it’s easy for the left hand to be unaware of exactly what the right hand is doing. The following are several ways that a usability person might be able to help:

  • Guide project teams toward usable designs. Usability people often have the opportunity to work on multiple projects, bringing broader knowledge and experience to each project that other members may not have. They can help guide projects toward designs or design approaches that they’ve seen work in similar situations.
  • Push back on designs that you know won’t work. Sometimes the most appropriate thing a usability person can do is stop a usability test from being done. If you can cite convincing evidence either from previous usability studies or from the literature that a particular design approach won’t work, then you can save the company the cost of doing a test. Of course this means you need to have that information from the previous studies or from the literature available to you.
  • Be a catalyst for developing a style guide. If your organization doesn’t have a style guide of some type, you should consider pushing for one. But it’s not something one usability person or usability team can do alone. Participation and buy-in from various parts of the organization are needed for a style guide to succeed. Don’t try to boil the ocean. Start small and let it evolve. The most successful style guides I’ve seen are ones that also had accompanying re-usable components that project teams could easily incorporate.


These tips certainly don’t apply to all usability people in all situations. And, of course, they don’t just apply in a down economy, although I believe their importance is heightened by the economy. You need to understand the context and culture of your company or organization (or the one you’re trying to get a job with) and act accordingly. I hope you will consider these tips as ideas to draw upon as you weather the current economic storm and further your career.


Bias, R. & Mayhew, D. (2005). Cost-Justifying Usability, Second Edition: An Update for the Internet Age. Burlington, MA: Morgan Kaufmann.

Buxton, B. (2007). Sketching User Experiences: Getting the Design Right and the Right Design. Burlington, MA: Morgan Kaufmann.

Duarte, N. (2008). slide:ology: The Art and Science of Creating Great Presentations. Sebastopol, CA: O’Reilly Media, Inc.

Dumas, J. & Loring, B. (2008). Moderating Usability Tests: Principles and Practices for Interacting. Burlington, MA: Morgan Kaufmann.

Jarrett, C. & Gaffney, G. (2008). Forms that Work: Designing Web Forms for Usability. Burlington, MA: Morgan Kaufmann.

Lindgaard, G. & Chattratichart, J. (2007). Usability Testing: What Have We Overlooked? CHI 2007 Proceedings, April 28-May 3, 2007, San Jose, CA, USA.

Molich, R. & Dumas, J. (2008). Comparative usability evaluation (CUE-4). Behaviour & Information Technology, 27(3).

Reynolds, G. (2008). Presentation Zen: Simple Ideas on Presentation Design and Delivery. Berkeley, CA: New Riders Press.

Rubin, J. & Chisnell, D. (2008). Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests (Second Edition). Hoboken, NJ: Wiley.

Tullis, T. & Albert, B. (2008). Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics. Burlington, MA: Morgan Kaufmann.

Wilson, R. (2008, November 7). Review: 16 User Interface Prototyping Tools. Retrieved February 1, 2009.
