Usability Testing with Real Data

Usability practitioners run the risk of misreading the results of usability evaluations: they may identify false positives when artificial data interferes with a participant’s experience of the product, or they may overlook real problems that artificial data fails to reveal. In this paper, we examine a strategy for incorporating users’ real data into usability evaluations. We consider the value and the challenges of this strategy based on the experiences of product teams at a consumer software company.

Tips for Usability Professionals in a Down Economy

Abstract The usability profession is experiencing the current economic downturn just like everyone else. This article offers ten tips for usability professionals trying to weather this economic storm:

1. Be More Efficient with Your Usability Tests
2. Get More Data with Less Work
3. Deepen Your Usability Skills
4. Broaden Your Other Skills
5. Demonstrate Business Value
6. Keep up on Technology
7. Keep Tabs on Competitors
8. Maximize Your Visibility
9. Compare Design Alternatives
10. Don’t Re-invent the Wheel

Specific suggestions and examples are provided for each tip.

Introduction The economy stinks. Regardless of where you live in the world, you’ve probably been impacted by the current economic downturn. Most of us know people who have lost their jobs or even their homes. This is probably the worst downturn …

Read more

Extremely Rapid Usability Testing

Abstract The trade show booth on the exhibit floor of a conference is traditionally used by company representatives to sell their products and services. However, the booth environment also creates an opportunity: it gives the development team easy access to many varied participants for usability testing. The question is whether usability testing methods can be adapted to work in such an environment. Extremely rapid usability testing (ERUT) does just this: we deploy a combination of questionnaires, interviews, storyboarding, co-discovery, and usability testing in a trade show booth environment. We illustrate ERUT in actual use during a busy photographic trade show. It proved effective for actively gathering real-world user feedback in a fast-paced environment where time is …

Read more

Determining What Individual SUS Scores Mean: Adding an Adjective Rating Scale

Figure 1. Our current version of the System Usability Scale (SUS), showing the minor modifications to the original Brooke instrument

Abstract The System Usability Scale (SUS) is an inexpensive, yet effective tool for assessing the usability of a product, including Web sites, cell phones, interactive voice response systems, TV applications, and more. It provides an easy-to-understand score from 0 (negative) to 100 (positive). While a 100-point scale is intuitive in many respects and allows for relative judgments, little information exists describing how the numeric score translates into an absolute judgment of usability. To help answer that question, a seven-point adjective-anchored Likert scale was added as an eleventh question to nearly 1,000 SUS surveys. Results show that the Likert scale scores correlate extremely well with the SUS scores (r=0.822). The addition of the adjective rating scale to the SUS may …

Read more
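The abstract above states only that the SUS produces a score from 0 (negative) to 100 (positive). For readers unfamiliar with how that range arises, the sketch below shows the conventional Brooke scoring rule: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the 0–40 total is scaled by 2.5. The function name, validation, and sample responses are illustrative assumptions, not taken from the article or its modified instrument.

```python
def sus_score(responses):
    """Compute a standard SUS score (0-100) from ten item responses (each 1-5).

    Odd-numbered items are positively worded, so they contribute (response - 1);
    even-numbered items are negatively worded, so they contribute (5 - response).
    The raw 0-40 total is multiplied by 2.5 to give the familiar 0-100 scale.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects exactly ten responses, each between 1 and 5")
    raw = sum((r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd-numbered item
              for i, r in enumerate(responses))
    return raw * 2.5

# Hypothetical example: a fairly positive respondent
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```

A single score computed this way supports the relative judgments the abstract mentions; the article’s adjective-anchored eleventh question is what ties such a number to an absolute judgment of usability.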

International Standards for Usability Should Be More Widely Used

Abstract Despite the authoritative nature of international standards for usability, many of them are not widely used. This paper explains both the benefits and some of the potential problems in using usability standards in areas including user interface design, usability assurance, software quality, and usability process improvement. Introduction Why aren’t international standards for usability more widely used? Over the last 20 years, industry and academic experts in human-computer interaction (HCI), ergonomics, and usability have worked together to produce a wide range of authoritative requirements and guidelines for designing, developing, and evaluating usable products. Some of the most important of these standards are discussed in this paper (a more complete list can be found in Bevan, 2005a). Different Types of International …

Read more

A Methodology for Measuring Usability Evaluation Skills Using the Constructivist Theory and the Second Life Virtual World

Abstract The skills of usability analysts are crucial to software success, so mastery of these skills is essential. This study presents a methodology for teaching and measuring the usability evaluation skills of graduate students using constructivist theory, diaries, checklists, and final reports. As part of the study, students spent 4 months as active participants in Second Life, an online virtual world. By the end, most students had acquired a measurable amount of usability evaluation skill, in that they could identify a number of heuristic problems with the Second Life software. A smaller number of students showed a greater degree of skill: they could not only identify a heuristic problem with the software but also explain why it was problematic. Practitioner’s Take Away …

Read more
