Is Technology Becoming More Usable—or Less—and With What Consequences?

My name is Daryle Gardner-Bonneau. I am 56 years old, and technology seems to have left me behind. I never thought I would be standing on the wrong side of the digital divide, but that is where I seem to be, and I appear to be falling ever further behind as the chasm that divides the “savvy” from the “unsavvy” continues to deepen.

I received a cell phone for Christmas from my husband. Not just a new cell phone, but my FIRST cell phone…ever. As a sole proprietor of a human factors engineering consultancy, working out of a home office, I have little need for a cell phone, except when I travel. The first time I took my phone on a business trip, my husband shut it off before I left home, because you can’t use a cell phone in flight. When I got to my destination, I thought to call him, just to let him know I’d arrived safely. I couldn’t figure out how to turn on the phone. No problem. I’d actually brought the little user manual with me. Surely, it would contain the command to turn on the phone. But it didn’t.

Late last summer, I finally “took the plunge” into Facebook. Not because I had a sudden urge to socially network, but because my son was in Scotland performing as part of the Edinburgh Festival, and one of his friends in the traveling drama group was going to post pictures on Facebook during their adventure. So I signed up and entered a new “world,” one that was, and remains, totally alien to me. The interface is a complete mystery, and there’s no real help with it. I never did find those pictures. The attitude of users seems to be that one must immerse oneself in Facebook to learn its secrets—it’s some sort of rite of passage. Facebook is not going to welcome you with a usable interface.

Finally, in my early 50s, I developed high blood pressure, as many folks are destined to do as they reach middle age. Although it was easily controlled with medication, I wanted to have a home blood pressure monitor, just so I could check it occasionally. So I went to the local drug store. As someone with large arms, I needed the largest cuff size available, so I purchased accordingly. Not large enough. Returned the unit to the store. Eventually found a second one with a slightly larger cuff. Bought it, found it to have a permanent kink in one of the lines (poor quality control), and the cuff still wasn’t large enough. Had to return it, too. On the third try, I settled for a wrist monitor, supposedly less accurate (if you ask a physician), but the best I could do.

Ironically, at the same time this was happening, I was conducting, with colleagues at Western Michigan University and Borgess Visiting Nurse and Hospice (in Kalamazoo, Michigan), a telehealth study involving remote monitoring of vital signs of people with congestive heart failure (CHF) or Chronic Obstructive Pulmonary Disease (COPD). We were finding that about 1/3 of our patients, largely people in their 70s and 80s, could not reliably take and/or interpret their blood pressure with the standard cuff, either because the device was too difficult for them to use independently, or because they could not decipher the meaning of the numbers, despite training—sometimes repeated training.

I don’t self-identify as a usability professional, but as a human factors specialist, probably because I was trained specifically in that profession and dealt with large-scale hardware and software systems for much of the first half of my career. But in the second half, I have focused on the accessibility and usability of all sorts of systems and consumer products, particularly with reference to older adults and others with special needs (e.g., people with disabilities).

I am beginning to wonder just how effective both professions—human factors and usability (if they are, indeed, different)—have been in the larger scheme of things, when more and more of our population seems to struggle with technology. It’s as if the threshold has somehow been raised for what constitutes a “basic” usability requirement, and the minimum level of human competence assumed at the start of the design process is set so high that many potential users are simply left behind. As an example, have we really reached a point where cell phone designers can assume that their potential users have used a cell phone before and intuitively understand the underlying functionality and control/display relationships, obviating the need for any help or basic instruction in a user manual? Although my husband bought me the simplest phone he could find, it was packed with features I neither wanted nor needed—so packed that finding the basic functionality (otherwise known as a telephone) required a “treasure hunt.” There was a camera, a web browser, an instant messaging system, a voice mail system and, no doubt, other stuff I haven’t even found yet. But there was no help and no instructions for how to turn on the phone or navigate the user interface. I wanted a telephone—not a gadget to explore (like Mt. Everest), with so many lovely features, each of which, when it caught my attention, consumed more of my time (and put more money in the pockets of the service provider).

As an academician for a period of my career, I taught students about the dangers of “requirements creep” in system design and the problems associated with feature creep. Have those concepts become passé? It would seem so, at least once a company’s revenue generation takes precedence.

In working with older adults, I have watched them struggle to open packaging just to access the product inside. Ever observe someone (of any age, for that matter) attempt to open the cellophane wrapping on a CD or DVD? I have seen older adults struggle hopelessly to read the small type on a medicine bottle and to line up the notches on said child-proof bottle when the notches are merely engraved or embossed, with no visual (e.g., color) or even tactile cues to tell the user when the notches are aligned and the bottle will open. I have watched them forlornly hang up the telephone when they encounter an automated interactive voice response (IVR) system that “talks” so fast and gives them so little time to respond that they can’t complete the simple task they were trying to achieve with the call. With these sorts of observations hitting us square in the face on a daily basis, I would ask again: how successful have our professions been in the grand scheme of things, when designers can apparently ignore these basic elements of usability?

Japan has been learning a hard lesson about what happens when the needs of older adults are ignored as technology marches on. As the oldest country in the world, in terms of the proportion of its population over the age of 65, it is finding that the lack of basic accessibility and usability in the technology of daily life is hurting both its older adults and its younger adults, who must now expend extra effort and energy to care for elders whose independence and basic functioning have been compromised by utterly unusable technology. Japan is doing something about it; no country, for example, is currently more highly represented in international standardization efforts on accessibility and usability than Japan. In the U.S., we have yet to learn the lesson. Yes, we’re aging as a nation too, but our baby boomers have not quite reached the age at which they will make the U.S. an “older” nation like Japan already is. Will we be prepared when they get there (and it won’t be long)?

And how much time do consumers waste attempting to learn and use each new piece of technology that is already “yesterday’s news”? Recently, I was at a business meeting that ran late into a Friday evening and was to reconvene the next morning. The nine attendees were tired, and HUNGRY. We drove, in several cars, to a local restaurant, which was packed and had a 45-minute wait for a table. One of the locals suggested an alternative out-of-the-way place with good food, probably no waiting, about a mile away. We knew how to get there. But one of our party, brandishing a new iPhone, thought we should call ahead. So we stood around while she fiddled with the iPhone, trying to find the restaurant’s phone number (we didn’t have the spelling or the exact street address) and make the call. Grrr… Three of us simply couldn’t take the lure of technology anymore (or standing out in the cold); we hopped in the car, drove to the restaurant, and, indeed, were seated immediately. Is there a lesson here somewhere?

Back quite a while ago, when there was still a Sun Microsystems, the company banned the use of PowerPoint because its employees were spending two minutes on the content of their presentations and 16 hours using PowerPoint’s features to make their slides look pretty. (I probably exaggerate, but you get the point.) Is the technology really making us more productive, or is it simply providing a pleasant (in some cases) user experience at the expense of real productivity?

And finally, how often does technology make more work for us? I find that, having switched to Office 2007 earlier this year, I now must check three e-mail “inboxes” in Outlook instead of one, because so much of my important e-mail is automatically relegated to the spam folder or the junk folder. And yes, I know how to use the “safe sender” function, but it doesn’t always work; I’m not even sure it usually works. I used to have an e-mail service provider whose user interface was clunky and who provided almost none of these bells and whistles. BUT I got my mail with pretty close to 100% reliability.

Looking back at what I’ve written, I know I sound like Andy Rooney. Some readers will probably chuckle at some of this (though I’m not sure exactly why), and others will, no doubt, see me as an idiot. But the questions still stand. Are we being effective in the grander scheme of things? Or have we allowed technology to replace the “water cooler” as the mechanism by which we waste time, procrastinate, and push aside activities that result in “real” productivity? Does the trend toward labeling what we do as optimizing the “user experience” mean optimizing human productivity (as it certainly did when the human factors profession was started), or does it mean something else entirely, with productivity rarely being defined in any measurable terms, or assessed in any compelling way?

Personally, I would like to see human factors professionals and usability professionals have a much bigger impact than we seem to be having. Is it a pipe dream to expect that new users should be able to easily master the user interface of a common, everyday consumer product, and that healthy (and even unhealthy) older adults should be able to use the same technology that younger adults use? And finally, do consumers have a right to expect that the tools and technologies foisted on the general public will, in fact, save us time and increase our productivity? Or is that no longer an important goal for users or developers? Has productivity become, in fact, just a small part of the user experience, overwhelmed by the pleasure to be derived from exploring the latest new technology?

Recently, I was sitting at a table with a group of middle-aged human factors professionals, and we were divided on the issue of whether or not “our generation” would later be in the same position as many of today’s older adults (i.e., left behind by technology). Half were sure that we would be different; the rest of us were sure we’d be in the same boat. But isn’t the issue, really, why any user should ever be left behind if we are practicing good user interface design and usability testing, as we learned them, with their original goals in mind?
