A Meta-Analytical Review of Empirical Mobile Usability Studies

Abstract

In this paper we present a usability evaluation framework adapted to the context of a mobile computing environment. Using this framework, we conducted a qualitative meta-analytical review of more than 100 empirical mobile usability studies. The results of the qualitative review include (a) the contextual factors studied; (b) the core and peripheral usability dimensions measured; and (c) key findings in the form of a research agenda for future mobile usability research, including that open and unstructured tasks are underutilized, that interaction effects between task interactivity and task complexity warrant further investigation, that increasing research on accessibility may improve the usability of products and services for often overlooked audiences, that studying novel technology and environmental factors will deepen contextual mobile usability knowledge, that understanding which hedonic factors impact the aesthetic appeal of a mobile device or service (and in turn its usability) is needed, and that there is high potential for neuroscience research in mobile usability. Numerous additional findings and takeaways for practitioners are also discussed.

Practitioner’s Take Away

The following are key points raised in this paper:

  • Consider the wide range of usability dimensions identified in this study when evaluating the usability of mobile interfaces and applications.
  • Design mobile interfaces and applications that fit particular contextual settings, while being flexible to accommodate others.
  • Focus beyond the interface—usability is an aggregate experience—when developing applications.
  • Study the human factors in HCI, and identify cognitive factors and physical abilities that future mobile devices could be designed to accommodate.
  • Consider the limitations of the laboratory and conduct research involving real (not simulated) and open tasks through field studies that will offer rich and relevant findings.
  • Explore the interplay among dynamic factors (e.g., urgency, noise) and their impact on mobile usability.

Introduction

Mobile devices are becoming increasingly popular, with the global subscriber base having already surpassed one billion. A recent forecast by the UMTS Forum (2005) estimated that the global number of subscribers will be between 1.7 and 2.6 billion for mobile voice and 600 to 800 million for mobile data. As consumers’ technology fears and adoption costs are reduced, mobile devices are approaching “mainstream” status around the developed world. Mobile devices offer increasing value to consumers through “anytime, anywhere, and customized” connectivity, communication, and data services.

Although progress has been made in terms of technological innovations, there are obvious limitations and challenges for mobile device interfaces due to the characteristics of mobile devices (i.e., small screen sizes, low display resolutions, non-traditional input methods, and navigational difficulties; Nah, Siau, & Sheng, 2005). Therefore, usability is a more pressing issue for mobile technology than for other areas, because many mobile applications remain difficult to use and lack flexibility and robustness.

Research Motivation and Objectives

Usability has long been the focus of discussion (Venkatesh, Ramesh, & Massey, 2003) and has been described by varying definitions (Nielsen, 1993; Nielsen & Levy, 1994; Shackel, 1991) in both academia and industry. Many of these definitions proposed that the central theme of usability is that people can employ a particular technology artifact with relative ease in order to achieve a particular goal within a specified context of use. The turn of this century marked an increased focus on mobile usability studies in the field of Human-Computer Interaction (HCI). Although a considerable volume of research on general usability exists, due to the novelty of mobile technology relatively few studies have focused on mobile usability. Even worse, only 41% of mobile usability papers are empirical1 in nature (Kjeldskov & Graham, 2003). Moreover, there is no qualitative study on the usability dimensions considered in such mobile studies. Thus, our research aims to fill this gap and, in doing so, to provide a roadmap for future mobile usability studies that will be of value to this relatively young research area. Specifically, this study addresses the following research question: What are the key formation and evaluation dimensions of usability in mobile technology usability studies?

To this end, this paper describes the qualitative review of more than 100 published empirical mobile usability studies. First, following a brief review of a usability evaluation framework in a non-mobile context, a framework of contextual usability for mobile computing2 is presented. Next, by using the proposed framework a qualitative review of empirical mobile usability studies is presented along with a discussion on the taxonomy used during the coding in this study. The results emerging from the comprehensive review of mobile usability studies are then presented, which include (a) the contextual factors studied, (b) the core usability dimensions defined and measured, (c) the peripheral usability dimensions explored, and (d) key findings in the form of a research agenda. Finally, this paper discusses the contributions and limitations of the research.

Literature Review and a Mobile Usability Framework

Usability studies have their roots as early as the 1970s in the work on “software psychology.” Over time, the focus of this body of research has shifted and most recently centered on the relevance of context of use for usability. The concept of context of use, as it relates to usability, emerged out of the work of several scholars (Bevan & Macleod, 1994; Shami, Leshed, & Klein, 2005; Thomas & Macredie, 2002) who attempted to identify additional variables that may impact usability. Varied situational contexts will result in emerging usability factors, making traditional approaches to usability evaluation inappropriate. The significance of this area emerges from its importance in yielding a reasonable analysis during a usability study (Maguire, 2001; Thimbleby, Cairns, & Jones, 2001). Furthermore, during the evolution of HCI mentioned above, the conceptualization of usability has varied extensively. The broad set of definitions and measurement models of usability complicates the generalizability of past studies at the level of the latent usability variable. Therefore, a usability study gains value when it is based on a standard definition and operationalization of usability. In the following section, we review a set of key approaches to evaluating usability as communicated in previous work.

Approaches to Usability Evaluation

Different approaches to usability evaluation have been proposed in different contexts such as websites (Agarwal & Venkatesh, 2002), digital libraries (Jeng, 2005), audiovisual consumer electronic products (Han, Yun, Kwahk, & Hong, 2001; Kwahk & Han, 2002), and many others. In the context of website usability, Agarwal and Venkatesh (2002) presented five categories (i.e., content, ease of use, promotion, made-for-the-medium, and emotion) and subcategories (i.e., relevance, media use, depth/breadth, structure, feedback, community, personalization, challenge, plot, etc.) of website usability evaluation components based on the Microsoft Usability Guidelines (MUG; see Keeker, 1997). They also discussed the development of an instrument that operationalizes the measurement of website usability. Recently, employing the MUG-based model, Venkatesh and Ramesh (2006) examined differences in the factors important in designing websites for stationary devices (e.g., personal computers) versus websites for wireless mobile devices (e.g., cell phones and PDAs). In the context of digital libraries, Jeng (2005) proposed an evaluation model of usability for digital libraries on the basis of the usability definition of ISO 9241-11 (ISO, 2004). The model included four usability evaluation components: effectiveness, efficiency, satisfaction, and learnability. The satisfaction of digital libraries was further evaluated in the areas of ease of use, organization of information, clear labeling, visual appearance, contents, and error corrections.

In the context of audiovisual consumer electronic products (e.g., VCRs, DVD players, etc.), Han et al. (2001; Kwahk & Han, 2002) suggested a usability evaluation framework that was similar to the subsequent work of Hassanein and Head (2003). The framework consisted of two layers: formation of usability and usability evaluation. The formation of usability layer had four contextual components (i.e., product, user, user activity, and environment) that were well accepted as the principal components of a human-computer interaction upon which good system design depends (Kwahk & Han, 2002; Shackel, 1991). The usability evaluation layer was organized into three groups of variables: design variables (i.e., product interface features), context variables (i.e., evaluation context), and dependent variables (i.e., measures of usability).

Interestingly, no usability evaluation framework yet exists for the context of a mobile computing environment. We believe this is a critical omission and an important topic warranting investigation. The next section looks at the key formative factors of usability as explored in contextual mobile usability studies. From this review, we propose a contextual usability framework for a mobile computing environment.

A Contextual Usability Framework for a Mobile Computing Environment

The work of several scholars (Bevan & Macleod, 1994; Shami et al., 2005; Thomas & Macredie, 2002) who attempted to identify additional variables that may impact usability, and subsequently adoption, led to the conceptual emergence of context of use (herein referred to as context) as it relates to usability, also referred to as contextual usability. Several frameworks encapsulating context have been proposed (Han et al., 2001; Lee & Benbasat, 2003; Sarker & Wells, 2003; Tarasewich, 2003; Yuan & Zheng, 2005). While there may be other usability frameworks that attempt to capture the essence of context, the models cited here provide a representative set of work in this area. From these we adapted the framework proposed by Han et al. (2001) because it offers considerable detail for each of the dimensions identified.

On the basis of the discussion on approaches to usability evaluation and the framework proposed by Han et al. (2001) and Kwahk and Han (2002), we propose a contextual usability framework for a mobile computing environment. The framework is depicted in Figure 1 and contains three elements. First, the outer circle shows the four contextual factors (i.e., User, Technology, Task/Activity, and Environment) described earlier as impacting usability. Second, the inner circle shows the key usability dimensions (i.e., Effectiveness, Efficiency, Satisfaction, Learnability, Flexibility, Attitude, Operability, etc.). Third, the box at the top of the contextual factors shows a list of consequences (i.e., improving systems integration; increasing adoption, retention, loyalty, and trust; etc.).

Compared to the framework proposed by Han et al. (2001) and Kwahk and Han (2002), there are several advantages of the suggested mobile usability framework. Although the previous frameworks proposed by Han et al. (2001) and Kwahk and Han (2002) are comprehensive, they are difficult to follow due to formation and evaluation dimensions being merged into one diagram. Thus, the suggested framework depicted in Figure 1 represents a simple yet direct way to identify and address the various contextual mobile usability dimensions. In addition, with its central focus on usability, it offers specific guidance on the implementation of any interface/interaction project along with potential outcomes.  

In addition, two modifications are introduced in terms of nomenclature for mobile contextual usability. First, “Technology” replaces “Product,” as this term helps conceive of the system that a user interacts with as a greater set of components, rather than simply the device or application itself. One example is found in mobile usability, where the wireless network is likely included alongside the mobile device (i.e., the product) when studying the usability of a mobile product or service. Because mobile usability is closely tied to mobile technology, which continually addresses the limitations of mobile interfaces and their applications, the technological factor of a mobile usability framework is an important and unique component that needs to be addressed. Second, “Task/Activity” replaces “Activity,” as the former term appears more commonly in the usability literature when describing the nature of users’ interaction with the technology. In addition, a list of consequences of usability was added to the framework as an output of usability evaluations.

These four variables (i.e., user, task/activity, environment, technology) were used to structure the qualitative review of previous empirical research3 that relates to the usability assessment of mobile applications and/or mobile devices. The benefit of using these variables for the literature review lies both in the structure they provide for the discussion that follows and in their ability to highlight areas lacking investigation.

1Empirical studies deal with empirical evidence that is derived by means of observation, experiment, or experience. In this study, we further classified empirical evidence as survey, interview, observation, and device/server logs in either a lab, the field, or both settings, as well as focus groups.  

2Even though we mainly focus on mobile usability, our adapted framework can be used for usability studies in general. 

3Since this study focuses on mobile usability, we only reviewed empirical studies on mobile usability.

Methods

Through systematic procedures of coding, recording, and computing, a meta-analysis is an organized way to summarize, integrate, and interpret selected sets of empirical studies (Glass, McGaw, & Smith, 1981; Lipsey & Wilson, 2000). The meta-analytical review for this study began with a search of the literature for empirical mobile usability studies published from the year 2000 through 2010. To this end, we used multiple databases to minimize the chance of omitting relevant studies. We continued by cross-referencing the references of the retrieved studies. Hand searching of appropriate journals included journals ranked among the top 10 in terms of perceived quality, as well as journals deemed relevant to the field of usability by the authors. Specific criteria were set for the selection of articles in this literature review: (a) a mobile technology was studied, (b) the study was empirical in nature (see footnote 1 of the Literature Review and a Mobile Usability Framework section), and (c) the study was published from 2000 through 2010. A conscious decision was made not to limit the reviewed literature to peer-reviewed journal articles, as doing so would significantly reduce the number of reviewed articles, given the relative infancy of the mobile usability field. The above procedure resulted in the identification of 100 empirical mobile usability studies. An earlier analysis of the first 45 studies retrieved was presented at a conference in 2006 (Coursaris & Kim, 2006); while most statistics were not reported in that publication, the same analysis was performed on both samples (i.e., studies up to 2006 vs. all 100 studies retrieved by the end of 2010) so as to observe scholarship trends in mobile usability between the two temporal reference points.
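
To make the coding step concrete, the following is a minimal sketch (in Python) of how a reviewed study could be recorded and screened against the three selection criteria above. The field names, the StudyRecord structure, and the sample record are illustrative assumptions, not the coding instrument actually used in this review.

```python
# Illustrative sketch only: field names and the sample record are assumptions,
# not the coding instrument actually used in this review.
from dataclasses import dataclass, field
from typing import List

@dataclass
class StudyRecord:
    citation: str                                                  # e.g., "Kaikkonen et al., 2005"
    year: int
    mobile_technology: bool                                        # criterion (a): a mobile technology was studied
    empirical: bool                                                # criterion (b): empirical in nature
    contextual_factors: List[str] = field(default_factory=list)   # user, technology, task, environment
    usability_measures: List[str] = field(default_factory=list)   # e.g., efficiency, errors
    methods: List[str] = field(default_factory=list)              # e.g., questionnaire, device data
    setting: str = "lab"                                           # lab, field, or both

def meets_inclusion_criteria(study: StudyRecord) -> bool:
    """Apply the three selection criteria: mobile technology studied,
    empirical in nature, and published from 2000 through 2010."""
    return study.mobile_technology and study.empirical and 2000 <= study.year <= 2010

# Example usage with a hypothetical coding of one study
example = StudyRecord(
    citation="Kaikkonen et al., 2005",
    year=2005,
    mobile_technology=True,
    empirical=True,
    contextual_factors=["task", "environment"],
    usability_measures=["efficiency", "errors", "learnability"],
    methods=["questionnaire", "observation"],
    setting="both",
)
print(meets_inclusion_criteria(example))  # True
```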

Results of Analysis

The literature review of empirical research on mobile usability appears in the Appendix. The review results are summarized in terms of the context defined in the study, the key usability dimensions measured, the research methodology used, the sample size, and the key findings.

The following sets of analyses pertain to the contextual factors studied among the 100 empirical mobile usability studies reviewed. The independent variables studied are described under each of the four contextual framework categories of Figure 1. Overall, empirical mobile usability studies have focused on investigating task characteristics (47%), followed by technology (46%), environment (14%), and user characteristics (14%; studies of single-nation populations are not included here, albeit one might consider them cultural studies depending on the frame of reference). (Note: the distribution exceeds 100% because multiple areas may have been studied in a single study.) Hence, there is a lack of empirical research on the relevance of user characteristics and the impact of the environment on mobile usability. For example, because on-screen keyboards are now standard in smartphone technology, it would be important to understand the optimal design of on-screen smartphone/mobile device keyboards according to target user groups and their characteristics.
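
As a minimal illustration of how such a distribution can exceed 100%, the following sketch tallies contextual factors over a small set of multi-coded study records. The records shown are hypothetical; only the factor labels are taken from the framework in Figure 1.

```python
# Hypothetical records for illustration; a real tally would iterate over all 100 coded studies.
from collections import Counter

coded_studies = [
    {"id": 1, "factors": ["task", "technology"]},
    {"id": 2, "factors": ["task"]},
    {"id": 3, "factors": ["user", "environment"]},
    {"id": 4, "factors": ["technology", "task"]},
]

n = len(coded_studies)
counts = Counter(f for study in coded_studies for f in set(study["factors"]))
for factor, count in counts.most_common():
    print(f"{factor}: {count}/{n} studies = {100 * count / n:.0f}%")
# Because studies 1, 3, and 4 are each coded for two factors,
# the printed percentages sum to well over 100%.
```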

In our earlier data set of 45 empirical studies published by 2006, by contrast, the distribution of research emphasis included research on task (56%), user (26%), technology (22%), and environment characteristics (7%). It is interesting to note that the proportion of studies that considered the environment doubled, and part of this increased emphasis is the result of a number of recent studies that compared and contrasted different usability testing methods and environments. Also, articles in this study’s larger sample appear to focus on tasks and related technologies far more frequently than on the other two dimensions, i.e., the user and the environment. Thus, it appears that the human needs to be brought back into the Human-Computer Interaction investigations that focus on mobile usability.

Task characteristics: Open and unstructured tasks, and interactivity and complexity understudied

The framework called for the identification of either closed or open tasks. Closed tasks were used most frequently (58%); examples include checking the list of received calls, finding a “Welcome Note” on a mobile website or a mobile app, enabling the vibrating alert, setting the phone to silent mode, and other tasks that have a predefined state or outcome. Open tasks were used in 35% of studies; examples include interacting with a network of services using verbal or visual information, keeping a pocket diary and filling in forms with each use of the Internet, logging in to websites and rewriting web diaries that were first written in a pocket diary, and other tasks that do not have a predefined outcome (i.e., the outcome is user-dependent). Nine percent of the reviewed studies did not report on tasks. Hence, there is a relative lack of research involving open and unstructured tasks. Also, the effects of task interactivity and task complexity on mobile usability were not investigated. With the increasingly important role of mobile devices in academia, an important question that arises is to what extent such devices can enhance a learner’s experience; exploring the potential interaction effect between task interactivity and task complexity can help inform the design and use of mobile technology, applications, and services in the classroom or in education environments at large.

This research design pattern is fairly consistent with our earlier analysis from 2006, where closed and open tasks were used in 69% and 22% of studies, respectively (with 9%, again, not reporting). Hence, the same research gap exists surrounding open and unstructured tasks, and factors such as interactivity, complexity, and others as they relate to mobile usability.

User characteristics: A narrow focus on studied user dimensions is prevalent

The most prominent user-related variable studied in empirical mobile usability research was (prior) experience, focusing on either novices (16%), experts (13%), or both (16%). Culture (3%) and job-specific roles (i.e., physicians, engineers; 8%) were also measured. Disability was explored only twice (i.e., 2%), examining the role of technology in assisting users with visual impairment and memory loss, respectively. No empirical mobile usability research studied the role of gender or age, and mobility was investigated in just 6% of studies. From these statistics it becomes apparent that research has been limited in both the range and the frequency of user characteristics studied. One example of such limitations is found in the myriad of disabilities that can negatively impact a mobile user’s experience, or even prohibit the use of certain services, and yet remain extremely underserved.

Comparing these statistics with our 2006 sample, there appears to be a small shift away from convenience samples of novices (from 25% to 16%) toward an examination of the impact of experience (from 9% to 16%) on the dependent constructs. Cross-cultural studies did not emerge significantly during this period, which is somewhat surprising considering the uptake of mobile devices around the world; by contrast, work-related contexts were investigated proportionately twice as much, while convenience samples of students were utilized at similar rates. Thus, the same need, and corresponding opportunities, for user-centered empirical mobile usability studies still exists.

Technology characteristics: Enabling technology beyond the interface is overlooked in mobile studies

The most popular technology-related variable investigated in these studies was the interface. These studies involved mobile phones (44%), PDAs (38%), Pocket PCs (5%), and various other interfaces (19%), including a desktop, a tablet PC, a Discman, and wearable or prototype devices. Again, these frequencies exceed 100% because a few studies involved multiple devices. The above distribution was quite similar to the 2006 sample. Hence, the lack of research as it relates to technology beyond the interface continues. For example, knowing whether the lack of support for Flash by iOS (the case at the time this paper was written) significantly impacts the usability experienced by mobile (iPhone/iPad) users, or to what extent network interoperability enhances a device’s mobile usability, would be of significant value, particularly among the practitioner community, while extending previously validated research models and theories in the mobile domain.

Environment characteristics: Area with greatest potential for future mobile usability research

Eleven percent of studies explored factors as they relate to the environment. This focus has shown an increase since the 2006 reported research incidence rate of 7%, partly due to an emphasis on usability evaluation methods becoming more relevant and scholars’ interest in comparing lab to field-based methods. Lighting and noise levels previously studied were joined by studies on sound, temperature, acceleration, humidity, as well as social aspects. Hence, physical, psychosocial, and other environment-specific factors present a significant opportunity for future research in mobile usability. For example, little is known about the impact of co-location (i.e., a mobile user being in physical proximity to other individuals) on the use of a mobile device (e.g., which types of applications are more likely to be used when alone vs. collocated with familiar or unfamiliar individuals). Such insight could further advance the contextual designs of mobile devices, whether through user-configured settings, sensors, or other means.

Methodology characteristics: A call for neuroscience research in mobile usability 

The final set of analyses pertains to the experiment setup and methodology. Laboratory studies were conducted most often (47%), followed by field studies (21%), while 10% of studies involved both. Hence, lab-tested mobile usability research was dominant, which was also the trend found in our 2006 sample. Next, multiple methodologies were identified in these studies, including questionnaires (61%); device data (33%); direct observation (7%); focus groups (7%); discussions (3%); voice mail and web mail diaries, as well as the Think Aloud Method (each at 2%); and single studies each leveraging one of the following: a usability test, expert evaluation, participatory design, card sorting, or task analysis. Frequencies of methodology used exceed 100% because many studies (45%) involved a multi-method approach. Specifically, device data were most commonly triangulated with questionnaire (13%), observation (5%), or interview data (4%). However, at only 13% of studies, there is limited research that contrasts self-reported data with device data, something that has remained unchanged from the results of our 2006 sample. Lastly, there were no studies involving neuroscience, an area that is of particular importance in mobile usability. With the associated cost of the technology needed to employ related methods (e.g., eye tracking and brain imaging), the area is prime for growth and novel contributions to the field. Knowledge dissemination outlets can both benefit from and support the fueling of such research through special calls for related works.

Analysis of Mobile Usability Measurement Dimensions

Because the focus of this study was on the usability dimensions measured in empirical mobile usability studies, we reorganized the reviewed studies in terms of the usability dimensions they measured. Table 1 presents a summary of these 31 measured usability dimensions.

Table 1. Frequency of Usability Measures Used in the Reviewed Studies

Original List of Measures | Collapsed List of Measures
MEASURES | SOURCES | COUNT | MEASURES | UNIQUE COUNT | %
Efficiency Barnard, Yi, Jacko, & Sears, 2005; Bohnenberger, Jameson, Kruger, & Butz, 2002; Brewster, 2002; Brewster & Murray, 2000; Bruijn, Spence, & Chong, 2002; Butts & Cockburn, 2002; Buyukkoten, Garcia-Molina, & Paepcke, 2001; Chin & Salomaa, 2009; Chittaro & Dal Cin, 2002; Chittaro & Dal Cin, 2001; Clarkson, Clawson, Lyons, & Starner, 2005; Costa, Silva, & Aparicio, 2007; Duda, Schiel, & Hess, 2002; Fitchett & Cockburn, 2009; Fithian, Iachello, Moghazy, Pousman, & Stasko, 2003; Goldstein, Alsio, & Werdenhoff, 2002; Gupta & Sharma, 2009; Huang, Chou, & Bias, 2006; James & Reischel, 2001; Jones, Buchanan, & Thimbleby, 2002; Kaikkonen, Kallio, Kekäläinen, Kankainen, & Cankar, 2005; Kim, Chan, & Gupta, 2007; Kjeldskov & Graham, 2003; Kjeldskov, Skov, & Stage, 2010; Koltringer & Grechenig, 2004; Langan-Fox, Platania-Phung, & Waycott, 2006; Liang, Huang, & Yeh, 2007; Lindroth, Nilsson, & Rasmussen, 2001; Massimi & Baecker, 2008; Nagata, 2003; Nielsen, Overgaard, Pedersen, Stage, & Stenild, 2006; Olmsted, 2004; Poupyrev, Maruyama, & Rekimoto, 2002; Pousttchi & Thurnher, 2006; Rodden, Milic-Frayling, Sommerer, & Blackwell, 2003; Ross & Blasch, 2002; Ryan & Gonsalves, 2005; Seth, Momaya, & Gupta, 2008; Shami et al., 2005; Sodnik, Dicke, Tomazic, & Billinghurst, 2008; Wigdor, & Balakrishnan, 2003 41 Efficiency 61 33
Errors Andon, 2004; Brewster & Murray, 2000; Butts & Cockburn 2002; Cheverst, Davies, Mitchell, Friday, & Efstratiou, 2000; Danesh, Inkpen, Lau, Shu, & Booth, 2001; Fitchett & Cockburn, 2009; Gupta & Sharma, 2009; Huang et al., 2006; James & Reischel, 2001; Jones, Buchanan, & Thimbleby, 2002; Juola & Voegele 2004; Kaikkonen, 2005; Kaikkonen et al., 2005; Kim, Kim, Lee, Chae, & Choi, 2002; Kjeldskov & Graham, 2003; Koltringer & Grechenig, 2004; Langan-Fox et al., 2006; Lehikoinen & Salminen, 2002; Lindroth et al., 2001; MacKenzie, Kober, Smith, Jones, & Skepner, 2001; Massimi & Baecker, 2008; Nagata, 2003; Palen & Salzman, 2002; Ross & Blasch, 2002; Ryan & Gonsalves, 2005; Waterson, Landay, & Matthews 2002; Wigdor & Balakrishnan, 2003 27 Effectiveness 49 27
Ease of Use Cheverst et al., 2000; Chong, Darmawan, Ooi, & Binshan, 2010; Cyr, Head, & Ivanov, 2006; Ebner, Stickel, Scerbakov, & Holzinger, 2009; Ervasti & Helaakoski, 2010; Fang, Chan, Brzezinski, & Xu, 2003; Fithian et al., 2003; Hinckley, Pierce, Sinclair, & Horvitz, 2000; Hsu, Lu, & Hsu, 2007; Jones, Buchanan, & Thimbleby, 2002; Kim et al., 2002; Kim et al., 2007; Kim et al., 2010; Li & Yeh, 2010; Licoppe & Heurtin, 2001; Mao, Srite, Thatcher, & Yaprak, 2005; Massey, Khatri, & Ramesh, 2005; Olmsted, 2004; Pagani, 2004; Palen & Salzman, 2002; Pousttchi & Thurnher, 2006; Qiu, Zhang, & Huang, 2004; Roto, Popescu, Koivisto, & Vartiainen, 2006; Ryan & Gonsalves, 2005; Wu & Wang, 2005; Xu, Liao, & Li, 2008 26 Satisfaction 18 10
Usefulness Bødker, Gimpel, & Hedman, 2009; Chong et al., 2010; Cyr et al., 2006; Ebner et al., 2009; Ervasti & Helaakoski, 2010; Fang et al. 2003; Fithian et al., 2003; Hsu et al., 2007; Hummel, Hess, & Grill, 2008; Kim et al., 2010; Li & Yeh, 2010; Mao et al., 2005; Pagani, 2004; Palen & Salzman, 2002; Pousttchi & Thurnher, 2006; Wu & Wang, 2005; Xu et al., 2008 17 Accessibility 15 8
Effectiveness Barnard et al., 2005; Bohnenberger et al., 2002; Brewster, 2002; Brewster & Murray, 2000; Chin & Salomaa, 2009; Costa et al., 2007; Duh, Tan, & Chen, 2006; Goldstein et al., 2002; Huang et al., 2006; Kleijnen, Ruyter, & Wetzels, 2007; Liang et al., 2007; Nielsen et al., 2006; Pousttchi & Thurnher, 2006; Ryan & Gonsalves, 2005; Shami et al., 2005; Sodnik et al., 2008 16 Learnability 8 4
Satisfaction Dahlberg & Öörni, 2007; Ebner et al., 2009; Huang et al., 2006; Hummel et al., 2008; Juola & Voegele, 2004; Kallinen, 2004; Kim et al., 2002; Kim et al., 2007; Kleijnen et al., 2007; Lindroth, 2001; Nielsen et al., 2006; Olmsted, 2004; Palen & Salzman, 2002; Ryan & Gonsalves, 2005; Shami et al., 2005 15 Workload 7 4
Accuracy Barnard et al., 2005; Burigat, Chittaro, & Gabrielli, 2008; Clarkson et al., 2005; Duh et al., 2006; Keeker, 1997; Koltringer & Grechenig, 2004; Olmsted, 2004; Thomas & Macredie, 2002; Wigdor & Balakrishnan, 2003; Wu & Wang, 2005 10 Enjoyment 4 2
Learnability Butts & Cockburn, 2002; Dahlberg & Öörni, 2007; Fithian et al., 2003; Kaikkonen et al., 2005; Lindroth, 2001; MacKenzie et al., 2001; Roto et al., 2006; Ryan & Gonsalves, 2005 8 Acceptability 3 2
Workload Barnard et al., 2005; Chan, Fang, & Brzezinski, 2002; Chin & Salomaa, 2009; Jones, Jones, Marsden, Patel, & Cockburn, 2005; Li & McQueen, 2008; Seth et al., 2008; Sodnik et al., 2008 7 Quality 3 2
Accessibility King & Mbogho, 2009; Mao et al., 2005; Pagani, 2004; Palen, Salzman & Youngs, 2001; Suzuki et al., 2009 6 Security 3 2
Reliability Andon, 2004; Barnard et al., 2005; Costa et al., 2007; Kleijnen et al., 2007; Lin, Goldman, Price, Sears, & Jacko, 2007; Wu & Wang, 2005 6 Aesthetics 4 2
Attitude Goldstein et al., 2002; Juola & Voegele 2004; Khalifa & Cheng, 2002; Palen & Salzman, 2002; Strom, 2001 5 Utility 2 1
Problems Observed Kaikkonen, 2005; Kaikkonen et al., 2005; Kjeldskov & Graham, 2003; Nielsen et al., 2006 4 Memorability 2 1
Enjoyment Cyr et al., 2006; Ebner et al., 2009; Hummel, 2008; Kim et al., 2010 4 Content 2 1
Acceptability Andon, 2004; Butts & Cockburn, 2002; Juola & Voegele 2004 3 Flexibility 1 1
Quality Barnard, Yi, Jacko, & Sears, 2007; Bødker et al., 2009; Kleijnen et al., 2007 3 Playfulness 1 1
Security Andon, 2004; Fang et al., 2003; Kim et al., 2007 3
Aesthetics Cyr et al., 2006; Li & Yeh, 2010; Wang, Zhong, Zhang, Lv, & Wang, 2009 3
Utility Duda et al., 2002; Hassanein & Head, 2003 2
Operability Chittaro & Dal Cin, 2002; Kaikkonen et al., 2005 2
Memorability Langan-Fox et al., 2006; Lindroth et al., 2001 2
Responsiveness Barnard et al., 2007; Kleijnen et al., 2007 2
Content Kim, Kim, & Lee, 2005; Koivumäki, Ristola, & Kesti, 2006 2
Attractiveness Lin et al., 2007 1
Flexibility Cheverst et al., 2000 1
Playfulness Fang et al., 2003 1
Technicality Hummel et al., 2008 1
Availability Pagani, 2004 1
Functionality Pagani, 2004 1
Interconnectivity Andon, 2004 1
Integrity Costa et al., 2007 1

A preliminary inspection of Table 1 shows that the constructs of efficiency, errors, ease of use, effectiveness, satisfaction, and learnability are most commonly measured in empirical mobile usability studies. All of these measures were defined, with slight variations, in the work of Han et al. (2001) on the classification of performance and image/impression dimensions. The measure of errors was defined by Nielsen (1993) as the “number of such actions made by users while performing some specified task” (p. 32). Han et al. (2001) addressed errors through two measures: (a) error prevention (i.e., the “ability to prevent the user from making mistakes and errors,” p. 147) and (b) effectiveness (i.e., the “accuracy and completeness with which specified users achieved specified goals,” p. 147). With respect to the reviewed literature, mobile usability studies measured the error rate associated with the system, as opposed to error prevention. Hence, we collapsed the errors, accuracy, and problems observed measures found in this literature review into effectiveness (effectiveness offering a broader definition and operationalization). This broader interpretation of effectiveness may be extended to encompass the extent to which a system achieves its intended objective, or simply put, its usefulness; hence, the latter may also be collapsed into effectiveness. Similarly, the second-order measure of efficiency often attempts to capture the first-order factor of ease of use. This is supported conceptually, because the “easier” a system is to use, the fewer resources are consumed during the task. Hence, ease of use may be collapsed into efficiency. Furthermore, Shackel defined attitude as the “level of user satisfaction with the system” (2009, p. 341), and Han et al. (2001) defined satisfaction as “the degree to which a product is giving contentment or making the user satisfied” (p. 147). Hence, attitude (as defined in these usability studies) may be collapsed into the single measure of satisfaction.

It should be noted that the frequency count for each collapsed criterion is based on unique counts of a particular publication (i.e., if errors and effectiveness were found in the same study, the publication would be counted only once toward the unique count). In addition, accessibility was studied in most of the cited studies as the degree to which a system was accessible (i.e., reachable and available), which is distinct from accessibility in the context of vulnerable/disabled users. Hence, other measures found in the reviewed studies that speak to this concept, namely reliability, responsiveness, availability, functionality, and interconnectivity, can be collapsed under accessibility. Lastly, attractiveness speaks to the broader concept of aesthetics, and integrity is a security dimension, so these measures can be grouped accordingly.
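
To illustrate the collapsing logic and the unique-count rule just described, the following sketch maps a subset of the original measures to their collapsed categories and counts each publication at most once per category. The mapping covers only part of Table 1, and the publication codings are hypothetical.

```python
# Partial mapping from original measures to collapsed categories (see Table 1);
# the publication codings below are hypothetical and for illustration only.
COLLAPSE_MAP = {
    "errors": "effectiveness", "accuracy": "effectiveness",
    "problems observed": "effectiveness", "usefulness": "effectiveness",
    "effectiveness": "effectiveness",
    "ease of use": "efficiency", "efficiency": "efficiency",
    "attitude": "satisfaction", "satisfaction": "satisfaction",
    "reliability": "accessibility", "responsiveness": "accessibility",
    "availability": "accessibility", "functionality": "accessibility",
    "interconnectivity": "accessibility", "accessibility": "accessibility",
    "attractiveness": "aesthetics", "aesthetics": "aesthetics",
    "integrity": "security", "security": "security",
}

publications = {
    "Study A": ["errors", "effectiveness", "ease of use"],
    "Study B": ["efficiency", "satisfaction"],
    "Study C": ["accuracy", "attitude"],
}

unique_counts = {}
for pub, measures in publications.items():
    for measure in measures:
        category = COLLAPSE_MAP[measure]
        unique_counts.setdefault(category, set()).add(pub)  # a set counts each publication once

for category, pubs in sorted(unique_counts.items()):
    # Study A counts once toward effectiveness even though it reported both errors and effectiveness.
    print(category, len(pubs))
```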

Upon review of the measures’ relative appearance in the reviewed literature, the three core constructs for the measurement of usability appear to be the following:

  • Efficiency: Degree to which the product is enabling the tasks to be performed in a quick, effective, and economical manner, or is hindering performance.
  • Effectiveness: Accuracy and completeness with which specified users achieved specified goals in a particular environment.
  • Satisfaction: The degree to which a product is giving contentment or making the user satisfied.

The above findings are arguably neither surprising nor favorable for the field, as these factors have been set as the standard for more than a decade despite significant advances in technology and in use settings and scenarios; the usability scholar’s lens has gone unchanged. However, the growing popularity of games and similarly engaging, hedonically oriented experiences in the use of mobile devices might suggest that both the factors studied and the definitions set forth for mobile usability may be revisited before too long.

The remaining measures identified in Table 1 reflect the peripheral dimensions measured in empirical mobile usability studies cited in the Appendix, including Accessibility (8%), Learnability (4%), Workload (4%), Aesthetics (2%), Enjoyment (2%), Acceptability (2%), Quality (2%), Security (2%), Utility (1%), Playfulness (1%), Memorability (1%), Content (1%), and Flexibility (1%).

Recommendations and Conclusion

To the best of our knowledge, this research is the first analysis of the contextual factors and measurement dimensions investigated in the empirical body of knowledge of mobile usability studies published to date, leveraging a proposed qualitative review framework for mobile usability. The results described earlier enhance our understanding of mobile usability research considerations and serve as the basis for a research agenda in this field. This domain would benefit from further emphasis on the complexity of contextual usability and from answering research questions such as those within and/or between each of the following areas:

  • Technology: Beyond the interface—how do mobile technology components beyond the interface (e.g., network connectivity reliability, memory) impact the usability of mobile devices?

  • User: Study the human factors in HCI—what other user characteristics (e.g., cognitive aptitude, mental models, physical ability) should be considered when studying mobile usability? More research is also needed on variables previously investigated (e.g., experience and efficacy).
  • Task/Activity: Real world–real tasks—how do task complexity and task interactivity impact mobile usability? Considering these two dimensions and engaging in research involving open tasks in a field setting approximates real-world situations, and the results improve in their external generalizability.
  • Environment: Usable anytime, anywhere—how do conditions in the environment impact mobile usability? A higher rate of field studies and/or complex lab studies will enhance our understanding of such dynamic factors (e.g., urgency, wind) and their effects on mobile technology.

The results of the meta-analytical review of empirical research on mobile usability identified 31 usability-related measures. The main usability measures studied in mobile usability studies are efficiency, effectiveness, and satisfaction, which are consistent with the standard dimensions of other general usability studies (Brereton, 2005; Hornbaek & Law, 2007; Nielsen & Levy, 1994). However, these usability dimensions are more important in mobile applications and technologies because of the inherent characteristics of mobile devices, including small screens, low display resolutions, limited input methods, difficult-to-use interfaces, and many others. Moreover, the three core dimensions of mobile usability measurement (i.e., effectiveness, efficiency, and satisfaction) reflect the ISO 9241 standard, making a strong case for its use in related future studies. The use of this standard would allow for consistency with other studies in the measurement of general usability (Brereton, 2005; Hornbaek & Law, 2007; Nielsen & Levy, 1994).

Beyond the benefit of a standard view of usability, three key findings emerge from the above data. First, any single peripheral usability dimension was measured in fewer than 8% of the studies reviewed. Second, accessibility, in the context of vulnerable populations/disabled users, appears to be one of the most underserved research areas having been studied only twice in this set of 100 mobile usability studies reviewed. This observation may come as a surprise, given the growing popularity of accessibility research in less conventional (e.g., non-IS, non-peer-reviewed) publication outlets, and the increasing levels of legislative support and community interest. Further exploration of this construct, including its role with the remaining usability dimensions, is warranted. Third, aesthetic/hedonic constructs were studied in just 2% of empirical mobile usability studies, even though there is support for the effect of such factors on performance and satisfaction (Coursaris, Swierenga, & Watrall, 2008). These findings in turn call for a critical review of the current operationalization of usability as several dimensions are not captured in the international standard defined by ISO 9241 in 1998.

After more than a decade’s worth of research that centers on the standard usability measures articulated by ISO in 1998, our understanding of their inter-relationships is mature. The domain could arguably benefit by extending the defined core by considering a subset of the peripheral dimensions so as to allow for an even deeper understanding of mobile usability. Adding to the earlier research agenda, the following measurement considerations are outlined for future research: (a) accessibility—increasing research in this area may improve the usability of products and services for often overlooked audiences; (b) hedonics—which factors impact the aesthetic appeal of a mobile device or service, and how do they impact usability?; and (c) usability—what are the relationships between various usability measurement dimensions? Should usability be redefined to reflect additional utilitarian and/or hedonic dimensions?

This study offers several contributions and implications for both researchers and practitioners. On the academic level, first, this breakthrough meta-analytical research is the first attempt, to our knowledge, to offer a comprehensive view of usability dimensions found in empirical mobile usability studies. Second, the identification of a common measurement metric with a review framework would support a future quantitative analysis of mobile usability studies at the construct level (i.e., a meta-analysis of measured usability dimensions in a mobile setting). In turn, this could offer a unified view of empirical mobile usability studies. We hope that the framework and the findings of this study will be used as the basis for continuing research that aims to enhance our understanding of mobile usability considerations and measurement.

This study also provides a couple of important implications for practitioners. First, this study summarizes the existing mobile usability research findings and organizes them based on a set of usability contextual factors and measurement dimensions using a comprehensive mobile usability framework. The results of this study encourage practitioners to pay more attention to the key contextual factors and mobile usability measurement dimensions when they develop their mobile products and/or services. Second, because the current mobile usability evaluation process is more of a “fuzzy art” than a structured practice, the mobile usability framework identified by this study can be used to bring a more structured approach to the usability evaluation of mobile products and/or services.

As with all research, this study comes with the caveat of the following limitations. First, even though the authors searched intensively for all possible research articles of empirical mobile usability studies, it may be the case that relevant articles were omitted in this process. Second, even though the meta-analysis of this study followed the procedures suggested by Glass et al. (1981), Lipsey and Wilson (2000), and Rosenthal (1991), some subjective decisions were made when two mobile usability dimensions were collapsed into a single measure. Although arguments were given, this could be a limitation of a subset of the reported results.

Beyond the benefit of a standard view of usability, an important opportunity for future research arises from the data in Table 1: as noted above, accessibility remains one of the most underserved research areas, and further exploration of this construct, including its relationship with the remaining usability dimensions, is warranted.

In closing, it is hoped that the above findings and the suggested research agenda will stimulate further research in this domain, the results of which will not only expand the scholarly body of knowledge but also have direct and tangible benefits for everyday users of mobile technology.

References

  • Agarwal, R., & Venkatesh, V. (2002). Assessing a firm’s web presence: A heuristic evaluation procedure for the measurement of usability. Information Systems Research, 13(2), 168-186.
  • Andon, C. (2004). Usability analysis of wireless tablet computing in an academic emergency department.  Master of Biomedical Informatics, Oregon Health & Science University, Portland, Oregon. Retrieved from http://www.ohsu.edu/library/newbooklists/newbooks200406.shtml  
  • Barnard, L., Yi, J. S., Jacko, J. A., & Sears, A. (2005). An empirical comparison of use-in-motion evaluation scenarios for mobile computing devices. International Journal of Human-Computer Studies, 62(4), 487-520.
  • Barnard, L., Yi, J. S., Jacko, J. A., & Sears, A. (2007). Capturing the effects of context on human performance in mobile computing systems. Personal & Ubiquitous Computing, 11(2), 81-96.
  • Bevan, N., & Macleod, M. (1994). Usability measurement in context. Behavior and Information Technology, 13, 132-145.
  • Bødker, M., Gimpel, G., & Hedman, J. (2009). Smart phones and their substitutes: Task-medium fit and business models. Paper presented at the Eighth International Conference on Mobile Business, Dalian, Liaoning, China.
  • Bohnenberger, T., Jameson, A., Kruger, A., & Butz, A. (2002). Location-aware shopping assistance: Evaluation of a decision-theoretic approach. Paper presented at the Mobile HCI 2002, Pisa, Italy.
  • Brereton, E. (2005). Don’t neglect usability in the total cost of ownership. Communications of the ACM, 47(7), 10-11.
  • Brewster, S. (2002). Overcoming the lack of screen space on mobile computers. Personal and Ubiquitous Computing, 6, 188-205.
  • Brewster, S., & Murray, R. (2000). Presenting dynamic information on mobile computers. Personal and Ubiquitous Computing, 4, 209-212.
  • Bruijn, O. D., Spence, R., & Chong, M. Y. (2002). RSVP browser: Web browsing on small screen devices. Personal and Ubiquitous Computing, 6(4), 245-252.
  • Burigat, S., Chittaro, L., & Gabrielli, S. (2008). Navigation techniques for small-screen devices: An evaluation on maps and web pages. International Journal of Human-Computer Studies, 66(2), 78-97.
  • Butts, L., & Cockburn, A. (2002). An evaluation of mobile phone text input methods. ACM International Conference Proceeding Series, 20, 55-59.
  • Buyukkoten, O., Garcia-Molina, H., & Paepcke, A. (2001). Seeing the whole in parts: Text summarization for web browsing on handheld devices. Paper presented at the Intl. World Wide Web Conf.
  • Chan, S. S., Fang, X., & Brzezinski, J. (2002). Usability for mobile commerce across multiple form factors. Journal of Electronic Commerce Research, 3(3), 187-199.
  • Cheverst, K., Davies, N., Mitchell, K., Friday, A., & Efstratiou, C. (2000). Developing a context-aware electronic tourist guide: Some issues and experiences. In Proceedings of CHI 2000, The Hague, Netherlands.
  • Chin, A., & Salomaa, J. P. (2009). A user study of mobile web services and applications from the 2008 Beijing Olympics. Paper presented at the 20th ACM conference on Hypertext and hypermedia, Torino, Italy.
  • Chittaro, L., & Dal Cin, P. (2002). Evaluating interface design choices on WAP phones: Navigation and selection. Personal and Ubiquitous Computing, 6(4), 237-244.
  • Chittaro, L., & Dal Cin, P. (2001). Evaluating interface design choices on WAP phones: Single-choice list Selection and navigation among cards. Paper presented at the Mobile HCI 2001, Lille, France.
  • Chong, A. Y., Darmawan, N., Ooi, K. B., & Binshan, L. (2010). Adoption of 3G services among Malaysian consumers: An empirical analysis. International Journal of Mobile Communications, 8(2), 129-149.
  • Clarkson, E., Clawson, J., Lyons, K., & Starner, T. (2005). An empirical study of typing rates on mini-qwerty keyboards. Paper presented at the Conference on Human Factors in Computing Systems, Portland, Oregon, USA.
  • Costa, C. J., Silva, J. P., & Aparicio, M. (2007). Evaluating web usability using small display devices. Paper presented at the 25th Annual ACM International Conference on Design of Communication, El Paso, TX.
  • Coursaris, C., Swierenga, S., & Watrall, E. A. (2008). An empirical investigation of color temperature and gender effects on web aesthetics. Journal of Usability Studies, 3(3), 103-117.
  • Coursaris, C. K., & Kim, D. J. (2006). A qualitative review of empirical mobile usability studies. Paper presented at the 2006 Americas Conference on Information Systems (AMCIS), Acapulco, Mexico.
  • Cyr, D., Head, M., & Ivanov, A. (2006). Design aesthetics leading to m-loyalty in mobile commerce. Information & Management, 43(8), 950-963.
  • Dahlberg, T., & Öörni, A. (2007). Understanding changes in consumer payment habits – Do mobile payments and electronic invoices attract consumers? Paper presented at the HICSS 2007, 40th Annual Hawaii International Conference on  System Sciences, Waikoloa, HI.
  • Danesh, A., Inkpen, K., Lau, F., Shu, K., & Booth, K. (2001). Geney: Designing a collaborative activity for the palm handheld computer. Paper presented at the CHI2001, Seattle, WA, USA.
  • Duda, S., Schiel, M., & Hess, J. M. (2002). Mobile usability. Usability—Nutzerfreundliches web-design (pp. 173-199). Berlin: Springer-Verlag.
  • Duh, H. B.-L., Tan, G. C. B., & Chen, V. H.-h. (2006). Usability evaluation for mobile device: A comparison of laboratory and field tests. Paper presented at the 8th Conference on Human-Computer Interaction with Mobile Devices and Services, Stockholm, Sweden.
  • Ebner, M., Stickel, C., Scerbakov, N., & Holzinger, A. (2009). A study on the compatibility of ubiquitous learning (u-Learning) systems at university level. Universal Access in Human-Computer Interaction. Applications and Services (pp. 34-43). San Diego, CA, USA.
  • Ervasti, M., & Helaakoski, H. (2010). Case study of application-based mobile service acceptance and development in Finland. International Journal of Information Technology and Management 9(3), 243-259.
  • Fang, X., Chan, S., Brzezinski, J., & Xu, S. (2003). A study of task characteristics and user intention to use handheld devices for mobile commerce. Paper presented at the 2nd HCI in MIS Research Workshop.
  • Fitchett, S., & Cockburn, A. (2009). Evaluating reading and analysis tasks on mobile devices: A case study of tilt and flick scrolling. Paper presented at the 21st Annual Conference of the Australian Computer-Human Interaction, Melbourne, Australia.
  • Fithian, R., Iachello, G., Moghazy, J., Pousman, Z., & Stasko, J. (2003). The design and evaluation of a mobile location-aware handheld event planner. Paper presented at the 5th International Symposium on Human-Computer Interaction with Mobile Devices and Services, Mobile HCI 2003, Udine, Italy.
  • Glass, G., McGaw, B., & Smith, M. (1981). Meta-analysis in social research. Sage Publications.
  • Goldstein, M., Alsio, G., & Werdenhoff, J. (2002). The media equation does not always apply: People are not polite towards small computers. Personal and Ubiquitous Computing, 6, 87-96.
  • Gupta, D. D., & Sharma, A. (2009). Customer loyalty and approach of service providers: An empirical study of mobile airtime service industry in India. Services Marketing Quarterly, 30(4), 342-364.
  • Han, S. H., Yun, M. H., Kwahk, J., & Hong, S. W. (2001). Usability of consumer electronic products. International Journal of Industrial Ergonomics, 28(3-4), 143-151.
  • Hassanein, K., & Head, M. (2003). The impact of product type on website adoption constructs. Paper presented at the Sixth International Conference on Electronic Commerce Research (ICECR6), Dallas, Texas.
  • Heyer, C., Brereton, M., & Viller, S. (2008). Cross-channel mobile social software: An empirical study. Paper presented at the Conference on Human Factors in Computing Systems, Florence, Italy.
  • Hinckley, K., Pierce, J., Sinclair, M., & Horvitz, E. (2000). Sensing techniques for mobile interaction. Paper presented at the UIST2000, San Diego, CA, USA.
  • Hornbaek, K., & Law, E. L.-C. (2007). Meta-analysis of correlations among usability measures. Paper presented at the CHI 2007 San Jose, California, USA.
  • Hsu, C. L., Lu, H. P., & Hsu, H. H. (2007). Adoption of the mobile Internet: An empirical study of multimedia message service (MMS). Omega, 35(3), 715-726.
  • Huang, S.-C., Chou, I.-F., & Bias, R. G. (2006). Empirical evaluation of a popular cellular phone’s menu system: Theory meets practice. Journal of Usability Studies 1(2), 91-108.
  • Hummel, K. A., Hess, A., & Grill, T. (2008). Environmental context sensing for usability evaluation in mobile HCI by means of small wireless sensor networks. Paper presented at the 6th International Conference on Advances in Mobile Computing and Multimedia, Linz, Austria.
  • ISO. (2004). Ergonomic requirements for office work with visual display terminals. Part 11: Guidance on usability. London: International Standards Organization.
  • James, C. L., & Reischel, K. M. (2001). Text input for mobile devices: Comparing model prediction to actual performance. CHI 2001, Seattle, WA, USA.
  • Jeng, J. (2005). What is usability in the context of the digital library and how can it be measured? Information Technology and Libraries, 24(2), 47-56.
  • Jones, M., Buchanan, G., & Thimbleby, H. (2002). Sorting out searching on small screen devices. Paper presented at the Mobile HCI 2002, Pisa, Italy.
  • Jones, S., Jones, M., Marsden, G., Patel, D., & Cockburn, A. (2005). An evaluation of integrated zooming and scrolling on small screens. International Journal of Human-Computer Studies, 63(3), 271-303.
  • Juola, J., & Voegele, D. (2004). First time usability testing for Bluetooth-enabled devices. The University of Kansas.
  • Kaasinen, E. (2003). User needs for location-aware mobile services. Personal and Ubiquitous Computing, 7(1), 70-79.
  • Kaikkonen, A. (2005). Usability problems in today’s mobile Internet portals. Paper presented at the 2nd International Conference on Mobile Technology, Applications and Systems.
  • Kaikkonen, A., Kallio, T., Kekäläinen, A., Kankainen, A., & Cankar, A. (2005). Usability testing of mobile applications: A comparison between laboratory and field testing. Journal of Usability Studies, 1(1), 4-16.
  • Kallinen, K. (2004). The effects of background music on using a pocket computer in a cafeteria: Immersion, emotional responses, and social richness of medium. Paper presented at the Conference on Human Factors in Computing Systems, Vienna, Austria.
  • Kargin, B., Basoglu, N., & Daim, T. (2009). Factors affecting the adoption of mobile services. International Journal of Services Sciences 2(1), 29-52.
  • Keeker, K. (1997). Improving web-site usability and appeal: Guidelines compiled by MSN usability research. Retrieved from http://msdn.microsoft.com/en-us/library/cc889361(v=office.11).aspx 
  • Khalifa, M., & Cheng, S. (2002). Adoption of mobile commerce: Role of exposure. Proceedings of the 35th Hawaii International Conference on System Sciences.
  • Kim, H., Kim, J., & Lee, Y. (2005). An empirical study of use of contexts in the mobile Internet: Focusing on the usability of information architecture. Information Systems Frontiers, 7(2), 175-186.
  • Kim, H., Kim, J., Lee, Y., Chae, M., & Choi, Y. (2002). An empirical study of the use contexts and usability problems in mobile Internet. Paper presented at the 35th Hawaii International Conference on System Sciences.
  • Kim, H. W., Chan, H. C., & Gupta, S. (2007). Value-based adoption of mobile Internet: An empirical investigation. Decision Support Systems, 43(1), 111-126.
  • Kim, S., Lee, I., Lee, K., Jung, S., Park, J., Kim, Y. B., et al. (2010). Mobile web 2.0 with multi-display buttons. Communications of the ACM, 53(1), 136-141.
  • King, S. O., & Mbogho, A. (2009). Evaluating the usability and suitability of mobile tagging media in educational settings in a developing country. Paper presented at the IADIS International Conference Mobile Learning 2009, Barcelona, Spain.
  • Kjeldskov, J., & Graham, C. (2003). A review of mobile HCI research methods. Paper presented at the The 5th International Mobile HCI 2003 conference, Udine, Italy.
  • Kjeldskov, J., Skov, M. B., & Stage, J. (2010). A longitudinal study of usability in health care—Does time heal? International Journal of Medical Informatics, 79(6), 135-143.
  • Kleijnen, M., Ruyter, K., & Wetzels, M. (2007). An assessment of value creation in mobile service delivery and the moderating role of time consciousness. Journal of Retailing, 83(1), 33-46.
  • Kofod-Petersen, A., Gransæther, P. A., & Krogstie, J. (2010). An empirical investigation of attitude towards location-aware social network service. International Journal of Mobile Communications, 8(1), 53-70.
  • Koivumäki, T., Ristola, A., & Kesti, M. (2006). Predicting consumer acceptance in mobile services: Empirical evidence from an experimental end user environment. International Journal of Mobile Communications, 4(4), 418-435.
  • Költringer, T., & Grechenig, T. (2004). Comparing the immediate usability of Graffiti 2 and Virtual Keyboard. Paper presented at the Conference on Human Factors in Computing Systems, Vienna, Austria.
  • Kowatsch, T., Maass, W., & Fleisch, E. (2009). The use of free and paid digital product reviews on mobile devices in in-store purchase situations. Paper presented at the The 4th Mediterranean Conference on Information Systems, Athens, Greece.
  • Kurniawan, S. (in press). Older people and mobile phones: A multi-method investigation. International Journal of Human-Computer Studies.
  • Kwahk, J., & Han, S. H. (2002). A methodology for evaluating the usability of audiovisual consumer electronic products. Applied Ergonomics, 33, 419-431.
  • Langan-Fox, J., Platania-Phung, C., & Waycott, J. (2006). Effects of advance organizers, mental models and abilities on task and recall performance using a mobile phone network. Applied Cognitive Psychology, 20(9), 1143-1165.
  • Lee, C. C., Cheng, H. K., & Cheng, H. H. (2007). An empirical study of mobile commerce in insurance industry: Task-technology fit and individual differences. Decision Support Systems, 43(1), 95-110.
  • Lee, Y. E., & Benbasat, I. (2003). A framework for the study of customer interface design for mobile commerce. International Journal of Electronic Commerce, 46(12), 48-52.
  • Lehikoinen, J., & Salminen, I. (2002). An empirical and theoretical evaluation of BinScroll: A rapid selection technique for alphanumeric lists. Personal and Ubiquitous Computing, 6, 141-150.
  • Li, W., & McQueen, R. J. (2008). Barriers to mobile commerce adoption: An analysis framework for a country-level perspective. International Journal of Mobile Communications, 6(2), 231-257.
  • Li, Y. M., & Yeh, Y. S. (2010). Increasing trust in mobile commerce through design aesthetics. Computers in Human Behavior, 26(4), 673-684.
  • Liang, T. P., Huang, C. W., & Yeh, Y. H. (2007). Adoption of mobile technology in business: A fit-viability model. Industrial Management & Data Systems, 107(8), 1154-1169.
  • Licoppe, C., & Heurtin, J. P. (2001). Managing one’s availability to telephone communication through mobile phones: A French case study of the development of dynamics of mobile phone use. Personal and Ubiquitous Computing, 5, 99-108.
  • Lin, M., Goldman, R., Price, K. J., Sears, A., & Jacko, J. (2007). How do people tap when walking? An empirical investigation of nomadic data entry. International Journal of Human-Computer Studies, 65(9), 759-769.
  • Lindroth, T., Nilsson, S., & Rasmussen, P. (2001). Mobile usability—rigour meets relevance when usability goes mobile. Paper presented at the IRIS24, Ulvik, Norway.
  • Ling, C., Hwang, W., & Salvendy, G. (2006). Diversified users’ satisfaction with advanced mobile phone features. Universal Access in the Information Society, 5(2), 239-249.
  • Ling, R. (2001). We release them little by little: Maturation and gender identity as seen in the use of mobile telephony. Personal and Ubiquitous Computing, 5, 123-136.
  • Lipsey, M., & Wilson, D. (2000). Practical meta-analysis. Thousand Oaks, CA: Sage Publications.
  • MacKenzie, I. S., Kober, H., Smith, D., Jones, T., & Skepner, E. (2001). LetterWise: Prefix-based disambiguation for mobile text input. Paper presented at the UIST 2001.
  • Maguire, M. (2001). Methods to support human-centered design. International Journal of Human-Computer Studies, 55, 587-634.
  • Mallat, N. (2007). Exploring consumer adoption of mobile payments—A qualitative study. Journal of Strategic Information Systems, 16(4), 413-432.
  • Mao, E., Srite, M., Thatcher, J. B., & Yaprak, O. (2005). A research model for mobile phone service behaviors: Empirical validation in the U.S. and Turkey. Journal of Global Information Technology Management, 8(4), 7.
  • Massey, A. P., Khatri, V., & Ramesh, V. (2005). From the web to the wireless web: Technology readiness and usability. Paper presented at the 38th Annual Hawaii International Conference on System Sciences (HICSS’05).
  • Massimi, M., & Baecker, R. M. (2008). An empirical study of seniors’ perceptions of mobile phones as memory aids. In A. Mihailidis, J. Boger, H. Kautz & L. Normie (Eds.), Technology and aging—Selected Papers from the 2007 International Conference on Technology and Aging: Vol. 21 (pp. 59-66).
  • Merisavo, M., Vesanen, J., Arponen, A., Kajalo, S., & Raulas, M. (2006). The effectiveness of targeted mobile advertising in selling mobile services: An empirical study. International Journal of Mobile Communications, 4(2), 119-127.
  • Nagata, S. F. (2003). Multitasking and interruptions during mobile web tasks. Paper presented at the Human Factors and Ergonomics Society 47th Annual Meeting.
  • Nah, F. F., Siau, K., & Sheng, H. (2005). The value of mobile applications: A utility company study. Communications of the ACM, 48(2), 85-90.
  • Nielsen, C. M., Overgaard, M., Pedersen, M. B., Stage, J., & Stenild, S. (2006). It’s worth the hassle!: The added value of evaluating the usability of mobile systems in the field. Paper presented at the 4th Nordic Conference on Human-Computer Interaction.
  • Nielsen, J. (1993). Usability engineering. New York: AP Professional.
  • Nielsen, J., & Levy, J. (1994). Measuring usability: Preference vs. performance. Communications of the ACM, 37(4), 66-75.
  • Olmsted, E. L. (2004). Usability study on the use of handheld devices to collect census data. Paper presented at the Professional Communication Conference.
  • Pagani, M. (2004). Determinants of adoption of third generation mobile multimedia services. Journal of Interactive Marketing, 18(3), 46.
  • Palen, L., & Salzman, M. (2002). Beyond the handset: Designing for wireless communications usability. ACM Transactions on Computer-Human Interaction, 9(2), 125-151.
  • Palen, L., Salzman, M., & Youngs, E. (2001). Discovery and integration of mobile communications in everyday life. Personal and Ubiquitous Computing, 5, 109-122.
  • Poupyrev, I., Maruyama, S., & Rekimoto, J. (2002). Ambient touch: Designing tactile interfaces for handheld devices. Paper presented at UIST 2002, Paris, France.
  • Pousttchi, K., & Thurnher, B. (2006). Understanding effects and determinants of mobile support tools: A usability-centered field study on IT service technicians. Paper presented at the ICMB ’06, International Conference on Mobile Business.
  • Qiu, M. K., Zhang, K., & Huang, M. (2004). An empirical study of web interface design on small display devices. Paper presented at the IEEE/WIC/ACM International Conference on Web Intelligence (WI’ 04).
  • Rodden, K., Milic-Frayling, N., Sommerer, R., & Blackwell, A. (2003). Effective web searching on mobile devices. Paper presented at the 17th Annual Conference on Human-Computer Interaction, Bath, United Kingdom.
  • Rosenthal, R. (1991). Meta-analytic procedures for social research. Newbury Park, CA: Sage Publications.
  • Ross, D. A., & Blasch, B. B. (2002). Development of a wearable Computer Orientation System. Personal and Ubiquitous Computing, 6, 49-63.
  • Roto, V., Popescu, A., Koivisto, A., & Vartiainen, E. (2006). Minimap: A web page visualization method for mobile phones. Paper presented at the ACM CHI 2006, Montreal, QC, Canada.
  • Ryan, C., & Gonsalves, A. (2005). The effect of context and application type on mobile usability: An empirical study. Paper presented at the Twenty-Eighth Australasian Conference on Computer Science.
  • Sarker, S., & Wells, J. (2003). Understanding mobile handheld device use and adoption. Communications of the ACM, 46(12), 35-40.
  • Seth, A., Momaya, K., & Gupta, H. M. (2008). Managing the customer perceived service quality for cellular mobile telephony: An empirical investigation. Vikalpa: The Journal for Decision Makers, 33(1), 19-34.
  • Shackel, B. (1991). Usability-context, framework, definition, design and evaluation. In B. Shackel & S. Richardson (Eds.), Human Factors for Informatics Usability (pp. 21-38). Cambridge: Cambridge University Press.
  • Shackel, B. (2009). Usability – Context, framework, definition, design and evaluation. Interacting with Computers, 21(5-6), 339-346. (Reprinted in memoriam: Professor Brian Shackel, 1927-2007.)
  • Shami, N. S., Leshed, G., & Klein, D. (2005). Context of use evaluation of peripheral displays. In INTERACT 2005, LNCS 3585 (pp. 579-587).
  • Sodnik, J., Dicke, C., Tomazic, S., & Billinghurst, M. (2008). A user study of auditory versus visual interfaces for use while driving. International Journal of Human-Computer Studies, 66(5), 318-332.
  • Strom, G. (2001). Mobile devices as props in daily role playing. Paper presented at the Mobile HCI 2001, Lille, France.
  • Suzuki, S., Nakao, Y., Asahi, T., Bellotti, V., Yee, N., & Fukuzumi, S. (2009). Empirical comparison of task completion time between mobile phone models with matched interaction sequences. In Human-Computer Interaction. Ambient, Ubiquitous and Intelligent Interaction (pp. 114-122). Berlin/Heidelberg: Springer.
  • Svanæs, D., Alsos, O. A., & Dahl, Y. (2010). Usability testing of mobile ICT for clinical settings: Methodological and practical challenges. International Journal of Medical Informatics, 79(4), 24-34.
  • Tarasewich, P. (2003). Designing mobile commerce applications. Communications of the ACM, 46(12), 57-60.
  • Thimbleby, H., Cairns, P., & Jones, M. (2001). Usability analysis with Markov models. ACM Transactions on Computer-Human Interaction, 8(2), 69-73.
  • Thomas, P., & Macredie, R. (2002). Introduction to the new usability. ACM Transactions on Computer-Human Interaction, 9(2), 69-73.
  • UMTS Forum. (2005). Magic mobile future 2010-2020 (Report No. 37). London, UK: UMTS Forum.
  • Venkatesh, V., & Ramesh, V. (2006). Web and wireless site usability: Understanding differences and modeling use. MIS Quarterly, 30(1), 181-206.
  • Venkatesh, V., Ramesh, V., & Massey, A. P. (2003). Understanding usability in mobile commerce. Communications of the ACM, 46(12), 53-56.
  • Wang, W., Zhong, S., Zhang, Z., Lv, S., & Wang, L. (2009). Empirical research and design of M-learning system for college English. In Learning by playing. Game-based education system design and development (pp. 524-535). Berlin/Heidelberg: Springer.
  • Waterson, S., Landay, J. A., & Matthews, T. (2002). In the lab and out in the wild: Remote web usability testing for mobile devices. Paper presented at the Conference on Human Factors in Computing Systems, Minneapolis, Minnesota, USA.
  • Wigdor, D., & Balakrishnan, R. (2003). TiltText: Using tilt for text input to mobile phones. Paper presented at the 16th Annual ACM UIST Symposium on User Interface Software and Technology.
  • Wu, J.-H., & Wang, S.-C. (2005). What drives mobile commerce? An empirical evaluation of the revised technology acceptance model. Information and Management, 42(5), 719-729.
  • Xu, D. J., Liao, S. S., & Li, Q. (2008). Combining empirical experimentation and modeling techniques: A design research approach for personalized mobile advertising applications. Decision Support Systems, 44(3), 710-724.
  • Yuan, Y., & Zheng, W. (2005). Stationary work support to mobile work support: A theoretical framework. Paper presented at the International Conference on Mobile Business (ICMB 2005), Sydney, Australia.

Appendix: Formations and Dimensions of Usability

The appendix materials are included at the end of the article.
