A Theory-guided Approach to Library Services Assessment

Xi Shi and Sarah Levy

This article examines the theoretical models applied to date in library assessment activities. A brief review of the history of library assessment practices and the evolution of their respective approaches is presented. A discussion of the theoretical concepts applied to these assessment activities in library and information science (LIS) as introduced from other fields, such as marketing and management information systems (MIS), follows. The conceptual issues and practical concerns in library assessment are then discussed. Focus is placed on the review of the research concepts of service quality and customer/user satisfaction and their applications in library assessment activities.

Over the past decade, both academics and practitioners in the field of library and information science (LIS) have increasingly recognized the significance of assessing library services. Library assessment applications have been encouraged at all scales, massive amounts of data have been collected and published, and processes and results have been reported. However, it is surprising that little comprehensive analysis of the current library assessment tools has been performed. This article describes the most popular assessment approaches seen in academic libraries. It provides a review of assessment theories introduced into LIS for library assessment applications from other fields, such as marketing and management information systems (MIS). Background information on research models and their research concepts, such as SERVQUAL, designed to measure service quality, and the disconfirmation model, used in marketing research to predict customer satisfaction/dissatisfaction (CS/D), is provided and discussed. This study presents a review of the conceptual and practical aspects of LibQUAL+™, a recent assessment tool that evolved from SERVQUAL and whose use is widely encouraged in libraries, academic and public alike. Relevant terminologies, such as user satisfaction, service quality, and customer/user needs and expectations, are discussed and clarified. Finally, recommendations for future research and practical library assessment activities are offered.

Xi Shi is the Head Librarian at SUNY Rockland Community College; e-mail: xshi@sunyrockland.edu. Sarah Levy is an Assistant Professor of Library Services and Head of Access Services at SUNY Rockland Community College; e-mail: slevy@sunyrockland.edu.

Library Information Services Assessment

In the past ten years, the library has experienced an evolution of service assessment in its daily operations, driven largely by the advancement of information technology in managing library systems, as well as a conceptual change in higher education standards. Undoubtedly, the importance of assessment is recognized not only by the institution and library administrators, but also by librarians. All understand that evaluating and improving information services to meet user demands is essential to successfully supporting the educational goals and the daily teaching and learning activities of the institution. In addition, as information technologies are developing at a rapid and erratic pace, library services have to be evaluated constantly, and changes to service orientations and service delivery mechanisms need to be made accordingly.
History and Evolution of Library Assessment Practices

The beginning of library service assessment can be traced to the irregular collection of statistics, such as daily circulation counts, reference questions answered, and books ordered and cataloged by day, month, or year. Very often, the collected statistics were considered the end results; no further analysis or follow-up measures were taken. Three features characterize this assessment approach: first, it reflects the perception of the service provider (e.g., the librarians or library staff); second, it is a description of phenomena (e.g., how many books are checked out on a given day); and third, it is a one-way application that ends at statistics collection. Although it may provide managers and librarians with valuable information (e.g., the price increases of serials over a given year), these sporadic statistics alone do not provide any meaningful guide for systematic service improvement.

As library services began to develop in tandem with emerging IS technologies, researchers in the field of LIS, as well as librarians, recognized that irregular service statistics alone are insufficient for assessing library operations. To obtain valid results, library users must be involved in the assessment process. Subsequently, a more systematic approach to data collection began to be researched and new methodologies emerged. Data collection in various forms has now been applied in LIS. Institutionally created surveys, focus group interviews, and complaint analysis are just some examples of the data collection methods that have been employed.1 In recent years, questionnaires with different purposes have been created and used as instruments, and the results have been reported. These questionnaires have been distributed to students, faculty, and other library users. After data are collected, they are often aggregated and presented in more interpretable formats, such as descriptive statistical tables and charts.

This method represents considerable progress in LIS from earlier, nonsystematic statistics collection in the following three ways:

1. It shifted from just the service provider point of view to include the user’s perspective.
2. It indicates early planning and designing of assessment activities.
3. It incorporates user involvement as part of assessment.

Now it is recognized in LIS, as in other service industries, that user perceptions of service quality, user expectations, and user satisfaction are essential elements of any service assessment activity.

Assessment Theories and Tools

Employing user evaluation of library services is now a well-accepted concept. The number of user studies increased greatly after the 1980s, resulting in an enormous quantity of data.2 However, libraries and researchers now faced a different problem: What could be done with the collected data? Even with all the potential information data may provide, empirical evidence shows that massive quantities of data alone do not provide standards to measure service quality, nor do raw data predict library user satisfaction or suggest future directions.

If the intention of assessment is to utilize the outcomes to measure organizational effectiveness, the tools used to perform such a task need to be designed carefully, in particular for library services. Nonacademic and commercial information service providers are now competing in the information marketplace.
Libraries are being challenged to maintain cutting-edge IS technologies. The strength of academic institutions has always been their reliance on research that provides findings to identify competitive advantages and suggests approaches to success. Please note that “research” implies the application of the scientific method and should contain two key components: methodology and purpose. The methodology component includes the collection and analysis of data, and the purpose component includes the formulation, revision, and rejection of hypotheses. Conclusions and recommendations are then made based on analysis of these data.3

As library service assessment processes develop and progress, both researchers and librarians have started examining current practices, searching for and experimenting with better assessment tools. One example is the growing participation in LibQUAL+™, a library assessment tool that began at Texas A&M University Libraries, later partnered with ARL, and now has hundreds of participants.4 Because LibQUAL+™ is an expansion of SERVQUAL, a marketing service quality measure now widely used in many other fields, the applicability of SERVQUAL in library assessment and the theoretical issues and practical concerns of LibQUAL+™ merit serious examination.

Research Models and the Adaptation of Assessment Tools

Librarians, as well as many researchers in LIS, frequently are not exposed to the concept of “modeling.” It would be useful to begin by understanding why assessment activities should be guided by research models before examining each model used in LIS and discussing the applicability and specifics of different conceptualizations.

Why Modeling?

As discussed previously, the library has progressed from irregular statistics collection, a piecemeal approach to evaluating services, to the study of users and user satisfaction, to systematic data collection and analysis. Practical assessment activities evidently underwent an evolution, which placed new demands on research obligations in the field of LIS. Library assessment activities should not be any different from assessment activities in other fields if the findings and interpretations of data from any given library are to be used to generalize and explain service conditions and predict service success in library management. Without rigorous design and careful testing of repeated practical activities, the data collected may only be able to display the phenomena of one given service area for one given period of time and thus be unable to offer explanations of phenomena occurring under different conditions. In addition, a solid validation of the research instrument, as well as rigorous procedures upon which the findings are based, must be in place for interpretations to be reliable.

The following section reviews models that have been introduced to LIS from other fields. Full background information on each model’s development and constructs and their definitions is discussed in comparison with practical assessment activities in library settings.

SERVQUAL

SERVQUAL was first introduced to evaluate service quality in the field of marketing in 1985.5 The pioneers who introduced SERVQUAL recognized that although quality in tangible goods had been described and measured by marketers, quality in services was largely undefined and unresearched.
Therefore, the purpose of SERVQUAL was to:

• Identify the difference(s) between tangible goods (e.g., a car or a camera) and services (e.g., the retail or banking industry) in terms of measurement of quality and services provided
• Define the measures used to operationalize the constructs in service quality research
• Examine the determinants that characterize service quality

Marketing researchers are in agreement that services differ considerably from tangible products.6 Although the quality of tangible products usually can be measured objectively by indicators such as durability, style, color, label, feel, package, and fit, as well as by the number of defects, service quality is an abstract and elusive construct.7 Another distinguishing feature of a service as opposed to a tangible product is that most services are comprised of multiple components, and each component may have its own unique outcome evaluation.8

Constructs and Their Definitions

As SERVQUAL was designed to measure service quality, the term “service quality” is a major construct in SERVQUAL research. A. Parasuraman, Leonard L. Berry, and Valarie A. Zeithaml described service quality as being characterized by three themes:

• Service quality is more difficult for the consumer to evaluate than (tangible) goods quality.
• Service quality perceptions result from a comparison of consumer expectations with actual service performance.
• Service quality evaluations are not made solely on the outcome of a service; they also involve evaluations of the process of service delivery.9

The first theme is evidenced by research findings in varied fields; examples include library services, higher education, health care, and professional consulting.10–13 The second theme defines service quality as a result of the consumer’s subjective comparison of his or her preconsumption expectations of the service with the actual experience of the service consumed. Please note that this definition coincides with the definition of customer satisfaction in the marketing literature, which is discussed in the next section of this study. The third theme distinguishes the quality of service content from the service delivery process. For example, the service content of a class offered by a college refers to the content of the lecture, its comprehensiveness, coverage, and so on. On the other hand, the quality of the service delivery process for this class may include the teaching mechanism used by the professor, the format of instruction (distance learning, classroom teaching), and so on.

Disconfirmation Theory

The most popular and widely used model for studying customer satisfaction and dissatisfaction (CS/D), disconfirmation of expectations, is derived from the field of marketing. The original concept of disconfirmation theory posits that customers evaluate the merchandise and the purchase experience against some cognitive standard held before the purchase is made, such as expectations. CS/D results from a comparison with the merchandise purchased, indicating whether it is better or worse than what was expected. Basically, the disconfirmation of expectations paradigm conceptualizes CS/D as the following process: disconfirmation is the customer’s evaluation of a product’s performance relative to his or her expectations.
When performance exceeds expectations, resulting in positive disconfirmation, customers are satisfied. When performance falls short of expectations, resulting in negative disconfirmation, customers are dissatisfied. Confirmation occurs when performance and expectations correspond, resulting in moderate satisfaction or indifference.14

Although the disconfirmation of expectations paradigm is still the most widely used model for studying CS/D, it has been criticized for its limitations. Marketers have found that expectation may not always be the best prepurchase standard for predicting the influences on the customer’s evaluation of purchases. Alternatives have been researched and findings have been reported. For example, desire as a prepurchase standard was reported to be a more powerful predictor than expectation in certain purchasing situations.15 In studying library user satisfaction, information needs may be a better prepurchase standard to apply in the disconfirmation model.16

Constructs and Their Definitions

As the basic concept of the disconfirmation model describes CS/D as an evaluative comparison process between a customer’s pre- and postpurchase experience, three major components are evident:

• Prepurchase standards, also referred to in the marketing literature as the disconfirmation standard
• Perceived performance
• Disconfirmation

Disconfirmation Standard

As discussed earlier, expectation is currently the most widely used disconfirmation standard. In behavioral science and marketing, definitions of expectations can be divided into three categories:

1. The customer’s prior experiences with similar products or services
2. The experience of other customers who serve as referent persons
3. Situationally produced expectations, such as manufacturer promotion or retailer advertisement.17

The library user’s expectations in relation to using information services are believed to be formed from prior experiences with similar information-seeking and retrieval activities and/or from the experience of other users who serve as referent persons.18 In comparison with expectation, desire and need also have been used as disconfirmation standards and are reported to have significant effects on a customer’s evaluation results.19

Perceived Performance

Perceived performance refers to the customer’s perception of the quality of the product or service after it is consumed. This does not involve any comparison process. Rather, it is a subjective assessment made by an individual of a product or service based on his or her perception of what is given and what is received.20

Disconfirmation

Disconfirmation is generally defined as the discrepancy between the actual product/service received and what is expected (or desired/needed). However, the operationalizations reported in the literature show several conceptualizations. For example, some research used a subjective assessment of the difference between the standard and the performance.21 In other research, a difference score was obtained from performance minus standard.22 Still other research uses the additive difference model, which is specified as a comparison between the level expected (or desired/needed) and the level received, weighted by an evaluation of the difference.23
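To make these operationalizations concrete, the following minimal sketch (with invented ratings and function names; it is not part of any published instrument) computes a difference score and a weighted additive difference for a single service item and maps the result onto the satisfaction outcomes described above. The subjective-assessment operationalization is a direct rating supplied by the respondent, so it is not computed here.

```python
# Illustrative sketch of two disconfirmation operationalizations,
# using hypothetical 1-7 ratings for a single service item.

def difference_score(performance: float, standard: float) -> float:
    """Disconfirmation as performance minus the prepurchase standard
    (an expectation, desire, or need rating)."""
    return performance - standard

def additive_difference(performance: float, standard: float, weight: float) -> float:
    """Additive difference model: the gap is weighted by an evaluation
    (e.g., the importance the customer attaches to the difference)."""
    return weight * (performance - standard)

def classify(disconfirmation: float) -> str:
    """Map a disconfirmation score to the outcomes described in the
    disconfirmation-of-expectations paradigm."""
    if disconfirmation > 0:
        return "positive disconfirmation -> satisfied"
    if disconfirmation < 0:
        return "negative disconfirmation -> dissatisfied"
    return "confirmation -> moderate satisfaction or indifference"

# Hypothetical ratings for one item, e.g., timeliness of document delivery:
expectation, performance, importance = 6.0, 4.5, 0.5

gap = difference_score(performance, expectation)                   # -1.5
weighted_gap = additive_difference(performance, expectation, importance)  # -0.75
print(gap, weighted_gap, classify(gap))
```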
LibQUAL+™

LibQUAL+™ for library assessment purposes was developed based on the theory of SERVQUAL, which was designed to measure service quality across service industries. Research findings from the SERVQUAL literature include studies of retail stores, banks, hospitals, Internet providers, and many other types of service industries. SERVQUAL, first introduced by Parasuraman, Zeithaml, and Berry in 1985, is one of the most heavily cited studies of its kind. It has an established research history, and its merits and limitations have been widely tested and confirmed both by repeated practical activities across service industries and by research findings from many service areas.24 A refined SERVQUAL scale later offered by Parasuraman, Zeithaml, and Berry included five dimensions (tangibles, reliability, responsiveness, assurance, and empathy), characterized by twenty-two items.25 When used in studying different industries, the wording of individual items in the measure may be adjusted in the actual instrument for specific service assessment.

LibQUAL+™, expanded from SERVQUAL and now recognized as a standard tool for measuring library services, is still a comparatively young assessment measure. Developed along the same framework as SERVQUAL, LibQUAL+™ also applied these five dimensions, with its scales worded specifically to measure library services.26

Constructs and Their Definitions

As previously discussed, the conceptualizations and dimensions of LibQUAL+™ were derived from SERVQUAL. LibQUAL+™ is introduced to the library user as an online questionnaire. The terms “expectations,” “needs,” and “library services (quality)” are introduced on the first page of the LibQUAL+™ survey form with this opening statement: “We are committed to improve your library services. Better understanding your expectations will help tailor those services to your needs.”27 Brief definitions of the following three terms are provided to assist respondents in completing the questionnaire:

• Minimum: The number that represents the minimum level of service that you would find acceptable
• Desired: The number that represents the level of service that you personally want
• Perceived: The number that represents the level of service that you believe our library currently provides

Respondents are asked to rate the stated service areas in the three contexts listed above: minimum, desired, and perceived service performance. On a continuum, the minimum and desired services appear at either end, with the area in between known as the zone of tolerance. Both minimum and desired ratings are used as expectation measures.28
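A minimal sketch may help fix these definitions. Assuming hypothetical 1–9 ratings for a single item, the code below derives the two gap scores such ratings support (an adequacy gap, perceived minus minimum, the gap discussed later in this paper, and a superiority gap, perceived minus desired) and tests whether a perception falls within the zone of tolerance. The item and all numbers are invented for illustration.

```python
# Illustrative sketch of the gap scores derivable from LibQUAL+(TM)-style
# minimum/desired/perceived ratings. All values are hypothetical.

from dataclasses import dataclass

@dataclass
class ItemRating:
    minimum: float    # lowest acceptable service level
    desired: float    # service level the user personally wants
    perceived: float  # service level the user believes is provided

    def adequacy_gap(self) -> float:
        """Perceived minus minimum; positive means at least adequate service."""
        return self.perceived - self.minimum

    def superiority_gap(self) -> float:
        """Perceived minus desired; commonly negative, since perceptions
        usually fall below desired levels."""
        return self.perceived - self.desired

    def within_zone_of_tolerance(self) -> bool:
        """True when the perceived rating lies between minimum and desired."""
        return self.minimum <= self.perceived <= self.desired

# Hypothetical response to an item such as "convenient service hours":
rating = ItemRating(minimum=5.0, desired=8.0, perceived=6.5)
print(rating.adequacy_gap())              # 1.5
print(rating.superiority_gap())           # -1.5
print(rating.within_zone_of_tolerance())  # True
```

Note that a negative superiority gap is the ordinary case here; how such scores should map onto a quality interpretation is precisely the conceptual question raised below.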
Model Comparisons in Library Applications

If the primary goal of library service assessment is to identify deficiencies in order to make improvements according to the information received from users’ evaluations, the models applied to this task need to be evaluated. A discussion of the merits and limitations of each model needs to be pursued.

Disconfirmation and SERVQUAL

The disconfirmation model, used to identify the determinants of customer satisfaction, has been applied mainly to research on product consumption. In service research, well-developed and standardized constructs are found to describe service areas across all research industries. On the other hand, SERVQUAL was designed as a tool to assess only services. Even “tangibles” in SERVQUAL refer to the physical evidence of the services, such as physical facilities, appearance of personnel and tools, or equipment used to provide the services. SERVQUAL was not designed to measure product(s) or both services and product(s). For example, Parasuraman, Berry, and Zeithaml used SERVQUAL to measure the quality of services of banks.29 The five dimensions (tangibles, reliability, responsiveness, assurance, and empathy) used in their study measured only the banks’ service components. The product component(s) of the banks were not measured. The product components of a bank may include the programs the bank offers, such as checking, savings, and/or investment accounts, and the features of its products, such as mortgage rates and variety of IRAs. Please note that in the study of Parasuraman, Berry, and Zeithaml, the scale “tangibles” included four items:

• P1. XYZ has modern-looking equipment.
• P2. XYZ’s physical facilities are visually appealing.
• P3. XYZ’s employees are neat appearing.
• P4. Materials associated with the service (such as pamphlets or statements) are visually appealing at XYZ.

These tangible dimension items are clearly not designed to measure bank products.30

The uniqueness of library service assessment is that any tool measuring only either product or service cannot completely assess the overall quality of services provided. Library service quality is a combination of the quality of the information provided by the library (e.g., comprehensiveness, appropriateness, and format) and the services offered by the library (e.g., physical facilities, helpfulness, and attitude of library staff).

Conceptual Issues of LibQUAL+™

As LibQUAL+™ is currently the most popular and widely used assessment tool in American libraries, its theories and applications in library assessment processes warrant further analysis. As previously noted, LibQUAL+™ was introduced into LIS as an expansion of SERVQUAL. Accordingly, consumer (library user) perceived quality of library services in LibQUAL+™ is the consumer’s (library user’s) judgment about his or her overall experience with the library’s services. This determination is made based on the degree and direction of the discrepancy between the consumer’s (library user’s) perceptions and expectations. Therefore, the operationalization of the model is defined as Q = P – E, with Q representing the perceived quality of the item and P and E representing the ratings on the corresponding perception and expectation statements, respectively.31
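As an illustration of this operationalization (a sketch with invented items and ratings, not the published instrument), the per-item gaps Q = P – E can be averaged within a dimension to produce a dimension-level quality score:

```python
# Illustrative SERVQUAL-style scoring: Q = P - E per item, averaged by
# dimension. Items and ratings are hypothetical.

tangibles = [
    # (perception rating P, expectation rating E) on a 1-7 scale
    (5.0, 6.0),  # P1: modern-looking equipment
    (6.0, 5.5),  # P2: visually appealing facilities
    (6.5, 6.0),  # P3: neat-appearing employees
    (4.0, 6.5),  # P4: visually appealing materials
]

item_gaps = [p - e for p, e in tangibles]            # per-item Q = P - E
dimension_quality = sum(item_gaps) / len(item_gaps)  # dimension-level mean gap

# A more positive mean gap indicates higher perceived quality on the
# dimension; a more negative mean gap indicates lower perceived quality.
print(item_gaps)          # [-1.0, 0.5, 0.5, -2.5]
print(dimension_quality)  # -0.625
```

Under this specification, the sign of the gap carries the quality interpretation, which is exactly what is at issue in the LibQUAL+™ conceptualization discussed next.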
LibQUAL+™ presented a different conceptualization, with the constructs of “minimum” and “desired” levels being used for comparison with library users’ perceptions of service quality.32 Users’ perceptions are proposed to anchor somewhere between the “minimum” and “desired” levels. Under the current conceptualization, LibQUAL+™ lacks clarification in the following regards. First, what are the positions and propositions of the “minimum” and “desired” levels in the Q = P – E equation? Second, is each of their interpretations compatible with the mathematical properties of the P – E equation under the perceived quality specification? Third, the operationalization of the current LibQUAL+™, using either the “minimum” or the “desired” level in the equation, is conceptually differentiated from the frameworks suggested in the original service quality research and also from the disconfirmation of expectations concept specified in the marketing literature. Therefore, justification of this measurement for library service quality as a research tool is needed. Until it addresses all aspects of library services sufficiently, the current LibQUAL+™ is not an adequately developed tool to measure and represent a dependable library services assessment result.

Furthermore, service quality, a key construct used in LibQUAL+™, needs clarification. As conceptualized in SERVQUAL, service quality can be measured by the equation Q = P – E. Although research applications and findings of service quality vary from project to project, among studies using SERVQUAL as a theoretical foundation the consensus has been that disconfirmed expectation is a predictor of perceived service quality: a more positive P – E score indicates higher quality and a more negative score indicates lower quality. If this is the theoretical framework on which LibQUAL+™ is developed, the current gap theory applied in LibQUAL+™ is inconsistent with the SERVQUAL concept, because a negative score between the “perceived” and “desired” service levels is common. Accordingly, a positive perception of service quality may result even when the perceived level falls below the desired level of services.

In addition, the definitions of the tested constructs, expectations and needs, are confusing. On the first page of the survey form, library users are greeted and introduced to LibQUAL+™ with “We are committed to improve your library services. Better understanding your expectations will help tailor those services to your needs.” Are expectations and needs interchangeable in LibQUAL+™? In both the disconfirmation and SERVQUAL models, the constructs of expectations and needs/desires are clearly defined, though the definitions varied considerably from study to study. One concept that most disconfirmation and SERVQUAL researchers agree on is that expectations and needs/desires both may be used as disconfirmation standards, but they are two distinct constructs. For example, a student’s need/desire to obtain a book for a class is not identical with his or her expectation of obtaining that book.

Methodological Issues

Whereas the conceptual issues need to be addressed by research design, the methodological issues concern data collection, analysis, and interpretation. The following section uses the SUNY (State University of New York) spring 2003 survey results as an example to illustrate two methodological issues of LibQUAL+™.

Sample Representation

LibQUAL+™ is purportedly designed to measure library services across a broad spectrum of libraries serving users of all types with different perspectives. However, a consistently low response rate has been found across libraries. In the SUNY system in 2003, many campuses reported response rates within a range of 0.3 to 4.9 percent. Many of these collected responses were from a predetermined sample, not the whole school population.
Although there is no rule of thumb as to what number represents a good response rate, Parasuraman, Berry, and Zeithaml reported a response rate of 21 percent in their article “Refinement and Reassessment of the SERVQUAL Scale.”33 Many disconfirmation studies conducted in controlled situations reported their findings based on response rates of around or greater than 50 percent.34 Because LibQUAL+™ researchers use 10 percent as a guideline, and assuming that a 10 percent response from the sample represents the demographic pattern of the population (age, gender, discipline, etc.), the question still remains: Can we trust data collected from less than 5 percent of a sample that may itself be drawn from less than 50 percent of the population? Furthermore, in addition to demographic representation, sample bias also may involve usage patterns of library services, such as on-site users versus distance learners, paper material readers versus Internet users, and so on. As many libraries included only Web-returned survey forms, users who preferred not to answer surveys on the Internet were excluded. If these conditions are true, can we comfortably use the results as an interpretation of our service quality representing our entire clientele? These concerns deserve further attention. Data collection procedures must be rigorously refined before reliability and validity reports can generate any meaning.

Data Analysis and Interpretation

Because LibQUAL+™ is still a young assessment concept, many partaking libraries are first-time participants. As in all complex research experimentation, LibQUAL+™ requires knowledge and understanding of experimental design, reliability, validity, statistical manipulation, and interpretation of outcomes. Librarians and administrators alike need to understand that descriptive statistics alone, including easy-to-read charts and bars, do not provide explanations of relationships, especially causal relationships, between and among tested variables and dimensions. Determinants of service quality perception cannot be identified by descriptive statistics. As a result, many libraries cannot draw theoretically supported guidelines from their LibQUAL+™ assessment activities to determine areas for improvement and to propose directions for future management.

To further illustrate this point, consider the following example. The descriptive statistics from LibQUAL+™, such as minimum, maximum, mean, and standard deviation, of “print and/or electronic journal collections I require for my work (AI-Q3)” may not explain or predict a more or less positive perception of the dimension (AI) “access to information.” Furthermore, descriptive statistics alone cannot explain why users perceive certain service items as indicated. For example, descriptive statistics for community college students show a much higher “perceived mean” on all measured items in comparison to graduate students. However, their “minimum mean” is not considerably higher than that rated by graduate students. According to LibQUAL+™ theory, these descriptive statistics imply that the service quality of graduate school libraries, defined by the “adequacy” mean (the gap score between the “perceived” and “minimum” levels), is not as good as the service quality provided by community college libraries. If this interpretation does not reflect real library practices, the research design and the data analysis need refinement.
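The comparison above can be made concrete with a short sketch. The groups, item, and ratings below are invented; the point is that the adequacy mean is purely descriptive, so a higher value for one group cannot by itself explain why the groups differ.

```python
# Illustrative computation of group-level "adequacy" means
# (perceived minus minimum), with hypothetical ratings for one item.

from statistics import mean

# Hypothetical (perceived, minimum) ratings, by user group.
responses = {
    "community college students": [(7.5, 5.0), (8.0, 5.5), (7.0, 5.0)],
    "graduate students":          [(6.0, 5.5), (5.5, 5.0), (6.5, 6.0)],
}

for group, ratings in responses.items():
    adequacy = mean(p - m for p, m in ratings)
    print(f"{group}: adequacy mean = {adequacy:.2f}")

# community college students: adequacy mean = 2.33
# graduate students: adequacy mean = 0.50
# The descriptive gap says nothing about *why* the groups differ
# (collections, expectations, sampling); that requires an explanatory model.
```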
Recommendations

A review of the service quality and marketing literature presented in this paper and discussion of the dynamics of library services assessment suggest that library service quality evaluation is considerably more complex than conceptualized in the current LibQUAL+™. Following are some recommendations for consideration in a research-guided approach.

Refine the LibQUAL+™ Conceptualization

One research obligation is to propose, and then test and confirm, the causal relationships that explain phenomena and predict future behavior, and thus improve the management of performance. Because LibQUAL+™ is designed as a tool to measure library service quality using gap theory, a research model needs to present propositions and hypothesize paths that identify the determinants of library service quality perception. When the framework is established, the data collected can be used to test whether the propositions should be accepted or rejected. Because LibQUAL+™ has been used as a standard tool in library assessment activities, repeated findings can be used to confirm and purify the LibQUAL+™ model. Research findings from disconfirmation and SERVQUAL studies have provided a solid foundation for tightening up the LibQUAL+™ model.

Clarify Constructs

In applying SERVQUAL measures to assess library service quality, the ambiguity of “gaps” in LibQUAL+™ needs to be eliminated. In reconceptualizing the “gaps,” the properties of each construct must be made clearly identifiable based on existing research in marketing and in service quality research in particular. Both expectations and needs should be considered and specified as alternative comparison standards in library service consumption situations. Whereas many marketers define expectations as what consumers believe they should and will receive, needs are what consumers want or wish to receive. Including user needs as one component of a library service quality research model is especially important because traditional library research and current library management practice often view the needs of library users as the justification for the existence of certain services, as well as determinants for the future creation and/or improvement of services.

Redefine the Scales

The literature review and the discussions presented in this paper suggest that library services include two distinct components: the information product (i.e., the content and quality of the information) and the service components (including the facilities and the computerized and human assistance that deliver the information product to its users).35 Because SERVQUAL specifies “tangibles” and “intangibles” as two major components when measuring the quality of many types of services, the measures for information services should address both the similarity and the uniqueness of an information consumption situation. Research on products bundled with services in a way similar to information consumption can be found in the current literature. For example, the product of a restaurant is its food.
However, the customer’s perception of the service quality of the restaurant is based not solely on the quality of its food but also on the accompanying services, such as the speed of the service, the décor of the restaurant, the friendliness of the staff, and so on.36 On the other hand, the uniqueness of library service measurement is that neither the information product nor its delivery services can be clearly classified as either “tangibles” or “intangibles.” The conceptualization of library service as comprising two distinct components can provide more reasonable interpretations as to why “print/electronic journal collections (AI-1),” “the printed library materials (AI-3),” and “the electronic information resources (AI-4)” do not fit well in data analysis with the two other items in that dimension (AI), “convenient service hours (AI-2)” and “timely document delivery (AI-5)”: the former three items represent the information product, and the latter two represent the services by which the information product can be attained.

Conclusion

The purpose of this paper is to provide an overview of library service assessment practices, an examination of LIS research model development, and an analysis of the measures applied in library outcome assessment activities. A review of the applications of research models for measuring service quality in other service industries has lent support to the design of measures for library service quality. LibQUAL+™, a widely recognized assessment tool in libraries, has been used as an example in this study for the analysis of existing library service quality measures. The merits and limitations of LibQUAL+™ have been investigated, and both conceptual and empirical issues are addressed. Recommendations are offered for the better development of a research-guided approach that can be used to identify refinements for more reliable measures and to steer practical assessment activities in libraries. Employing a research-guided approach allows libraries to evaluate their services systematically, identify areas for improvement effectively, and thus manage their daily operations successfully.

Notes

1. Peter Hernon, “Quality: New Directions in Research,” Journal of Academic Librarianship 28 (2002): 224–31.
2. J. M. Brittain, “Pitfalls of User Research, and Some Neglected Areas,” Social Science Information Studies 2 (1982): 139–48.
3. Xi Shi, “An Examination of User Satisfaction Formation Process” (Ph.D. dissertation, Stevens Institute of Technology, Hoboken, N.J., 2000).
4. A&M University Libraries and ARL Receive NSF Grant for Digital Libraries Assessment. Available online from http://www.arl.org/libqual/geninfo/nsdlpr.html. [Accessed 20 October 2004]; Colleen Cook, Fred Heath, and Bruce Thompson, “A New Culture of Assessment: Preliminary Report on the ARL SERVQUAL Survey,” Proceedings of the 66th IFLA Council and General Conference. Available online from http://www.ifla.org/IV/ifla66/papers/028-129e.htm. [Accessed 20 October 2004]; Danuta Nitecki, “SERVQUAL: Measuring Service Quality in Academic Libraries.” Available online from http://www.arl.org/newsltr/191/servqual.html. [Accessed 20 October 2004].
5. A. Parasuraman, Leonard L. Berry, and Valarie A. Zeithaml, “A Conceptual Model of Service Quality and Its Implications for Future Research,” Journal of Marketing 49 (1985): 41–50; ———, “SERVQUAL: A Multiple-item Scale for Measuring Consumer Perceptions of Service Quality,” Journal of Retailing 64 (1988): 12–40.
6. Ruth N. Bolton and James H. Drew, “A Multistage Model of Customers’ Assessments of Service Quality and Value,” Journal of Consumer Research 17 (Mar. 1991): 375–84; Diane Halstead, David Hartman, and Sandra L. Schmidt, “Multisource Effects on the Satisfaction Formation Process,” Journal of the Academy of Marketing Science 22 (spring 1994): 114–29; Richard Spreng and Thomas J. Page Jr., “The Impact of Confidence in Expectations on Consumer Satisfaction,” Psychology & Marketing 18 (2001): 1187–1205.
7. Richard L. Oliver and Gerald Linda, “Effect of Satisfaction and Its Antecedents on Customer Preference and Intention,” Advances in Consumer Research 8 (1981): 88–93; R. L. Oliver, “Cognitive, Affective and Attribute Bases of the Satisfaction Response,” Journal of Consumer Research 20 (Dec. 1993): 418–30.
8. Halstead, Hartman, and Schmidt, “Multisource Effects on the Satisfaction Formation Process”; Shi, “An Examination of User Satisfaction Formation Process.”
9. Parasuraman, Berry, and Zeithaml, “A Conceptual Model of Service Quality and Its Implications for Future Research.”
10. Xi Shi, Patricia J. Holahan, and M. Peter Jurkat, “Satisfaction Formation Processes in Library Users: Understanding Multisource Effects,” Journal of Academic Librarianship 30 (2004): 122–31.
11. Halstead, Hartman, and Schmidt, “Multisource Effects on the Satisfaction Formation Process.”
12. Jagdip Singh, “A Multifacet Typology of Patient Satisfaction with Hospital Stays,” Journal of Health Care Marketing 10 (1990): 8–21.
13. Richard A. Spreng, Scott B. Mackenzie, and Richard W. Olshavsky, “A Reexamination of the Determinants of Customer Satisfaction,” Journal of Marketing 60 (1996): 15–33.
14. G. A. Churchill and C. Surprenant, “An Investigation into the Determinants of Consumer Satisfaction,” Journal of Marketing Research 19 (1982): 491–504; Richard L. Oliver, “A Cognitive Model of the Antecedents and Consequences of Satisfaction Decisions,” Journal of Marketing Research 17 (Nov. 1980): 460–70; Richard A. Spreng and Richard W. Olshavsky, “A Desires Congruency Model of Consumer Satisfaction,” Journal of the Academy of Marketing Science 21 (1993): 169–77; Spreng, Mackenzie, and Olshavsky, “A Reexamination of the Determinants of Customer Satisfaction”; J. B. Barbeau, “Predictive and Normative Expectations in Consumer Satisfaction: A Utilization of Adaptation and Comparison Levels in a Unified Framework,” in Conceptual and Empirical Contributions to Consumer Satisfaction and Complaining Behavior, ed. H. K. Hunt and R. L. Day (Bloomington: Indiana University School of Business, 1985).
15. Spreng and Olshavsky, “A Desires Congruency Model of Consumer Satisfaction”; Spreng, Mackenzie, and Olshavsky, “A Reexamination of the Determinants of Customer Satisfaction”; Barbeau, “Predictive and Normative Expectations in Consumer Satisfaction.”
16. Shi, Holahan, and Jurkat, “Satisfaction Formation Processes in Library Users.”
17. Youjae Yi, “A Critical Review of Consumer Satisfaction,” in Review of Marketing, ed. V. A. Zeithaml (Chicago: American Marketing Association, 1990).
18. Ibid.; Shi, Holahan, and Jurkat, “Satisfaction Formation Processes in Library Users.”
19. Spreng and Olshavsky, “A Desires Congruency Model of Consumer Satisfaction”; Spreng, Mackenzie, and Olshavsky, “A Reexamination of the Determinants of Customer Satisfaction”; Barbeau, “Predictive and Normative Expectations in Consumer Satisfaction.”
20. Bolton and Drew, “A Multistage Model of Customers’ Assessments of Service Quality and Value”; Parasuraman, Berry, and Zeithaml, “SERVQUAL.”
21. Spreng and Olshavsky, “A Desires Congruency Model of Consumer Satisfaction”; Spreng, Mackenzie, and Olshavsky, “A Reexamination of the Determinants of Customer Satisfaction”; Barbeau, “Predictive and Normative Expectations in Consumer Satisfaction.”
22. R. A. Westbrook and M. D. Reilly, “Value–Precept Disparity: An Alternative to the Disconfirmation of Expectations Theory of Consumer Satisfaction,” in Advances in Consumer Research, ed. R. P. Bagozzi and A. M. Tybout (Ann Arbor, Mich.: Association for Consumer Research, 1983), 256–61.
23. Amos Tversky, “Intransitivity of Preferences,” Psychological Review 76 (1969): 31–48.
24. Parasuraman, Berry, and Zeithaml, “A Conceptual Model of Service Quality and Its Implications for Future Research”; ———, “SERVQUAL”; ———, “Refinement and Reassessment of the SERVQUAL Scale”; J. Joseph Cronin Jr. and Steven A. Taylor, “SERVPERF versus SERVQUAL: Reconciling Performance-based and Perceptions-minus-expectations Measurement of Service Quality,” Journal of Marketing 58 (1994): 125–31; Leyland F. Pitt, Richard T. Watson, and C. Bruce Kavan, “Service Quality: A Measure of Information Systems Effectiveness,” MIS Quarterly 19 (June 1995): 173–87; Anol Bhattacherjee, “Understanding Information Systems Continuance: An Expectation–Confirmation Model,” MIS Quarterly 25 (2001): 351–70.
25. Parasuraman, Berry, and Zeithaml, “Refinement and Reassessment of the SERVQUAL Scale.”
26. Colleen Cook, Vicki Coleman, and Fred Heath, “SERVQUAL: A Client-based Approach to Developing Performance Indicators” (3rd Northumbria International Conference on Performance Measurement in Libraries and Information Services, 27–31 August 1999); Nitecki, “SERVQUAL.”
27. LibQUAL+ General FAQ. Available online from http://www.arl.org/libqual/geninfo/faqgen.html. [Accessed 10 March 2004.]
28. Hernon, “Quality.”
29. Parasuraman, Berry, and Zeithaml, “SERVQUAL.”
30. ———, “Refinement and Reassessment of the SERVQUAL Scale.”
31. Spreng and Olshavsky, “A Desires Congruency Model of Consumer Satisfaction”; Spreng, Mackenzie, and Olshavsky, “A Reexamination of the Determinants of Customer Satisfaction”; Anjana Susarla, Anitesh Barua, and Andrew B. Whinston, “Understanding the Service Component of Application Service Provision: An Empirical Analysis of Satisfaction with ASP Services,” MIS Quarterly 27 (2003): 91–123; L. F. Pitt, R. T. Watson, and C. Kavan, “Service Quality: A Measure of Information Systems Effectiveness,” MIS Quarterly 19 (1995): 173–87; Hernon, “Quality.”
32. Cook, Coleman, and Heath, “SERVQUAL”; Shi, Holahan, and Jurkat, “Satisfaction Formation Processes in Library Users.”
33. Parasuraman, Berry, and Zeithaml, “Refinement and Reassessment of the SERVQUAL Scale.”
34. Ernest R. Cadotte, Robert B. Woodruff, and Roger L. Jenkins, “Expectations and Norms in Models of Consumer Satisfaction,” Journal of Marketing Research 24 (Aug. 1987): 305–14.
35. Shi, Holahan, and Jurkat, “Satisfaction Formation Processes in Library Users,” 122–31.
36. John E. Swan and I. Fredrick Trawick, “Satisfaction Related to Predictive vs. Desired Expectations and Complaining Behavior,” in Refining Concepts and Measures of Consumer Satisfaction and Complaining Behavior, ed. H. Keith Hunt and Ralph L. Day (Bloomington, Ind.: Indiana University, 1980), 7–12.