Measuring Perceptual (In)Congruence between Information Service Providers and Users

Crystal Boyce*

* Crystal Boyce is Assistant Professor in the Ames Library at Illinois Wesleyan University; e-mail: cboyce@iwu.edu. ©2017 Crystal Boyce, Attribution-NonCommercial (http://creativecommons.org/licenses/by-nc/4.0/) CC BY-NC. doi:10.5860/crl.78.3.359

Library quality is no longer evaluated solely on the value of its collections, as user perceptions of service quality play an increasingly important role in defining overall library value. This paper presents a retooling of the LibQUAL+ survey instrument, blending the gap measurement model with perceptual congruence model studies from information systems management research. The new survey instrument redefines service desk assessment by taking into consideration the perspectives of both service users and service providers, to help service providers gain a more robust sense of service quality.

Introduction

For decades, libraries have been studying how people use and are satisfied with their use of library facilities, collections, tools, and personnel. Research methodologies are varied, including user satisfaction surveys, focus group meetings, unobtrusive evaluations,1 and ethnographic studies.2 As no one methodology can fulfill all of a library's assessment needs, employing a variety of assessment tools is essential to gaining the fullest understanding of user needs, expectations, and perceptions, especially given the interconnectedness of library services, collections, and spaces.

As a primary service point, the reference desk has been the focus of evaluation studies since the 1970s.3 Several early service desk quality studies suggest users are satisfied with their experiences as long as the experience itself is pleasant,4 yet these same studies reflect on service provider concerns that correct and full answers are not provided during reference transactions. When these studies are done at institutions where mixed staffing models are used at reference desks, the concerns about answer accuracy are even more pronounced.5 The question is further complicated when one considers current trends in reference librarianship: fewer and fewer queries are being put to the desk, but those queries are increasingly complicated.6 Further, more and more libraries are experimenting with combined service desks, including collaborations with information technology desks,7 as a result of internal and external pressures such as increased demands on librarian time, decreased budgets, and a need to diversify services to meet the changing needs of users.

Between fall 2012 and spring 2015, the Help@Ames Desk at Illinois Wesleyan University's Ames Library was the first stop for information, research, and technology support for IWU faculty, staff, and students. Illinois Wesleyan University is a small, private, liberal arts university in central Illinois. As a teaching-focused institution, its student to faculty ratio is relatively small, at 11:1. The Help@Ames Desk, located on the entry level of the library, was staffed entirely by student assistants who were given extensive training on both library and technology support. A representative from the library (the author) and a representative from Information Technology Services (ITS) cosupervised the roughly 30 student employees staffing the desk.
Library faculty and IT staff were available during business hours, with evening and weekend transactions either handled exclusively by students or queued until the next business day. As with any service, there were numerous stakeholder groups, divided here between service providers and service users. In our case, service providers included Help@Ames student employees (n = 30), library faculty (n = 8), and ITS staff (n = 15). Service users included anyone who contacted the desk for help, the majority of whom were IWU faculty (n = 185), staff, and students (n = 1,900). Parents, community members, and alumni were also considered service users.

Library and ITS administration often fielded questions about how effectively inquiries at the Help@Ames Desk were handled by student employees. Concerns often centered on whether student employees handled queries expeditiously and whether questions were referred appropriately. An initial review of desk statistics in fall 2014 indicated that, as expected, there were interactions at the desk that could have benefited from the expertise of library faculty or ITS staff. Additionally, a sense of dissatisfaction had been expressed by campus office staff regarding ITS help (questions taking too long to be answered, not getting help from experts fast enough, and the like). Finally, the supervisors of the Help@Ames Desk often received feedback from other service providers regarding the strengths and weaknesses of individual student employees or of the student employees as a whole. The supervisors wondered whether their own expectations for student employees differed from the expectations of other service providers. However, without data by which to judge these various anecdotes, there was no way of knowing whether views were representative of entire populations or simply the views of outspoken individuals.

This project sought to establish a baseline for understanding user satisfaction with Help@Ames Desk services. The primary research questions were:

1. Do service providers have different expectations and perceptions of the Help@Ames Desk from service users?
2. Do IWU faculty, staff, and students expect a different level of services from what they perceive to be delivered by the Help@Ames Desk? In other words, is there a gap between service user expectations and perceptions of services provided by the Help@Ames Desk?
3. Do Help@Ames student employees, librarians, and ITS staff expect a different level of services from what they perceive to be delivered? Put another way, is there a gap between service provider expectations and perceptions of services provided by the Help@Ames Desk?

While this project had immediate value in the management of the Help@Ames Desk, the results will also be of value to the professional community as more libraries experiment with service models, including mergers with Information Technology and combined reference and circulation desks. In an environment of shrinking budgets, staffing concerns, and changing student bodies, academic libraries must be prepared to meet the changing needs of users while continuing to provide research support services; how we do that is based heavily on local culture and resources. This study offers one methodology by which to assess user satisfaction with library services.
Literature Review

When considering the evaluation of mixed service desks, an ideal approach considers how those services have been evaluated individually and then combines those assessment methods cohesively. EDUCAUSE is a nonprofit association whose mission is to advance higher education through the use of information technology.8 The EDUCAUSE Center for Analysis and Research (ECAR) has conducted studies of higher education stakeholder groups and their use of information technologies,9 including discussions of metrics for assessing user satisfaction, for the past decade. Those studies mirror reference desk evaluations in terms of user values and satisfaction ratings.10 Interestingly, corporate information technology literature similarly suggests that the experience a user has is of greater importance to the user than the answer received.11

LibQUAL+ and TechQual+ were developed as assessment tools to evaluate the entirety of library and IT service offerings based on gap measurements.12 These gap measures are a "function of differences in expectation and performance reported by stakeholders"13 where service quality is measured across several dimensions. "Service quality for each dimension is captured by a gap score (G), indicating perceived quality for a given item," where the gap score (G) is the difference between the perceived level of performance (P) and the expected level of performance (E); that is, G = P – E.14 Both are suites of services that libraries and IT providers in higher education "use to solicit, track, understand, and act upon users' opinions of service quality."15 Various studies confirmed the validity and reliability of LibQUAL+ as an assessment tool,16 and, taken with other assessment methods, libraries are often able to gather enough information about user needs to make informed decisions. However, a gap in the literature exists in that most assessment tools used today are one-sided. In other words, many studies focus on how users evaluate services, but they do not take into consideration how service providers evaluate those same services.

Borrowed from social psychology, the concept of congruence between individuals or groups "refers to the fit, match, agreement, or similarity between two conceptually distinct constructs"17 and has been used in behavior research.18 A high degree of perceptual congruence implies a significant degree of matching between stakeholders, while a low degree of perceptual congruence indicates large differences in expectations and/or perceptions. Corporate information technology researchers have combined incongruence studies with the principles of gap measurements by measuring the expectations and perceptions of both service users and service providers.19 Having differing expectations of service quality between users and providers can be quite costly, as "disagreement on…service quality among…professionals and users has been found to be tied to lower user satisfaction"20 with the service in question. The following study describes a survey methodology that blends the gap measurement models of LibQUAL+ and TechQual+ and continues to apply the concept of congruence studies to service provision by measuring the expectations and perceptions of both service providers and service users.

Methodology

To measure service provider and service user satisfaction with services provided at the Help@Ames Desk, two complementary surveys were designed using Qualtrics.
One survey was written for and distributed to service users (IWU faculty, staff, and students), while the other was written for and distributed to service providers (Help@Ames student employees, library faculty, and ITS staff). Survey questions were designed by comparing the core survey instruments for LibQUAL+ and TechQual+ and identifying those most relevant to an assessment of an individual service desk with both library and ITS responsibilities. Both the LibQUAL+ and TechQual+ surveys had previously been administered at IWU, in 2004 and 2009 respectively, from which a list of each survey's full question complement was assembled (see appendices A and B). Overlapping questions were combined, while questions not specific to the scope of the Help@Ames Desk were discarded. As an example, most "Library as Place" core questions from LibQUAL+ would have been inappropriate for a survey focused on service from a single desk and were therefore not included. From TechQual+, responses related to "Connectivity and Access" would have been outside the purview of Help@Ames supervisors, so they were also not included. In this way, the survey instrument was customized based on the services offered by the Help@Ames Desk. This study drew heavily from the "Affect of Service" category in LibQUAL+ and the "Support and Training" category of TechQual+.

The questions on the two surveys addressed the same concepts, with appropriate changes in language to address the different perspectives of the two stakeholder groups. For example, a question to users was phrased, "When it comes to Help@Ames student employees who instill confidence in me…," while for providers it was worded, "When it comes to Help@Ames student employees who instill confidence in users…" See appendix C for a full list of survey questions.

In a traditional LibQUAL+ survey, participants are presented with a series of statements; participants rate each statement across three factors on a Likert-type scale from 1 to 9, with 1 being the lowest rating and 9 being the highest. Those factors are a participant's "minimum level of service that is deemed acceptable, the perceived level of service seen as being offered, [and] the desired level of service."21

This study, instead, asked participants to rate statements across two factors (Expected/Desired Service Level and Perceived Service Performance) on a Likert-type scale from 1 to 7, with 1 being the lowest and 7 being the highest. The Expected/Desired Service Level (E) represents the level of service participants think should be provided: lower expectations would typically fall toward the lower end of the rating scale, and higher expectations toward the higher end. The Perceived Service Performance (P) represents the level of service that participants believe is typically provided at the Help@Ames Desk. This rating is typically considered within the context of the expected/desired rating: if participants feel that perceived performance falls below their expectations, the performance rating should be at or below the expected service level rating; if perceived performance meets or exceeds expectations, it should be at or above that rating. The survey for service providers used the same scale as the survey for service users.
Service users were presented with 17 statements, while service providers were presented with 14 statements. In addition to the core statements, each survey participant was asked to offer positive, critical, and general feedback regarding their experiences with or perceptions of Help@Ames student employees. Finally, demographic information was collected for the purposes of assessing the data set and identifying trends across variables. Responses for service users' affiliation with the university (see table 1) indicate an approximately 11:1 student to faculty response ratio, which was deemed appropriate since Illinois Wesleyan claims to maintain an 11:1 student to faculty ratio. Students and faculty had overall response rates of 15 percent and 14 percent respectively. Service users were asked to identify how often they contacted the Help@Ames Desk (see table 2) and for what purposes they typically contacted the desk (see table 3). The reasons for contacting the desk lined up with expectations based on desk statistics, giving further value to the data set. Service providers were asked to identify whether they were Help@Ames student employees, library faculty, or IT staff (n = 25, 5, and 7 respectively). For each survey, participants were directed to a secondary Google form, wherein they entered their name and e-mail should they wish to be considered for the survey incentive.

TABLE 1
Service User (IWU Faculty, Staff, and Students) Affiliation with the University (Responses)
Student, First Year: 76
Student, Second Year: 83
Student, Third Year: 69
Student, Fourth Year: 60
Student, Other: 1
Total Students: 289
Adjunct Faculty, Instructor, Lecturer: 5
Assistant Professor: 7
Associate Professor: 6
Full Professor: 8
Total Faculty: 26
Staff, Academic Affairs: 3
Staff, Academic Departments: 1
Staff, Admissions & Financial Aid: 4
Staff, Alumni Services & Advancement: 2
Staff, Business & Finance, HR: 2
Staff, Health & Counseling Services: 2
Staff, Office of Communication: 0
Staff, Physical Plant: 1
Staff, Student Affairs: 6
Staff, Security: 1
Staff, Other: 6
Total Staff: 28
Total Participants: 343

TABLE 2
Service Users' Indication of Frequency of Use of the Help@Ames Desk
How often do you contact/use the Help@Ames Desk? (Responses, Percentage)
I rarely or never use the Help@Ames Desk: 79 (23%)
I use the Help@Ames Desk once or twice a semester: 190 (55%)
I use the Help@Ames Desk about once per month: 65 (19%)
I use the Help@Ames Desk about once per week: 9 (3%)
I use the Help@Ames Desk about once per day: 0 (0%)
Total: 343 (100%)

TABLE 3
Service Users' Indication of Help Sought from the Help@Ames Desk
When contacting the Help@Ames Desk, what is the most common type of help that you are seeking? (Responses, Percentage)
Computer/Software Issues: 158 (47%)
Help with Copiers, Printers, Scanning, Faxing: 120 (36%)
Help Finding Library Materials: 82 (24%)
Problems with Wireless or Internet Connection: 89 (23%)
Pickup Supplies (Markers, Stapler, etc.): 69 (21%)
Reserving a Group Study or Project Room: 50 (15%)
Logging a Call/Ticket for 3900: 47 (14%)
Getting in Touch with a Librarian: 48 (14%)
Problems Logging into Moodle or MyIWU: 41 (12%)
Looking for Directions: 29 (9%)
Problems with Email: 30 (9%)
Getting Started with Research: 31 (9%)
Getting Advanced Research Help: 19 (6%)
Help with Campus or Personal Telephones: 14 (4%)
Total: 336 (100%)

While both LibQUAL+ and TechQual+ recruit participants via random sampling, this study was distributed via an all-campus listserv and used a convenience sample. The subject line of the e-mail was "$10 for 10 minutes—Ames Library Survey"; the first 75 participants were eligible to receive a $10 gift card to Target. Service providers were e-mailed the link to their survey directly, with instructions to ignore the all-campus message. As with service users, an incentive was offered: service providers completing the survey were given a $20 gift certificate to Target. Service providers were e-mailed on a Monday morning; the e-mail to campus (service users) was sent on a Tuesday afternoon. Follow-up e-mails were sent to service providers twice, at weekly intervals.

Survey responses were collected in Qualtrics, scrubbed in Excel, coded, and analyzed using SPSS.
The scrubbing process included deleting unnecessary variables from the data set (timestamps, IP addresses, start date, end date, and so on) and deleting incomplete responses. Of 419 surveys started by service users, 74 were incomplete and were deleted from the data set. Two additional submissions were deleted, as no data were recorded. Of 49 surveys started by service providers, 7 were incomplete and were deleted from the data set. Three additional submissions were deleted, as no data were recorded.

Core statements were coded into variables (see appendix C), with 12 variables overlapping between service providers and service users. As an example, the question pertaining to whether Help@Ames student employees instilled confidence in users was coded SWConfidence. Each variable, such as SWConfidence, had three conditions, one each for the expectation score, the perception score, and the gap score (the difference between the perception and the expectation score): SWConfidenceE, SWConfidenceP, and SWConfidenceG (see table 4). The E score refers to the Expected/Desired Service Level, and the P score refers to the Perceived Service Performance. The G score refers to the gap between the P and E scores and was calculated by subtracting the E score from the P score. In other words, G = P – E, where a negative G score indicates service users expect more from the Help@Ames student employees on a given measure than they perceive they are receiving.

TABLE 4
Coding of Variables across Three Conditions
Survey statement: "When it comes to Help@Ames student employees who instill confidence in…" (variable SWConfidence)
Expectation Score: SWConfidenceE
Perceived Performance Level: SWConfidenceP
Gap Score (P – E): SWConfidenceG
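The coding scheme above is straightforward to express in code. The study itself scrubbed and coded responses in Excel and analyzed them in SPSS; the sketch below, written in Python with pandas, is only an illustration of how the E, P, and G conditions relate under that scheme. The file name, data frame layout, and variable list are hypothetical and not part of the original workflow.

```python
# Minimal sketch (not the study's actual Excel/SPSS workflow): derive a gap
# (G) condition for each coded variable as G = P - E. Assumes a cleaned file
# in which each row is one respondent and each coded variable already has an
# expectation (...E) and a perception (...P) column, per table 4.
import pandas as pd

# Hypothetical file and column names, following the coding scheme in table 4.
responses = pd.read_csv("help_at_ames_user_responses.csv")

variables = ["SWConfidence", "SWReady", "SWUnderstand", "SWWilling",
             "SWKnSkills", "SWHandle", "Desk", "RefToLib"]

for var in variables:
    # A negative gap means expectations exceed perceived performance.
    responses[f"{var}G"] = responses[f"{var}P"] - responses[f"{var}E"]

# Mean E, P, and G scores per variable, analogous to the descriptive
# statistics reported in appendix D.
summary = pd.DataFrame({
    "E_mean": {v: responses[f"{v}E"].mean() for v in variables},
    "P_mean": {v: responses[f"{v}P"].mean() for v in variables},
    "G_mean": {v: responses[f"{v}G"].mean() for v in variables},
})
print(summary.round(2))
```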
Results

Research Question 1: Do service providers have different expectations and perceptions of the Help@Ames Desk from service users?

Independent samples t-tests were run in SPSS to compare service user expectation, perception, and gap measures to service provider expectation, perception, and gap measures for each overlapping variable. Twelve overlapping variables meant there were 36 conditions; 11 of the 36 conditions showed a statistically significant, or in two cases marginally significant, difference in responses (see table 5 for these results and appendix D for a complete list of descriptive statistics). Six of these conditions were Expectation (E) or Perception (P) conditions:

• SWConfidenceP: Help@Ames student employees who instill confidence in me
• SWReadyP: Help@Ames student employees who are ready to respond to my questions
• SWWillingE: Help@Ames student employees who are willing to help me
• SWKnSkillsE: Help@Ames student employees who have the knowledge and skills to answer my questions
• SWKnSkillsP: Help@Ames student employees who have the knowledge and skills to answer my questions
• RefToLibP: When it comes to the ease and speed with which I am connected to a librarian

Five conditions were Gap (G) conditions:

• SWConfidenceG: Help@Ames student employees who instill confidence in me
• SWReadyG: Help@Ames student employees who are ready to respond to my questions
• SWUnderstandG: Help@Ames student employees who understand my needs
• SWHandleG: Help@Ames student employees who dependably handle the problems I bring them
• DeskG: When it comes to a comfortable and inviting desk

When asked to consider whether there was a perception that Help@Ames student employees instilled confidence in users (SWConfidence), users had a mean perception of 5.21; providers had a mean perception of 4.72. The SWConfidence gap of service users was –0.48, indicating perceptions of performance fall below expectations of service. The SWConfidence gap of service providers was –1.15. When asked to consider the readiness of student employees to provide assistance (SWReady), users had a mean perception of 5.47, while providers had a mean perception of 5.00. The SWReady gap of service users was –0.36; the SWReady gap of service providers was –1.36, indicating a considerable difference between providers and users. When asked whether Help@Ames student employees understood user needs, service users had a SWUnderstand gap of –0.45; it was –1.00 for service providers. When asked to consider how willing student employees were to help (SWWilling), service users had a mean expectation of 5.97, while providers had a mean expectation of 6.33. This was the only case among the statistically different conditions where providers rated a variable higher than users. When asked to consider the knowledge and skills of student employees (SWKnSkills), service users had a mean expectation of 5.56 and a mean perception of 5.12. Service providers had a mean expectation of 5.05 and a mean perception of 4.63. When asked whether student employees dependably handled problems, the SWHandle gap of service users was –0.50, while the gap of service providers was –1.15. Reflecting on the atmosphere of the desk (Desk), the gap of service users was –0.27, with a gap of –0.74 for service providers. Finally, when asked to reflect on the speed and ease of being connected to a librarian (RefToLib), service users had a mean perception of 5.68, while service providers had a mean perception of 5.15. See figures 1 and 2 for a comparison of service provider and user expectations and perceptions, respectively.

TABLE 5
Comparing E, P, and G Scores between Service Providers and Service Users Where Differences Are Statistically Significant (P < 0.05)
SWConfidence, P: User n = 330, mean 5.21; Provider n = 36, mean 4.72; p = .046
SWConfidence, G: User n = 343, mean –0.48; Provider n = 39, mean –1.15; p = .053*
SWReady, P: User n = 328, mean 5.47; Provider n = 35, mean 5.00; p = .044
SWReady, G: User n = 343, mean –0.36; Provider n = 39, mean –1.36; p = .005
SWUnderstand, G: User n = 343, mean –0.45; Provider n = 39, mean –1.00; p = .025
SWWilling, E: User n = 339, mean 5.97; Provider n = 39, mean 6.33; p = .057*
SWKnSkills, E: User n = 333, mean 5.56; Provider n = 39, mean 5.05; p = .018
SWKnSkills, P: User n = 326, mean 5.12; Provider n = 35, mean 4.63; p = .046
SWHandle, G: User n = 343, mean –0.50; Provider n = 39, mean –1.15; p = .019
Desk, G: User n = 343, mean –0.27; Provider n = 39, mean –0.74; p = .029
RefToLib, P: User n = 282, mean 5.68; Provider n = 33, mean 5.15; p = .028
*Difference between service providers and service users is approaching significance.
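The between-group comparisons reported in table 5 were produced with independent samples t-tests in SPSS. For readers who prefer a scripted illustration, a minimal sketch follows; the data files, column names, and the pooled-variance setting are assumptions for illustration only, not a record of the study's SPSS options.

```python
# Illustrative sketch of the independent samples t-tests described above,
# comparing service users with service providers on each overlapping
# condition. The study ran these tests in SPSS; this is not the original code.
import pandas as pd
from scipy import stats

# Hypothetical coded data sets, one row per respondent, one column per
# condition (e.g., SWConfidenceP), following the coding in appendix C.
users = pd.read_csv("user_responses_coded.csv")
providers = pd.read_csv("provider_responses_coded.csv")

conditions = ["SWConfidenceP", "SWConfidenceG", "SWReadyP", "SWReadyG",
              "SWUnderstandG", "SWWillingE", "SWKnSkillsE", "SWKnSkillsP",
              "SWHandleG", "DeskG", "RefToLibP"]

rows = []
for cond in conditions:
    u = users[cond].dropna()
    p = providers[cond].dropna()
    # Two-sided test; equal_var=True gives the standard pooled-variance t-test.
    t_stat, p_value = stats.ttest_ind(u, p, equal_var=True)
    rows.append({"condition": cond, "user_mean": round(u.mean(), 2),
                 "provider_mean": round(p.mean(), 2),
                 "t": round(t_stat, 2), "p": round(p_value, 3)})

print(pd.DataFrame(rows))
```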
Research Question 2: Is there a gap between service user expectations and perceptions of service provided by the Help@Ames student employees?

Descriptive statistics were calculated using SPSS to determine if there was a gap in service quality as reported by service users. In addition to the 12 variables that were also asked of service providers, users were asked to reflect on communications from the Help@Ames Desk (Communication), self-help and training materials for library and technology resources made available online (SelfHelpLib, SelfHelpTech), and the availability of online help with library and technology resources (SWOnLibRes, SWOnTechRes). The distinction between the self-help and online-help variables is that SelfHelpLib measured asynchronous assistance (tutorials, online explanations), while SWOnLibRes measured synchronous assistance (chat). Chat, as a communication method with users, was measured specifically because anecdotal evidence suggested users avoid chat due to low satisfaction with interactions.

FIGURE 1. Summary of Service Provider and Service User Expectations across Core Questions (SWWillingE and SWKnSkillsE Were Statistically Different)

The above methodology was able to determine across which core statements there were gaps in expectations and perceptions of service from the service user perspective. For all variables, service users perceived lower service quality than they expected to receive. A gap score greater than +0.25 or less than –0.25 is considered significant by LibQUAL+.22 The gap between perception and expectation was below –0.25 for every variable except SWWilling (–0.24), indicating a significant difference between expectations and perceptions for service users. The largest gap was related to the speed and ease with which a user was connected to an ITS staff member (RefToIT), where the gap was –0.94.

Research Question 3: Is there a gap between service provider expectations and perceptions of service provided by the Help@Ames student employees?

Descriptive statistics were calculated using SPSS to determine if there was a gap in service quality as reported by service providers. Service providers were asked to reflect on the speed and ease with which users were connected to ITS staff, but it was coded as a separate variable (SpeedRefIT). While service users were asked to reflect on the ease and speed with which they were connected to a librarian (RefToLib) and to ITS staff (RefToIT), those same variables, when asked of providers, related to a student employee's ability to recognize a question that should be referred to an expert. Additionally, providers were asked to reflect on online help from student employees concurrently with self-help resources for both library and technology issues in a single variable (SHLibTech).

FIGURE 2. Summary of Service Provider and Service User Perceptions across Core Questions (SWConfidenceP, SWReadyP, SWKnSkillsP, and RefToLibP Were Statistically Different)

FIGURE 3. Expectations Compared to Perceptions of Service Users across Core Variables

FIGURE 4. Expectations Compared to Perceptions of Service Providers across Core Variables

Similar to service users, providers perceived lower service quality than they expected to be offered. However, the gap measurements for service providers were often twice those of service users.
In other words, while service users perceive they are receiving service of a lower quality than expected, there is even more of a discrepancy between service quality expectations and perceptions among service providers. The only variable on which users had a larger service quality gap than providers was the ease and speed with which users were connected to ITS staff (RefToIT). However, the wording of the question did not allow for a direct comparison between providers and users, as service providers reflected on the speed and ease with which users are connected to an ITS staff member through the SpeedRefIT variable. Figure 4 reflects the appropriate comparison.

Discussion

By building on the strengths of LibQUAL+ and shifting the focus to a single service point, this study sought to combine research methodologies from academic library, higher education information technology, and corporate information technology literature to determine if service users and providers of a joint reference and information technology support desk in a small academic library expected and/or perceived service quality differently. Managing and being aware of discrepancies in expectations between service users and providers is a critical step in assessing library services. If service providers had significantly different expectations or perceptions from service users (in other words, if the two groups were perceptually incongruent), it would be difficult to design and deliver services satisfactorily. Put another way, we have to know what our service users value and how they define satisfactory service in order to deliver those services.

Research Question 1

Addressing the first research question, statistical analyses in this study demonstrated that service users and service providers have approximately the same expectations and perceptions of the Help@Ames Desk, since only six of the 24 expectation and perception conditions had statistically different measures.

When comparing service users and providers, gap scores and expectation/perception scores must be considered separately. For example, when considering the SWConfidence mean gap score of users (–0.48) and the mean gap score of providers (–1.15), we can only understand that there was a much larger gap between the expected level of service quality and the perceived level of performance for service providers. What this does not indicate is how high the expectations were or how low the perceived performance was in the first place. Gap measures are especially meaningful when considering trends within a population, such as gap trends for users or gap trends for service providers. The significance, or lack thereof, of differences between the expectations/perceptions of users and providers gives a much better picture of the two groups' congruence. That SWConfidence expectation (E) measures were not statistically different between users and providers suggests perceptual congruence. That the mean perceptions of service performance related to confidence (SWConfidenceP) were statistically different (users = 5.21, providers = 4.72) indicates that users feel better about this measure of service quality than do service providers. With 12 overlapping variables, there were 24 possible Expectation and Perception conditions; that only 6 of the 24 had statistically different mean ratings may suggest that service users and service providers are approaching overall perceptual congruence. Further, in five of those six conditions, service providers ranked services more harshly than did service users.
To put it another way, service users were more satisfied than service providers with services from Help@Ames student employees along the dimensions of confidence (SWConfidenceP), readiness to help (SWReadyP), the knowledge and skills necessary to perform their jobs (SWKnSkillsP), and referrals to library faculty (RefToLibP). Although the comparisons between service users and providers across other variables were not significant, there was a trend where service providers had higher expectations and lower perceptions of service quality than did service users. This trend was not surprising, as we are often our own worst critics.

In seeking to establish a baseline for understanding user satisfaction with Help@Ames Desk services and student employees, comparing the expectations and perceptions of service users and providers across several variables suggests that users and providers are evaluating services similarly. Moving forward, these results will provide a frame of reference through which library and ITS administrators may consider service quality concerns and complaints, since we can be reasonably assured that providers are operating with service quality values comparable to those of users.

Research Questions 2 and 3

Of equal importance, the study methodology allowed for comparison within user and provider groups. In the case of both users and providers, service quality was perceived at a lower level than was expected. Analyses were not conducted to determine if a P condition score was statistically different from the E condition score, as these analyses were not done for LibQUAL+ or TechQual+. While it is disappointing that perceptions were always lower than expectations, service users' gaps between expectations and perceptions of service outcomes were consistent, whereas service providers' gaps were erratic. Figure 4 suggests service providers may place greater importance on service outcomes like readiness or confidence, but the data do not offer explanations of why. Recent literature suggests a methodology for conducting analyses of the qualitative questions included in LibQUAL+ surveys,23 which this study collected through the final three feedback questions on each survey. An analysis of those questions may lead to a better understanding of why users and providers rated service outcomes differently.

FIGURE 5. Comparison of Service Quality Gaps between Service Users and Service Providers

The results of this study will be used to help service providers understand how their expectations and perceptions of service quality compare to those of service users. If service providers can come to understand that service users are, for the most part, satisfied with Help@Ames Desk services, perhaps the discourse can shift away from one where student employee mistakes are highlighted and toward one where greater collaboration between service providers is emphasized. Further, this study offers an additional assessment tool by which libraries can evaluate service points. While it does not provide a means of assessing services holistically, when used as one tool in a suite of assessment measures it may help libraries come to understand a different side of user experiences, by learning to look critically at the assumptions we bring to the service desk and how those assumptions shape service delivery.
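Figure 5 compares user and provider gap scores variable by variable, and the discussion above applies the ±0.25 criterion that LibQUAL+ treats as a meaningful gap. As a hedged illustration of how such a comparison can be tabulated, the sketch below uses three gap means taken from appendix D; the data frame and column names are otherwise hypothetical.

```python
# Sketch: flag variables whose mean gap exceeds the +/-0.25 threshold cited
# from LibQUAL+, and measure how much wider the provider gap is than the user
# gap (the comparison plotted in figure 5). Illustrative values only; the
# three gap means shown are taken from appendix D.
import pandas as pd

gaps = pd.DataFrame({
    "user_gap":     {"SWConfidence": -0.48, "SWReady": -0.36, "SWWilling": -0.24},
    "provider_gap": {"SWConfidence": -1.15, "SWReady": -1.36, "SWWilling": -0.72},
})

THRESHOLD = 0.25  # LibQUAL+ criterion for a meaningful gap

gaps["user_gap_meaningful"] = gaps["user_gap"].abs() > THRESHOLD
gaps["provider_gap_meaningful"] = gaps["provider_gap"].abs() > THRESHOLD
# Positive values indicate the provider gap is wider (more negative) than the
# user gap, the pattern observed for most variables in this study.
gaps["provider_gap_excess"] = gaps["user_gap"] - gaps["provider_gap"]

print(gaps)
```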
Conclusion and Further Research As libraries experiment with public service desk design and staffing, new methods of assessing the services provided by those desks are necessary. While a merged service desk design will not be feasible at every institution, the survey tool described here is flexible enough to be adapted for local needs. The assessment of the Help@Ames student employees based on the expectations and perceptions of both service providers and service users in this research can be considered a success and one step toward meet- ing the research and information technology support needs of the Illinois Wesleyan University community. A previous study of both the Help@Ames and Circulation students revealed that IWU faculty, staff, and students are generally satisfied or very satisfied with services offered at each desk.24 Together with this study, student employee supervisors can identify areas in which to improve training, as well as communicate with other service providers what service users expect from, and how they perceive the performance of, student employees at the Help@Ames Desk. Three avenues of further research are possible. An ancillary data set can be ex- tracted wherein service outcomes may also be analyzed when considering university affiliation. In other words, service providers could be broken into student providers, faculty providers, and staff providers. Service users could be broken into student users, faculty users, and staff users. It might be possible to determine if there are significant differences in responses between Help@Ames student employees and library faculty and ITS staff, thus lending depth to the analysis. Moreover, data were also collected to divide students and faculty into distinct subcategories, like first-, second-, third-, and fourth-year students. Additional research might compare first-year student users to fourth-year student users to see if expectations and perceptions change over the course of time. Further, while this study suggests that service users and providers are approaching perceptual congruence, an interesting study might investigate the effect of perceptual congruence on service outcomes from an ethnographic point of view. Such a study could look at service users and providers as two populations, or divide users and providers by university affiliation, asking how the differences in expectations affect perceptions of service. Finally, repeating this assessment over time and at other institutions of similar size or mission, or at service desks with similar staffing models, could reveal interesting patterns of service user expectations. Acknowledgements This study was supported by an Artistic & Scholarly Development Grant from Illinois Wesleyan University. Measuring Perceptual (In)Congruence 373 Appendix A. 2004 Illinois Wesleyan University LibQUAL+ Core Survey Instrument Affect of Service When it comes to… 1. Employees who instill confidence in users. 2. Giving users individual attention. 3. Employees who are consistently courteous. 4. Readiness to respond to users’ questions. 5. Employees who have the knowledge to answer user questions. 6. Employees who deal with users in a caring fashion. 7. Employees who understand the needs of their users. 8. Willingness to help users. 9. Dependability in handling users’ service problems. Information Control When it comes to… 10. Making electronic resources accessible from my home or office. 11. A library website enabling me to locate information on my own. 12. 
The printed library materials I need for my work. 13. The electronic information resources I need. 14. Modern equipment that lets me easily access needed information. 15. Easy-to-use access tools that allow me to find things on my own. 16. Making information easily accessible for independent use. 17. Print and/or electronic journal collections I require for my work. Library as Place When it comes to… 18. Library space that inspires study and learning. 19. Quiet space for individual activities. 20. A comfortable and inviting location. 21. A gateway for study, learning, or research. 22. Community space for group learning and group study. 374 College & Research Libraries March 2017 Appendix B. 2015 Higher Education TechQual+ Core Survey Instrument Connectivity and Access Tell us about your ability to access technology services through the Internet When it comes to… 1. Having an Internet service that operates reliably. 2. Having an Internet service that provides adequate capacity or speed. 3. Having an Internet service that provides adequate Wi-Fi coverage. 4. Having adequate cellular (or mobile) coverage throughout campus. Technology and Collaboration Services Tell us about the quality of Web sites, online services, and technologies for collaboration When it comes to… 5. Having Web sites and online services that are easy to use. 6. Having online services that enhance the teaching and learning experience. 7. Having technology services that allow me to collaborate effectively with others. 8. Having systems that provide timely access to data that informs decision-making. 9. The availability of classrooms or meeting spaces with technology that enhances the teaching and learning experience. Support and Training Tell us about your experiences with those supporting your use of technology services When it comes to… 10. Getting timely resolution of technology problems that I am experiencing. 11. Technology support staff who have the knowledge to answer my questions. 12. Receiving communications regarding technology services that I can understand. 13. Getting access to training or other self-help information that increases my effec- tiveness with technology. Copyright 2015 Timothy M. Chester, All Rights Reserved. Use or administration of this survey or survey items is prohibited unless administered through the Higher Education TechQual+ Project Web site at http://www.techqual.org http://www.techqual.org Measuring Perceptual (In)Congruence 375 APPENDIX C Survey Questions and Coded Variable, as Distributed to Service Users and Service Providers Service User Survey Questions Variable Name* Service Provider Survey Questions When it comes to Help@Ames student employees who instill confidence in me… SWConfidence When it comes to Help@Ames student employees who instill confidence in users… When it comes to Help@Ames student employees who give me individual attention… SWAttention When it comes to Help@Ames student employees who give users individual attention When it comes to Help@Ames student employees who are ready to respond to my questions… SWReady When it comes to Help@Ames student employees who are ready to respond to user questions…. 
When it comes to Help@Ames student employees who have the knowledge and skills to answer my questions… SWKnSkills When it comes to Help@Ames student employees who have the knowledge and skills to answer user questions… When it comes to the ease and speed with which I am connected to a librarian… RefToLib When it comes to questions that should be referred to a librarian… When it comes to the ease and speed with which I am connected to an ITS staff member RefToIT When it comes to questions that should be referred to an ITS staff member… SpeedRefIT** When it comes to the ease and speed with which users are connected to ITS staff… When it comes to Help@Ames student employees who work with me in a caring fashion… SWCaring When it comes to Help@Ames student employees who work with users in a caring fashion… When it comes to Help@Ames student employees who understand my needs… SWUnderstand When it comes to Help@ Ames student employees who understand user needs… When it comes to Help@Ames student employees who are willing to help me… SWWilling When it comes to Help@Ames student employees who are willing to help users… When it comes to Help@ Ames student employees who dependably handle the problems I bring them… SWHandle When it comes to Help@ Ames student employees who dependably handle user problems… When it comes to a comfortable and inviting desk… Desk When it comes to a comfortable and inviting desk… When it comes to getting timely resolution of problems that I am experiencing… Resolution When it comes to Help@Ames student employees providing timely resolution of problems users are experiencing… 376 College & Research Libraries March 2017 APPENDIX C Survey Questions and Coded Variable, as Distributed to Service Users and Service Providers Service User Survey Questions Variable Name* Service Provider Survey Questions When it comes to the availability of online help with using library resources… SWOnLibRes** When it comes to getting access to training or self-help information that increases my effectiveness with library resources… SelfHelpLib** When it comes to the availability of online help with using technology resources… SWOnTech Res** When it comes to getting access to training or self-help information that increases my effectiveness with technology resources… SelfHelpTech** SHLibTech** When it comes to Help@Ames student employees sharing training or other self-help information that increases users effectiveness with the library and technology… When it comes to receiving communication that I can understand… Communication** Please tell us about a positive experience you had with the Help@Ames Desk. Please tell us something you feel the Help@Ames student employees do well. Please tell us about an experience you had at the Help@Ames Desk that you prefer had gone differently. Please tell us something you feel the Help@Ames student employees need to improve upon. Please provide any feedback you would like us to have regarding the Help@Ames Desk. Please provide any additional feedback you would like us to have regarding the Help@Ames Desk. *Each variable has three measures – E (expectation), P (perception), and G (the gap equal to the perception score minus the expectation score). **Question was asked of either service users or service providers, but not both. Measuring Perceptual (In)Congruence 377 APPENDIX D Descriptive Statistics Comparing E, P, And G Scores between Service Providers and Service Users for All Variables Variable Factor Participant Type N Mean Std. Deviation Std. Error Mean Sig. 
(2– tailed) SWConfidence* E User 335 5.62 1.251 .068 .628 Provider 39 5.51 1.520 .243 P User 330 5.21 1.375 .076 .046 Provider 36 4.72 1.386 .231 G User 343 –.48 1.492 .081 .053** Provider 39 –1.15 2.059 .330 SWAttention E User 335 5.74 1.195 .065 .068 Provider 39 6.10 1.021 .163 P User 329 5.45 1.381 .076 .225 Provider 35 5.74 1.245 .210 G User 343 –.38 1.287 .070 .078 Provider 39 –.95 1.919 .307 SWReady* E User 333 5.76 1.177 .065 .675 Provider 39 5.85 1.136 .182 P User 328 5.47 1.285 .071 .044 Provider 35 5.00 1.475 .249 G User 343 –.36 1.422 .077 .005 Provider 39 –1.36 2.032 .325 SWCaring E User 334 5.84 1.187 .065 .230 Provider 38 6.08 .997 .162 P User 327 5.61 1.238 .068 .620 Provider 36 5.72 1.186 .198 G User 343 –.33 1.381 .075 .374 Provider 39 –.64 2.096 .336 SWUnderstand* E User 333 5.71 1.146 .063 .975 Provider 39 5.72 1.395 .223 P User 327 5.35 1.248 .069 .283 Provider 36 5.11 1.190 .198 G User 343 –.45 1.401 .076 .025 Provider 39 –1.00 1.821 .292 SWWilling* E User 339 5.97 1.113 .060 .057** Provider 39 6.33 1.108 .177 P User 336 5.78 1.187 .065 .137 Provider 36 6.08 1.025 .171 G User 343 –.24 1.206 .065 .165 Provider 39 –.72 2.051 .328 378 College & Research Libraries March 2017 APPENDIX D Descriptive Statistics Comparing E, P, And G Scores between Service Providers and Service Users for All Variables Variable Factor Participant Type N Mean Std. Deviation Std. Error Mean Sig. (2– tailed) SWKnSkills* E User 333 5.56 1.256 .069 .018 Provider 39 5.05 1.376 .220 P User 326 5.12 1.362 .075 .046 Provider 35 4.63 1.592 .269 G User 343 –.53 1.634 .088 .191 Provider 39 –.90 1.861 .298 SWHandle* E User 329 5.74 1.204 .066 .821 Provider 39 5.69 1.239 .198 P User 321 5.35 1.298 .072 .200 Provider 35 5.06 1.211 .205 G User 343 –.50 1.609 .087 .019 Provider 39 –1.15 1.967 .315 Desk* E User 337 5.99 1.152 .063 .470 Provider 39 6.13 1.080 .173 P User 336 5.73 1.304 .071 .802 Provider 37 5.68 1.270 .209 G User 343 –.27 1.249 .067 .029 Provider 39 –.74 1.585 .254 Resolution E User 333 5.83 1.147 .063 .220 Provider 39 5.59 1.312 .210 P User 325 5.32 1.400 .078 .146 Provider 37 4.97 1.258 .207 G User 343 –.62 1.680 .091 .369 Provider 39 –.87 1.576 .252 RefToLib* E User 304 5.77 1.212 .070 .164 Provider 36 5.47 1.320 .220 P User 282 5.68 1.287 .077 .028 Provider 33 5.15 1.439 .250 G User 343 –.45 2.035 .110 .475 Provider 39 –.69 2.067 .331 RefToIT E User 301 5.85 1.172 .068 .930 Provider 38 5.87 1.319 .214 P User 274 5.26 1.475 .089 .091 Provider 36 5.69 1.327 .221 G User 343 –.94 2.099 .113 .176 Measuring Perceptual (In)Congruence 379 APPENDIX D Descriptive Statistics Comparing E, P, And G Scores between Service Providers and Service Users for All Variables Variable Factor Participant Type N Mean Std. Deviation Std. Error Mean Sig. 
(2– tailed) Provider 39 –.46 1.790 .287 Communication E User 336 5.93 1.093 .060 N/A Provider N/A N/A N/A N/A P User 330 5.69 1.193 .06 N/A Provider N/A N/A N/A N/A G User 343 –.33 1.315 .071 N/A Provider N/A N/A N/A N/A SWOnLibRes E User 321 5.84 1.192 .067 N/A Provider N/A N/A N/A N/A P User 308 5.54 1.332 .076 N/A Provider N/A N/A N/A N/A G User 343 –.49 1.598 .086 N/A Provider N/A N/A N/A N/A SWOnTechRes E User 316 5.87 1.192 .067 N/A Provider N/A N/A N/A N/A P User 300 5.41 1.350 .078 N/A Provider N/A N/A N/A N/A G User 343 –.67 1.843 .100 N/A Provider N/A N/A N/A N/A SelfHelpLib E User 302 5.78 1.178 .068 N/A Provider N/A N/A N/A N/A P User 283 5.41 1.350 .080 N/A Provider N/A N/A N/A N/A G User 343 –.63 1.840 .099 N/A Provider N/A N/A N/A N/A SelfHelpTech E User 307 5.76 1.247 .071 N/A Provider N/A N/A N/A N/A P User 292 5.41 1.368 .080 N/A Provider N/A N/A N/A N/A G User 343 –.55 1.773 .096 N/A Provider N/A N/A N/A N/A SHLibTech E User N/A N/A N/A N/A N/A Provider 37 5.54 .900 .148 P User N/A N/A N/A N/A N/A Provider 34 5.09 .900 .148 380 College & Research Libraries March 2017 APPENDIX D Descriptive Statistics Comparing E, P, And G Scores between Service Providers and Service Users for All Variables Variable Factor Participant Type N Mean Std. Deviation Std. Error Mean Sig. (2– tailed) G User N/A N/A N/A N/A N/A Provider 39 –.82 1.876 .300 SpeedRefIT E User N/A N/A N/A N/A N/A Provider 37 5.70 1.288 .212 P User N/A N/A N/A N/A N/A Provider 35 4.77 1.477 .250 G User N/A N/A N/A N/A N/A Provider 39 –1.13 1.838 .294 *Shaded cells indicate that there is a statistically significant difference between service user and service provider responses (P<0.05). **Difference between service providers and service users is approaching significance. Notes 1. Candice Benjes-Small and Elizabeth Kocevar-Weidinger, “Secrets to Successful Mystery Shopping: A Case Study,” College & Research Libraries News 72, no. 5 (May 2011): 274–87. 2. Lynda Duke and Andrew Asher, College Libraries and Student Culture: What We Now Know (Chicago: American Library Association, 2012). 3. Robert Burns, Jr., “A Survey of User Attitudes toward Selected Services Offered by the Colorado State University Libraries,” Fort Collins Libraries, Colorado State University (1973); Wyma Jane Hood and Monte James Gittings, “Evaluation of Service at the General Reference Desk, University of Oregon Library,” Oregon University, Eugene School of Librarianship (1975). 4. Carolyn Jardine, “Maybe the 55% Rule Doesn’t Tell the Whole Story: A User-Satisfaction Survey,” College & Research Libraries 56, no. 6 (Nov. 1995): 477–86. 5. Julie Banks and Carl Pracht, “Reference Desk Staffing Trends: A Survey,” Reference & User Services Quarterly 48, no. 1 (2008): 54–59; Laura Boyer and William Theimer, Jr., “The Use and Training of Nonprofessional Personnel at Reference Desks in Selected College and University Libraries,” College & Research Libraries 36, no. 3 (May 1975): 193–200; Andrea Stanfield and Rus- sell Palmer, “Peer-ing into the Information Commons: Making the Most of Student Assistants in New Library Spaces,” Reference Services Review 38, no. 4 (2011): 634–46; Beth Woodard, “The Effectiveness of an Information Desk Staffed by Graduate Students and Nonprofessionals,” Col- lege & Research Libraries 50, no. 4 (July 1989): 455–67. 6. Rachel Applegate, “Whose Decline? Which Academic Libraries Are “Deserted” in Terms of Reference Transactions?” Reference & User Services Quarterly 48, no. 2 (2008): 176–89. 7. 
Donald Beagle, “The Emergent Information Commons: Philosophy, Models and 21st Century Learning Paradigms,” Journal of Library Administration 52, no. 6 (2012): 518–37; Stacey Kimmel-Smith, “Ten Years After: The Integrated Computing and Library Help Desk at Lehigh University,” Internet Reference Services Quarterly 11, no. 3 (2006): 35–55. 8. www.educause.edu/. 9. www.educause.edu/ecar. 10. Ian Hall, Jessica Stephens, and Sarah Kennedy, “Can You Measure IT? The UK Experience of TechQual+,” Performance Measurement & Metrics 15, no. 1/2 (2014): 32–40; Gail Salaway, Judith Borreson Caruso, and Mark Nelson, “The ECAR Study of Undergraduate Students and Information Technology, 2007,” EDUCAUSE Center for Applied Research (2007), available online at https://net. educause.edu/ir/library/pdf/ers0706/rs/ERS0706w.pdf [accessed 2 February 2016]; Gail Salaway, Judith Borreson Caruso, and Mark Nelson, “The ECAR Study of Undergraduate Students and Information Technology, 2008,” EDUCAUSE Center for Applied Research (2008), available online at https://net.educause.edu/ir/library/pdf/ERS0808/RS/ERS0808w.pdf [accessed 2 February 2016]. 11. William Doll, Xiaodong Deng, T.S. Raghunathan, Gholamreza Torkzadeh, and Weidong http://www.educause.edu https://www.educause.edu/ecar https://net.educause.edu/ir/library/pdf/ers0706/rs/ERS0706w.pdf https://net.educause.edu/ir/library/pdf/ers0706/rs/ERS0706w.pdf https://net.educause.edu/ir/library/pdf/ERS0808/RS/ERS0808w.pdf Measuring Perceptual (In)Congruence 381 Xia, “The Meaning and Measurement of User Satisfaction: A Multigroup Invariance Analysis of the End-User Computing Satisfaction Instrument,” Journal of Management Information Systems 21, no. 1 (Summer 2004): 227–62; James McKeen, Tor Guimaraes, and James Wetherbe, “The Rela- tionship between User Participation and User Satisfaction: An Investigation of Four Contingency Factors,” MIS Quarterly 18, no. 4 (Dec. 1994): 427–51. 12. Timothy Chester, “History: Designed by Practitioners, Informed by Research, Built for Higher Education,” Higher Education TechQual+ Project: Assessment, Planning, and Continuous Im- provement Tools for IT Organizations in Higher Education, available online at https://www.techqual. org/docs/about.aspx [accessed 31 August 2015]; Bruce Thompson, Colleen Cook, and Fred Heath, “The LibQUAL+ Gap Measurement Model: The Bad, the Ugly, and the Good of Gap Measure- ment,” Performance Measurement and Metrics 1, no. 3 (2000): 165–78. 13. James Jiang, Gary Klein, Debbie Tesch, and Hong-Gee Chen, “Closing the User and Provider Service Quality Gap: A Method for Measuring Service Quality That Includes Both the User and IS Service Provider Perspectives,” Communications of the ACM 46, no. 2 (Feb. 2003): 72–76. 14. Jiang, Klein, Tesch, Chen, “Closing the User and Provider Service Quality Gap,” 73. 15. “General Information,” LibQUAL+: Charting Library Service Quality, available online at https://www.libqual.org/about/about_lq/general_info [accessed 31 August 2015]. 16. Colleen Cook, Fred Heath, and Bruce Thompson, “Score Norms for Improving Library Service Quality: A LibQUAL+ Study,” portal: Libraries and the Academy 2, no. 1 (Jan. 2002): 13–26; Fred Heath, Colleen Cook, Martha Kyrillidou, and Bruce Thompson, “ARL Index and Other Va- lidity Correlates of LibQUAL+ Scores,” portal: Libraries and the Academy 2, no. 1 (Jan. 
2002): 27–42; Bruce Thompson, Colleen Cook, and Russel Thompson, “Reliability and Structure of LibQUAL+ Scores: Measuring Perceived Library Service Quality,” portal: Libraries and the Academy 2, no. 1 (Jan. 2002): 3–12. 17. Jeffrey Edwards, “The Study of Congruence in Organizational Behavior Research: Critique and a Proposed Alternative,” Organizational Behavior and Human Decision Processes 58 (1994): 51–100. 18. Irene Chew and Albert Teo, “Perceptual Differences between Recruiters and Students on the Importance of Applicant and Job Characteristics: A Research Note Based on Evidence from Singapore,” International Journal of Human Resource Management 4, no. 1 (Feb. 1993): 231–40; Mel Schnake, “Effects of Differences in Superior and Subordinate Perceptions of Superiors’ Commu- nication Practices,” Journal of Business Communication 27, no. 1 (Winter 1990): 37–50. 19. Alexander Benlian, “Effect Mechanisms of Perceptual Congruence between Information Systems Professionals and Users on Satisfaction with Service,” Journal of Management Information Systems 29, no. 4 (Spring 2013): 63–96; Jiang, Klein, Tesch, Chen, “Closing the User and Provider Service Quality Gap.” 20. Benlian, “Effect Mechanisms of Perceptual Congruence,” 66. 21. Thompson, Cook, Heath, “The LibQUAL+ Gap Measurement Model.” 22. Colleen Cook, Fred Heath, and Bruce Thompson, “LibQUAL+, Spring 2004 Survey: Illinois Wesleyan University.” 23. Brian Detlor and Kathryn Ball, “Getting More Value from the LibQUAL+ Survey: The Merits of Qualitative Analysis and Importance-Satisfaction Matrices in Assessing Library Patron Comments,” College & Research Libraries 76, no. 6 (Sept. 2015): 796–810. 24. Crystal Boyce, “Secret Shopping as User Experience Assessment Tool,” Public Services Quarterly 11, no. 4 (Autumn 2015). https://www.techqual.org/docs/about.aspx https://www.techqual.org/docs/about.aspx https://www.libqual.org/about/about_lq/general_info