Changes in Reference Question Complexity Following the Implementation of a Proactive Chat System: Implications for Practice

Krisellen Maloney and Jan H. Kemp

Krisellen Maloney is Vice President for Information Services and University Librarian at Rutgers, The State University of New Jersey, e-mail: krisellen.maloney@rutgers.edu; Jan H. Kemp is Assistant Dean for Public Services at the University of Texas at San Antonio, e-mail: Jan.Kemp@utsa.edu. ©2015 Krisellen Maloney and Jan H. Kemp, Attribution-NonCommercial (http://creativecommons.org/licenses/by-nc/3.0/) CC BY-NC. doi:10.5860/crl.76.7.959

There has been longstanding debate about whether the level of complexity of questions received at reference desks and via online chat services requires a librarian’s expertise. Continued decreases in the number and complexity of reference questions have all but ended the debate; many academic libraries no longer staff service points with professional librarians. However, convenient, proactive online chat services could reverse the trends. This paper provides results of a study of reference question complexity following implementation of a proactive chat service. The study reveals changes in the complexity of chat questions that may have implications for staffing online reference services.

Coincident with the rise of information literacy efforts and the simplification of the online environment, there has been a dramatic decrease in reference questions. According to ACRL’s Academic Library Statistics, the number of reference transactions in doctorate-granting institutions declined 49 percent between 2000 and 2012, and nearly all the questions that remain are directional, basic search questions and questions related to library operations.1

These point-of-need questions, regardless of the medium over which they are conducted, served multiple purposes. At the most basic level, they provided support for the mechanics of library research, helping users navigate multiple information silos and the physical organization of the library. Although, on the surface, support for these questions related to the mechanics of library research, they also provided an opportunity for broad research support. The librarian had sufficient knowledge of the curriculum, the publication patterns of disciplines and subjects, and thesis and topic development to provide broad research support. In the case of student researchers, the librarian had the opportunity to reach the student at a teachable moment, reinforcing course content, guiding the student in formulating a research topic that was practical and appropriate for the given assignment, and then providing guidance in evaluating sources. The components of the reference transaction—support for the mechanical aspects of library research and the broader research expertise—were so tightly interwoven that many librarians assumed that, if information literacy instruction were integrated into the curriculum, search mechanics were simplified, and the physical barriers to access were removed in the online environment, the reference transaction would no longer be required.2

The decrease in questions related to physical and mechanical barriers is understandable, but what has happened to the more complex questions, the questions requiring broader research expertise that previously had been an integral part of the reference transaction?
This question is especially important as it relates to students who are learning the academic research process. Professors note that, even with the abundance of information available and the development of information literacy programs, students seem to have more difficulty formulating research topics and finding appropriate sources.3 Is it possible that the advice and support that were provided at key points in the research process had an important role in student learning beyond that of the mechanics of the search?

At the University Libraries, we believed students and faculty still had important, advanced reference questions, even though the number of reference questions had declined. As a result, we implemented a new chat system that was developed for use by online businesses. The new chat service provides a box on all library pages, allowing users to immediately type their questions. This positioned the chat reference service at the center of the user’s research space, much like the traditional reference desk. The chat service is even more engaging to the user than a reference desk, however, because it is configured to proactively offer assistance based on a set of predetermined criteria.

Before the release of the new chat system, we received approximately seven chat questions per day. The first day after implementing the system, July 23, 2013, we received 43 questions. The following month, August 2013—typically a slow month for reference transactions—we received 444 chat questions; in September we received 1,440; and in October we received 1,791 questions through the chat service alone. Shortly after the implementation, we realized that, in addition to the increased volume, we were receiving many more complex chat questions than were received at the reference desk. The libraries had moved to a tiered reference model in 2009, with primarily nonprofessionals staffing all service points. Now, because of the explosive growth in the number of complex questions, we were apparently facing a different environment. We needed a clearer understanding of the purpose of the reference transaction and the level of expertise required to effectively staff the new chat service.

Literature Review

Who should staff the reference desk? This question has framed a longstanding debate related to the complexity of questions and cost-effectiveness of librarians staffing service desks and, more recently, chat services. Throughout the first half of the twentieth century, the librarian at the reference desk was the established service model in academic libraries. However, major shifts in higher education, technology, and, most recently, in user expectations prompted librarians to reevaluate the traditional service model. A key change in these new models was the introduction of nonprofessional staff—and, in some cases, students—to provide direct reference services. This literature review includes a comparison of major studies that present empirical findings related to the types of questions received at service points and discusses question complexity in relation to staffing level. With each of these studies, we are particularly interested in how researchers measured the complexity of questions, how question complexity changed over time, and the impact those changes had on recommendations for reference desk and chat staffing.
The review of seven published studies shows that varying classification schemes have been used to codify the types and complexity of questions; however, most classification schemes were based on the approach proposed by Katz4 that categorized questions by assessing the collection knowledge and time required to answer the question. The basic assumption was that questions requiring deeper collection knowledge and more time to answer were more complex. Two classification schemes employed in the studies we reviewed had foundations other than the criteria proposed by Katz. Warner proposed a classification scheme based on the type of effort, whether skill or strategy-based, required to answer a question.5 Skill-based questions are closely tied to the mechanics of the research and often require a demonstration to answer the question. Strategy-based questions require more expertise related to subject-specific resources and the research process. Ryan proposed a classification scheme similar to Katz but strongly informed by the number of resources required to answer a question, with the assumption that the more resources consulted for a question, the higher the level of expertise required.6

Although these classification schemes categorized questions based on difficulty, none explicitly connected question type to staffing level. In his discussion on the topic, Katz held firmly to the belief that all questions were best answered by librarians; however, in later years he conceded that “(1) the majority of queries are directional or ready-reference pure and simple; (2) generally, the queries and sources used are basic and easy to understand; and (3) most questions, therefore, could be answered by a well-trained person with a bachelor’s degree.”7

There are recognized issues related to the differences between classification schemes.8 To overcome these issues and provide a common frame of reference to compare the question complexity observed in the studies, we developed a matrix that explicitly tied the researchers’ recommendations of staffing level to question types. These mappings, although not appropriate for all purposes, supported our analysis of general trends. The recommendations fell into three general staffing levels, which we have labeled as “Nonprofessional,” including paraprofessionals and students; “Generalist,” including highly trained paraprofessionals and librarians; and “Librarian.” Although the term generalist has most often been used to describe a librarian without specific subject-based expertise, we decided to follow the vocabulary suggested by Bracke et al.,9 using the term to designate the group composed of both librarians and well-trained paraprofessionals who provide basic reference support. Table 1 provides an overview of the classification schemes used in the studies, combined with the recommended staffing levels.10

TABLE 1
Classification Schemes Mapped to Recommended Staffing Levels: Nonprofessional (nonprofessional or student), Generalist (librarian or nonprofessional), and Librarian

Omaha Classification. Based on Katz but relies heavily on the type of resource used to answer the question; uses time as a measure of complexity. (Saint Clair, Aluri, and Pastine 1977)
- Nonprofessional: Directional and Instructional. Directions to known items, how to locate items in the library, how to use the catalog, how to use an index, and support for library-oriented assignments.
- Generalist: Some Reference. Ready reference, books on a topic, questions that use materials in vertical files, questions that are repetitive from semester to semester. (Nonprofessional with referral)
- Librarian: Some and Extended Reference. Require an interview and generally require more than 5 minutes to answer.

Warner Scale. Based primarily on the skill level required to answer the question, although time is a factor considered. (Warner 2001; Henry and Neville 2008)
- Nonprofessional: Nonresource & Skill-based. Do not require a resource to answer and might be answered by a sign or help sheet; may be answered by a demonstration or a well-developed set of directions. The same question gets the same answer every time.
- Generalist: Strategy-based. Require the formulation of a strategy and may require individualized subject approaches. (Nonprofessional with referral)
- Librarian: Consultation. Typically longer and more complex.

Ryan Classification. Based primarily on the number and type of resources used to answer a question. (Ryan 2008)
- Nonprofessional: Directional, Technology, and Lookup. Giving directions, quick Internet searches, technology, and quick lookups in the catalog.
- Generalist: Some Reference Questions (involving few resources). Determine if the library owns a journal, answers that can be provided with personal knowledge alone, guide to correct databases, help searching the catalog, help with citations, help searching databases. (Nonprofessional)
- Librarian: Some Reference Questions. Deemed potentially complicated enough to be referred to a librarian; answered with 0–17 sources.

READ Scale. Based on Katz; uses the knowledge, skills, and expertise required to answer questions. (Gerlich and Berard 2010; Ward and Phetteplace 2012)
- Nonprofessional: Levels 1–2. Require minimal knowledge, skills, and expertise: directional inquiries, call number inquiries, item location, minor computer help, general library or policy information.
- Generalist: Level 3. Require some time and effort; require specific reference resources, basic instruction on searching the online catalog, direction to relevant subject databases, more complex technical problems. (Gerlich and Berard: no recommendation; Ward: Librarian)
- Librarian: Levels 4–6. Reference knowledge and skills needed: complex searches, services outside reference, consultation, more cooperative in nature, “false leads,” interdisciplinary research, graduate research; primary documents may be used.

GVSU Categories. Based on Katz but primarily uses the skill level required to answer the question, although resources used are a factor. (Bravender, Lyon, and Molaro 2012)
- Nonprofessional: Directional, Technical, or Policy. Can be answered without library resources, including computer-skill–based questions.
- Generalist: Ready Reference & Citations. Can be answered with one or two facts or with other brief information, usually with reference to library resources; related to citation formatting and bibliographic management software. (Nonprofessional)
- Librarian: Reference. Require the development of a strategy.

Using this expertise-based matrix, patterns emerge in the types of questions deemed appropriate for different staffing levels.

• “Nonprofessional” questions are simple directional, technical, and policy questions. The Warner classification includes questions that are based on skill and are answered the same way each time, the type of question that could easily be answered with a handout.
These questions do not involve advanced expertise, and some are basic enough to be classified as directional questions based on the RUSA definition.11 There was agreement in the studies reviewed that this level of question did not require the expertise of a librarian.

• “Generalist” questions include simple reference questions such as searching the catalog and databases. These questions require knowledge of the organization of the library and the use of multiple tools. In the studies we reviewed, there was no agreement on the appropriate staffing level for these questions. Most researchers made the case that well-trained nonprofessionals could adequately answer or refer these questions.12 The exceptions were the studies based on the READ Scale published by Ward,13 who designated this level as being appropriate for librarians, and Gerlich et al., who made no explicit recommendation.14

• “Librarian” questions require advanced expertise, either advanced subject knowledge or expertise related to the research process, including formulating research questions and developing paper topics. These questions are often poorly defined, requiring some discussion to uncover the true question. There was complete agreement among the studies that these questions require the expertise of a librarian.

The generalist questions are the most difficult to assign uniformly to a staffing level and also pose the greatest challenges for developing cost-effective reference staffing models. The problem, as Katz points out, is that “Often the simple questions can develop into complex ones requiring professional aid.”15 Halldorsson estimated that 20 to 25 percent of questions posed during a reference transaction may not represent the user’s actual information need.16 In a study he conducted to compare librarians and nonprofessionals, he found that nonprofessionals were significantly more likely to answer questions as presented rather than probe for the actual information need and often failed to refer questions appropriately.
As a further illustration of the professional nature of the reference interview, Nordlie found that more than 60 percent of users change their topic during the interaction, underscoring the importance of broad research expertise in addressing this type of reference question.17 While nonprofessionals can be trained in the use of specialized information resources, it is difficult for them to develop the academic context and research expertise necessary to answer some reference questions through training alone.18

These apparently simple yet potentially complex reference questions may also provide one of the most important teaching opportunities in the academy.19 The reference interview allows the librarian to understand where a student is in the research process and provide information that is specifically tailored to the student’s learning need.20 Kuhlthau describes the librarian’s opportunity for assisting students with research questions as a “zone of intervention,” explaining that “The zone of intervention is that area in which an information user can do with advice and assistance what he or she cannot do alone or can do only with great difficulty.”21

Several studies have examined the possibility of having nonprofessional staff work more independently, but results consistently showed high error rates for answers and for appropriate referrals.22 Halldorsson found that, in most of the unreferred cases, “the nonprofessional apparently did not refer because they did not detect the faulty information.”23 This failure to negotiate and understand the information need is frequently cited as a primary cause for error in the reference transaction.24

Research Questions

To better understand the changing nature of chat reference and the implications it might have for staffing, we conducted a study that addressed the following questions:

1. Has the complexity of questions received at service points changed over time?
2. Are the questions received via chat systems more complex than those received at reference desks?
3. Does a proactive chat system increase the number and complexity of questions?

The study was conducted in two parts. In the first part, we conducted a meta-analysis of data reported in published studies. The second part of the study included a direct analysis of data gathered at the libraries’ service points.

Methodology

Our literature review uncovered seven studies published between 1977 and 2012 that reported data on the complexity of questions received during reference transactions. Table 1, presented earlier in this paper, provides a means to compare questions from different time periods and classification schemes; it is the basis for our analysis of research questions 1 and 2. The columns of the matrix, “Nonprofessional,” “Generalist,” and “Librarian,” correspond with the level of staff required to appropriately respond to the question. To provide a further level of analysis, we categorized the questions by the type of service point where the question was received, either Desk or Chat, and reported these results separately.

For research question 3, we used the READ Scale to categorize the complexity of the questions received at our reference desks and through online chat.25 Table 1 provides an overview of the READ Scale along with other classification schemes that had been used to measure question complexity.
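To make this kind of coding and tallying concrete, the sketch below shows how coded transactions might be tabulated into weekly READ-level distributions (the form reported later in table 2) and how the simple percent-disagreement figure used in the calibration step described next can be computed. The records and function names are hypothetical illustrations, not the libraries’ actual workflow; the real questions were logged in Springshare LibAnswers and coded by hand.

```python
from collections import Counter

# Hypothetical coded transactions as (week label, READ level 1-6) pairs.
# In the study itself, questions were logged in Springshare LibAnswers
# and coded by hand; this sketch only illustrates the bookkeeping.
coded = [
    ("Sep '13", 2), ("Sep '13", 3), ("Sep '13", 3), ("Sep '13", 4),
    ("Oct '13", 1), ("Oct '13", 2), ("Oct '13", 3),
]

def weekly_distribution(records):
    """Tally questions per READ level for each week and print percentages."""
    weeks = {}
    for week, level in records:
        weeks.setdefault(week, Counter())[level] += 1
    for week, counts in weeks.items():
        total = sum(counts.values())
        row = ", ".join(f"L{lvl}: {counts[lvl] / total:.0%}" for lvl in range(1, 7))
        print(f"{week} (n={total}): {row}")

def percent_disagreement(codes_a, codes_b):
    """Share of questions that two coders assigned to different READ levels."""
    differing = sum(1 for a, b in zip(codes_a, codes_b, strict=True) if a != b)
    return differing / len(codes_a)

weekly_distribution(coded)
# The calibration week in the study showed < 3% disagreement, that is,
# the equivalent of: percent_disagreement(coder1, coder2) < 0.03
```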
We selected the READ Scale because it was used as a tool for analysis in the largest and most recent studies we reviewed, improving our ability to compare and generalize results.26 Two independent evaluators coded the questions based on the six levels of question difficulty described in the READ Scale. To calibrate the coding and reduce variability between the evaluators, the evaluators independently coded questions for the first week selected for the study. Results were compared and showed less than 3 percent disagreement in coding. For each of these cases, the question was discussed; and, in all cases, the evaluators came to agreement. The remaining questions were coded independently.

All chat questions from six one-week periods during the fall 2013 and spring 2014 semesters were included in the analysis. One week per month was selected in September, October, November, February, March, and April to provide a representative sample. In addition, all questions from the reference desks for three of these same weeks were analyzed. As a matter of regular library practice, desk questions were entered by staff using Springshare LibAnswers.

Results and Discussion

The results for each of the three research questions are presented and discussed in this section.

Research Question 1: Has the complexity of questions received at service points changed over time?

The analysis of the results for research question 1 demonstrated that the complexity of questions had changed over time. Figures 1 and 2 provide a summary of the studies reviewed. The results are reported as percentages so they can be easily compared, and they are listed chronologically by the date of publication. The number of questions analyzed and the year of publication are included in the chart title.

Figure 1 shows a comparison of the types of questions received at the reference desk. The frequently cited study conducted by Saint Clair et al. stands as an outlier in the results, reporting that 62 percent of the questions asked at the desk could be addressed by nonprofessional staff, 32 percent of the questions could be answered by a well-trained paraprofessional, and only 6 percent required librarian-level expertise.27 This study was conducted before reference tools and resources moved online, when the librarian was an essential part of the library research process. During this time, the frequency of simple questions was lower; but the frequency of generalist questions, those deceptively simple questions posing the biggest problems for nonprofessional staff, represented 32 percent of the questions received at the desk.

The remaining results from studies published after 2001, when library information tools including the catalog, databases, and most scholarly journals had moved online, show similar patterns of question difficulty, with simple questions comprising between 74 and 90 percent of all questions received. Although studies that employ the Warner classification method show slightly higher rates of simple questions, the differences would not be likely to have an operational impact. A small number of questions, with percentages ranging from 12 to 16 percent, required a generalist, and an even smaller number, ranging from 0 to 11 percent, required the expertise of a librarian.
FIGURE 1
Desk Questions by Recommended Staffing Level (percentage of questions)
                                   Nonprofessional   Generalist   Librarian
Saint Clair (1977), n=5,588              62%             32%          6%
Warner (2001), n=14,080                  86%             12%          2%
Ryan (2008), n=6,959                     74%             15%         11%
Henry (2008), n=5,572                    90%             10%          0%
Gerlich (2010) 3-week, n=7,652           78%             15%          7%
Gerlich (2010) 15-week, n=12,024         78%             16%          6%

To appropriately staff desks for this new, less complex mix of questions, academic libraries developed differentiated service models where various services such as multimedia and government document support would only be offered at service points where nonprofessional staff had been appropriately trained to handle the narrow range of questions associated with the service.28 The gradual removal of librarians from ancillary service desks paved the way for the tiered reference model, where librarians at the reference desk were replaced by nonprofessionals who answered basic directional and reference questions and referred the difficult questions to a librarian.29

Removing the librarian from the desk was a difficult decision that was not made lightly; there was evidence that training nonprofessional staff to appropriately refer questions was problematic.30 However, the number of questions requiring the expertise of a librarian had dropped so significantly that the small risk that a question would not be appropriately answered or referred could not be a factor that was weighted heavily in service decisions.31 There was never a question that help at the point of need was a valuable part of the educational process, but students were not asking as many difficult questions, and the librarians’ time could be better used in other areas.

As the number of questions at service desks declined, librarians added chat reference services and conducted studies to understand the staffing requirements for these services.32 The results, shown in figure 2, indicate more variation in the mix of questions between studies, with the largest source of variation being the study conducted by Ward and Phetteplace, which found higher rates of simple nonprofessional questions than other studies.33 Generalist questions ranged from 30 to 47 percent of all questions, and 10 to 23 percent of the questions required the expertise of a librarian.

Although these findings suggest a new pattern of questions with chat reference, due to the low volume of questions the findings did not change the researchers’ staffing recommendations in the studies we reviewed. Despite the finding that nearly one quarter of the questions received in the virtual environment required the expertise of a librarian (the highest percentage of any study reviewed), Bravender recommended that chat should be staffed by nonprofessionals because of low question volume.34 This point of view is not uncommon. Other libraries have chosen to staff virtual reference with nonprofessionals or even to discontinue chat reference service due to low use. In a 2006 multiple-case study, Radford and Kern reported on the discontinuance of nine chat reference services, with low volume being the most frequent reason for discontinuation.35 The exception to the low-use finding came from Ward and Phetteplace, who reported that chat reference had become the dominant service point in their library, with both a high volume of questions and high frequencies of generalist and librarian questions.
As a result, despite having the highest percentage of nonprofessional questions, their library continued to staff many reference service points with librarians.36

In the studies we reviewed where a recommendation for staffing was made, decisions were based on the need to use the expertise of librarians in the most efficient and cost-effective manner possible. It was clear that nonprofessional staff could be trained to answer questions related to search and library use. However, there was evidence that some reference questions, especially those posed as generalist questions but related to developing research topics, were problematic for nonprofessional staff. There was an acknowledged risk that nonprofessional staff might miss more complex questions related to the broader research context and might also miss the opportunity to support learning at a teachable moment. However, librarians took these risks because the number of questions requiring specialized expertise was so low that the cost of correctly addressing the questions could not be justified.

Research Question 2: Are the questions received via chat systems more complex than those received at reference desks?

By differentiating between questions asked at the reference desk and questions asked via chat, we see evidence that users do ask more complex questions via chat.

FIGURE 2
Chat Questions by Recommended Staffing Level (percentage of questions)
                                 Nonprofessional   Generalist   Librarian
Gerlich (2010) 3-week, n=98            33%             45%         22%
Gerlich (2010) 15-week, n=317          30%             47%         23%
Bravender (2011), n=1,476              34%             43%         23%
Ward (2012), n=3,267                   60%             30%         10%

The trend of declining question complexity does not appear to hold true in the virtual environment. Figure 3 displays the weighted average frequency of questions received by staffing level. Within each of the staffing levels, the studies are grouped and averaged for comparison.37 The first group, labeled “1977,” contains only the study conducted by Saint Clair. This study is of particular interest because it was conducted in what could be considered the golden age of the reference desk, before the widespread adoption of online search tools and resources. The second group, labeled “Desk,” is a weighted average of the remaining studies included in figure 1 that report the findings for questions received at the reference desk. The third group, labeled “Chat,” is an average of the studies included in figure 2 that report the complexity of questions received via chat.

It is interesting to note that the overall pattern for chat services is similar to the pattern reported in 1977, with a lower percentage of nonprofessional questions (49%) and higher percentages of questions requiring the skills of a generalist (36%) or librarian (15%). The desk questions reflect the pattern that is frequently associated with questions at the point of need, with a very high percentage of nonprofessional questions (81%) and low percentages of generalist (13%) and librarian (5%) questions. Although the underlying differences in the designs of the studies make it impossible to use formal analyses to detect statistical differences, the results suggest that users tend to ask more complex questions when using chat services.
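As footnote 37 explains, the “Desk” and “Chat” groups are averages weighted by sample size. A minimal sketch of that computation follows, rerun from the rounded per-study percentages in figures 1 and 2; because the published figure 3 was presumably computed from unrounded counts, a value here may differ from it by a percentage point.

```python
# Per-study results as (sample size, % nonprofessional, % generalist, % librarian),
# taken from figures 1 and 2. Saint Clair (1977) is reported separately as "1977".
desk_studies = [
    (14080, 86, 12, 2),   # Warner (2001)
    (6959, 74, 15, 11),   # Ryan (2008)
    (5572, 90, 10, 0),    # Henry (2008)
    (7652, 78, 15, 7),    # Gerlich (2010), 3-week
    (12024, 78, 16, 6),   # Gerlich (2010), 15-week
]
chat_studies = [
    (98, 33, 45, 22),     # Gerlich (2010), 3-week
    (317, 30, 47, 23),    # Gerlich (2010), 15-week
    (1476, 34, 43, 23),   # Bravender (2011)
    (3267, 60, 30, 10),   # Ward (2012)
]

def weighted_average(studies):
    """Average each staffing-level percentage, weighted by sample size."""
    total_n = sum(n for n, *_ in studies)
    return tuple(
        round(sum(n * pct[i] for n, *pct in studies) / total_n)
        for i in range(3)
    )

print("Desk:", weighted_average(desk_studies))  # approx. (81, 14, 5); figure 3 reports 81/13/5
print("Chat:", weighted_average(chat_studies))  # approx. (50, 35, 15); figure 3 reports 49/36/15
```

Weighting by sample size keeps the small chat samples (n=98 and n=317) from counting as much as the 3,267 transactions in the Ward study.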
FIGURE 3
Comparison of Questions by Recommended Staffing Levels: 1977, Desk, and Chat (averages weighted by sample size)
            Nonprofessional   Generalist   Librarian
1977             62%              32%          6%
Desk             81%              13%          5%
Chat             49%              36%         15%

Research conducted by Connaway, Dickey, and Radford on users’ preferred modes of reference service has demonstrated user preference for chat over other reference modes, principally due to its convenience.38 The results reported in figure 3 support their findings and further suggest that offering assistance in a convenient way may positively influence a user’s willingness to ask for more advanced research help. When online reference assistance was placed directly in the users’ research workflow, the mix of questions was similar to that seen in 1977, when the reference desk was central to the research process.

Research Question 3: Does a proactive chat system increase the number and complexity of questions?

The results of coding six weeks of data from the proactive chat system provide the basis of our analysis for research question 3. Table 2 shows the results for each week, with questions categorized by READ Scale level.

TABLE 2
Weekly Results with Questions Categorized Using the READ Scale
READ Level        1         2          3          4         5       6      Total
September ’13  31 (8%)   83 (21%)  181 (46%)   86 (22%)  9 (2%)  0 (0%)    390
October ’13    16 (4%)  121 (33%)  139 (38%)   85 (23%)  6 (2%)  0 (0%)    367
November ’13   13 (2%)  131 (25%)  219 (42%)  153 (29%)  4 (1%)  2 (0%)    522
February ’14    3 (1%)  144 (35%)  152 (37%)  113 (27%)  0 (0%)  0 (0%)    412
March ’14       3 (1%)  127 (35%)  141 (39%)   87 (24%)  0 (0%)  0 (0%)    358
April ’14      19 (4%)  158 (36%)  136 (31%)  130 (29%)  0 (0%)  0 (0%)    443
Total          85 (3%)  764 (31%)  968 (39%)  654 (26%) 19 (1%)  2 (0%)  2,492

The weekly results indicate both heavy use and a consistently low percentage of “Level 1” questions, which average only 4 percent in the proactive environment. “Level 2” questions account for approximately one third (30%) of questions, including questions such as known item searches and sending patrons the instructions to install a printer driver for the campus printing system. “Level 3” reference questions, those questions posing the thorniest problems for reference staffing models, make up 39 percent of the questions. Common “Level 3” questions include finding peer-reviewed resources and formulating searches. Questions categorized as “Level 4” and above require advanced expertise such as finding business datasets and using specialized databases; these make up 27 percent of the questions received.

To put the results into context, we conducted two additional analyses. We first compared the chat results to the questions received at the reference desk. Figure 4 presents the results of this analysis, comparing three representative weeks of reference desk questions to chat questions received during the same weeks. Due to the low numbers of READ category 6 questions, categories 5 and 6 were combined before the two samples were compared. Twenty-one percent of chat questions were categorized as “Level 4” or above, compared to only 1 percent of the questions received at the desk. “Level 3” questions were also more frequent in chat; 44 percent of questions received via chat fell into this category, compared with 15 percent of questions received at the desk.
Only 13 percent of the chat questions were identified as basic “Level 1” questions, compared to 52 percent of questions asked at the reference desk.

FIGURE 4
Complexity of Questions: READ Scale Analysis (percentage of questions)
         Level 1   Level 2   Level 3   Level 4   Level 5
Desk       52%       32%       15%        1%        0%
Chat       13%       22%       44%       19%        2%

In this paper we have focused primarily on the mix of question types. It is also worth noting the extremely high volume of questions received following implementation of the proactive chat service. In the weeks reported in figure 4, 977 questions were received at the reference desk and 740 were received via chat. Since that time, chat volume has surpassed the number of questions asked at the reference desk to become the libraries’ predominant service point. The chat service receives an average of eight questions an hour during the regular semester and has resulted in a 40 percent increase overall in the number of reference questions received.

To provide context beyond the local environment, we have mapped the percentages of questions received via the “Proactive” chat system to the “1977,” “Desk,” and “Chat” weighted averages reported in figure 3; figure 5 shows the comparison. Only approximately 30 percent of questions received via proactive chat were nonprofessional in nature, less than half the percentage of nonprofessional questions received in “1977,” when the librarian was an integral part of library research, and lower than the percentages reported for the “Desk” or “Chat.” With “Proactive” chat, 67 percent of questions were reference questions appropriate for a generalist or librarian, the highest of all the groups we analyzed. In the proactive environment, 40 percent of the questions were generalist questions and 27 percent required the expertise of a librarian.

FIGURE 5
Comparison of Questions by Recommended Staffing Levels: 1977, Desk, Chat, and Proactive Chat (averages weighted by sample size)
              Nonprofessional   Generalist   Librarian
1977               62%              32%          6%
Desk               81%              13%          5%
Chat               49%              36%         15%
Proactive          33%              40%         27%

These findings further illustrate a trend seen in previous virtual reference studies highlighting the importance of convenience.39 It appears that if students can easily ask questions online at the point when they are becoming involved in the research process, many will ask. The immediacy of chat reference enables the researcher to easily contact the librarian while actively involved in research—at a time and place that is convenient for the researcher.

In addition, online chat may be popular (and less threatening) because it makes the service interaction available to users in a way that is culturally more familiar and inviting, providing assistance in a manner similar to many other online chat services they encounter in daily life. At the reference desk, it is fairly common for students to preface a question with an apology such as, “I’m sorry to bother you,” or “I should already know the answer to this, but…” Interestingly, in the chat reference environment at the University Libraries, apologies are almost never offered—students simply ask their questions.

It appears, however, that the proactive nature of the chat service studied here may be as important as the convenience factor in getting users to ask reference questions. In October 2013, 58 percent of chats were initiated by a context-sensitive message that invited the user to chat based on the specific web page they were viewing. Because of the proactive configuration of the chat system, it is likely that the user had been offered help multiple times during the visit and that this constant invitation to chat reminded the user that help was available throughout the research process.
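The paper treats the chat product as a black box, but the behavior described above (a context-sensitive invitation issued after a period of inactivity on a particular page) can be sketched as a small table of trigger rules. The sketch below is an illustrative assumption, not the vendor’s implementation: only the 30-second threshold on the “Find Databases” page and the invitation wording are taken from the transcript example that follows; the rule structure, the second rule, and the function are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TriggerRule:
    """A proactive-chat rule: invite after idle_seconds of inactivity on a page."""
    page_pattern: str   # substring of the URL/path the rule applies to
    idle_seconds: int   # inactivity threshold before inviting
    message: str        # preconfigured, context-sensitive invitation

# Hypothetical rule set; only the first rule's threshold and wording
# come from the example reported in the paper.
RULES = [
    TriggerRule("/find-databases", 30,
                "Hi there. If you need help finding a database, let us know!"),
    TriggerRule("/search-results", 45,
                "Not finding what you need? A librarian can help."),
]

def invitation_for(path: str, idle_seconds: int) -> str | None:
    """Return the invitation to display, if any rule matches the visitor's state."""
    for rule in RULES:
        if rule.page_pattern in path and idle_seconds >= rule.idle_seconds:
            return rule.message
    return None

# A visitor idle for 30+ seconds on the databases page gets the proactive prompt:
print(invitation_for("/find-databases", 31))
```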
The availability of chat transcripts allows us to gain deeper insight into questions that were asked based on proactive prompts from the system. For example, the transcript below was triggered when a researcher remained on the “Find Databases” page without taking any action for more than 30 seconds. The transcript begins with a proactive invitation to chat.

Librarian: Hi there. If you need help finding a database, let us know! (preconfigured prompt)
Visitor: Yes, I am trying to find articles about types of fear or theories of fear but [Database name] is not really giving me what I need. Are there other databases that are more specific?
Librarian: Do you mean phobias?
Visitor: Kind of I believe, I’m doing an Ad analysis and the principle of the ad is fear, so I am supposed to find articles on what is considered fear, how it [affects] people’s thoughts, things along those lines
Librarian: Sounds like you are looking for basic background info on fear…does that sound right?
Visitor: Yes, correct!
Librarian: [makes a database recommendation with a link]
Librarian: Search fear. There will be general articles on fear in general and specific articles on specific types of fear.
Visitor: Thank you so much for your help.

Although database advice is given, there is also a supportive discussion related to the student’s assignment.

In addition to providing a wealth of information for observing and understanding the proactive nature of the service, the availability of chat transcripts has given researchers the opportunity to gain a deeper understanding of these reference transactions. Oakleaf provides an overview of a number of studies in which transcripts show that most (more than 80%) of questions received at the reference desk involve at least some element of instruction.40 Duncan and Gerrard found that more than 45 percent of chat reference questions were related to research.41 These findings reinforce studies of the research process conducted by Kuhlthau, which reveal that “students need considerable guidance and intervention throughout the research process to construct a personal understanding. Without guidance, they tend to approach the research process as a simple collecting and presenting assignment that leads to copying and pasting with little real learning.”42 Taken together, the findings demonstrate that the reference transaction can result in the librarian providing guidance and advice, involving the librarian in a learner-centered approach to the research process that focuses on the active participation of the learner and experiential learning rather than on activities involving rote memorization.

In reviewing the transcripts from the proactive chat system, we also noted that many of the questions were related to topics or some form of topic exploration, so we conducted an additional exploratory analysis of the “Level 3” and “Level 4” questions for one week in November 2013. We found that nearly 50 percent (185 of 372) of the questions were related to topic exploration.
This suggests that point-of-need support for the research process, like the more formal aspects of information literacy instruction, “demands greater sense-making and metacognition from the student” and less support for tool-based instruction.43 Without the barriers to access that existed in the past, questions are becoming more concept-based and complex.

Summary and Implications for Practice

After years of decline in both the number and complexity of questions, it appears that proactive online chat systems can provide an opportunity for libraries to reverse the trend. A review of published data shows that, perhaps because of convenience, users are willing to ask more complex reference questions through online services. However, for most of the studies reviewed, the volume of questions received through online chat remained low—so low, in fact, that some libraries could not justify staffing the service with librarians.

Implementing the proactive chat reference service reported here dramatically increased the number of reference questions received. By analyzing chat transcripts and conducting a study of question complexity, we learned that the questions received through our proactive chat system were more complex than the questions we had been receiving at our service desks. We also discovered that the frequency of complex questions—often more than eight an hour—was too high for nonprofessional staff to efficiently refer to librarians. The results also supported the previous findings of others that many reference questions, even simple reference questions, provide an opportunity for broad support for the research process including topic development.

These findings caused us to reevaluate our tiered service model. Although we could train nonprofessional staff to answer questions related to library use, support for the complex chat questions required a broad understanding of the research process, the breadth and depth of the thousands of information resources available, as well as topic development and refinement. This breadth of knowledge is beyond what would normally be expected of a person working in a nonprofessional position; in fact, seeing librarians’ competencies demonstrated through months of chat transcripts illustrates what we have always known about the work: it is a profession, not a job.

The opportunity to once again provide individual reference service at the point of need reopens the longstanding debate about how to appropriately staff the service. As content becomes increasingly available online, it is more important than ever to provide students with a strong foundation of information literacy concepts. Curricular integration and classroom instruction provide students with information before they embark on a research assignment. Because content is more readily accessible, information literacy instruction has shifted to broader concepts of information evaluation and use. Having a librarian available to provide guidance and advice at a teachable moment, reinforcing and tailoring the research concepts for the individual learner, may well be a component that has been missing for many students as they develop critical thinking skills related to the use of information. However, this level of individual service is expensive and will need to be weighed against competing priorities. Offering a convenient, proactive chat service has demonstrated that users still do have questions about the broader aspects of research.
The increases in the number and complexity of chat questions seen at this library appear to challenge assumptions that the decline in reference questions is a natural and reasonable result of improved discovery tools and more effective online access. Perhaps the answer to the question, “What has happened to the more complex reference questions, the questions requiring broader research expertise?” is that these questions have been there all along; however, in the online search environment, reference service must be proactive, convenient, and expert to meet user expectations and research needs. Even if a proactive chat service is implemented, the demand for online reference support for complex questions could remain hidden for years if libraries do not staff the service appropriately. After spending more than a decade moving librarians away from the reference desk and more recently away from chat reference, new evidence about user preferences and students’ increasing use of chat reference to support learning may encourage academic libraries to reconsider the reference staffing model.

Notes

1. Association of College and Research Libraries, “ACRL 2012 Academic Library Trends and Statistics. Doctoral-Granting Volume,” 2013; for example: Marianne Stowell Bracke, Michael Brewer, Robyn Huff-Eibl, Daniel R. Lee, Robert Mitchell, and Michael Ray, “Finding Information in a New Landscape: Developing New Service and Staffing Models for Mediated Information Services,” College & Research Libraries 68, no. 3 (2007): 248–67; Bella Karr Gerlich and G. Lynn Berard, “Testing the Viability of the READ Scale (Reference Effort Assessment Data)©: Qualitative Statistics for Academic Reference Services,” College & Research Libraries 71, no. 2 (2010): 116–37; Deborah B. Henry and Tina M. Neville, “Testing Classification Systems for Reference Questions,” Reference & User Services Quarterly 47, no. 4 (2008): 364–73; Susan M. Ryan, “Reference Transactions Analysis: The Cost-Effectiveness of Staffing a Traditional Academic Reference Desk,” Journal of Academic Librarianship 34, no. 5 (2008): 389–99; Debra G. Warner, “A New Classification for Reference Statistics,” Reference & User Services Quarterly 41, no. 1 (2001): 51–55.
2. For example: Keith Ewing and Robert Hauptman, “Is Traditional Reference Service Obsolete?” Journal of Academic Librarianship 21, no. 1 (1995): 3–6; David W. Lewis, “Traditional Reference Is Dead, Now Let’s Move on to Important Questions,” Journal of Academic Librarianship 21, no. 1 (1995): 10–13; Charles R. Martell, “The Ubiquitous User: A Reexamination of Carlson’s Deserted Library,” portal: Libraries and the Academy 5, no. 4 (2005): 441–53.
3. For example: Dan Berrett, “Freshman Composition Is Not Teaching Key Skills in Analysis, Researchers Argue,” Chronicle of Higher Education (Mar. 21, 2012), available online at http://chronicle.com/article/Freshman-Composition-Is-Not/131278/ [accessed 7 September 2014].
4. William A. Katz, Introduction to Reference Work (Boston: McGraw-Hill, 2002).
5. Warner, “A New Classification for Reference Statistics,” 51–55.
6. Susan M. Ryan, “Reference Transactions Analysis: The Cost-Effectiveness of Staffing a Traditional Academic Reference Desk,” Journal of Academic Librarianship 34, no. 5 (2008): 389–99.
7. Katz, Introduction to Reference Work, 21.
8. For example: Deborah B. Henry and Tina M. Neville, “Testing Classification Systems for Reference Questions,” Reference & User Services Quarterly 47, no. 4 (2008): 364–73.
9. Bracke, Brewer, Huff-Eibl, Lee, Mitchell, and Ray, “Finding Information in a New Landscape,” 248–67.
10. The research by Bracke, Brewer, Huff-Eibl, Lee, Mitchell, and Ray, “Finding Information in a New Landscape,” is referenced throughout the discussion related to the rubric; however, their classification scheme is not included in the rubric, and their results are not included in figures related to the rubric, because they did not report their results based on the classification scheme. Instead, they reported results based on an expert’s judgment of the level of staff that could best answer the question.
11. Reference and User Services Association, Definitions of Reference, 2008, available online at www.ala.org/rusa/resources/guidelines/definitionsreference [accessed 7 September 2014].
12. Bracke, Brewer, Huff-Eibl, Lee, Mitchell, and Ray, “Finding Information in a New Landscape,” 248–67; Patricia Bravender, Colleen Lyon, and Anthony Molaro, “Should Chat Reference Be Staffed by Librarians? An Assessment of Chat Reference at an Academic Library Using Libstats,” Internet Reference Services Quarterly 16, no. 3 (2011): 111–27.
13. David Ward and Eric Phetteplace, “Staffing by Design: A Methodology for Staffing Reference,” Public Services Quarterly 8, no. 3 (2012): 193–207.
14. Gerlich and Berard, “Testing the Viability of the READ Scale,” 116–37.
15. Katz, Introduction to Reference Work, 21.
16. Egill A. Halldorsson, “The Performance of Professionals and Nonprofessionals in the Reference Interview,” College and Research Libraries 38, no. 5 (1977): 385–95.
17. Ragnar Nordlie, “‘User Revealment’: A Comparison of Initial Queries and Ensuing Question Development in Online Searching and in Human Reference Interactions,” Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 11–18, doi:10.1145/312624.312618.
18. For example: Marjorie E. Murfin and Charles A. Bunge, “Paraprofessionals at the Reference Desk,” Journal of Academic Librarianship 14, no. 1 (1988): 10–14; Halldorsson, “The Performance of Professionals and Nonprofessionals in the Reference Interview,” 385–95.
19. James K. Elmborg, “Teaching at the Desk: Toward a Reference Pedagogy,” portal: Libraries and the Academy 2, no. 3 (2002): 455–64, doi:10.1353/pla.2002.0050.
20. Megan Oakleaf and Amy VanScoy, “Instructional Strategies for Digital Reference: Methods to Facilitate Student Learning,” Reference & User Services Quarterly 49, no. 4 (2009): 380–90, doi:10.2307/20865299.
21. Carol C. Kuhlthau, Seeking Meaning: A Process Approach to Library and Information Services (Libraries Unlimited, 2004), 129.
22. Murfin and Bunge, “Paraprofessionals at the Reference Desk,” 10–14; Halldorsson, “The Performance of Professionals and Nonprofessionals in the Reference Interview,” 385–95.
23. Halldorsson, “The Performance of Professionals and Nonprofessionals in the Reference Interview,” 393.
24. Ian Douglas, “Reducing Failures in Reference Service,” RQ 28, no. 1 (1988): 94–101; Christopher W. Nolan, “Closing the Reference Interview: Implications for Policy and Practice,” RQ 31, no. 4 (1992): 513–23; David Ward, “Measuring the Completeness of Reference Transactions in Online Chats: Results of an Unobtrusive Study,” Reference & User Services Quarterly 44, no. 1 (2004): 46–56, doi:10.2307/20864287.
25. Bella Karr Gerlich, “The READ Scale,” available online at http://readscale.org/read-scale.html [accessed 7 September 2014].
26. Gerlich and Berard, “Testing the Viability of the READ Scale,” 116–37; Ward and Phetteplace, “Staffing by Design,” 193–207.
27. Jeffrey W. Saint Clair, Rao Aluri, and Maureen Pastine, “Staffing the Reference Desk: Professionals or Nonprofessionals,” Journal of Academic Librarianship 3, no. 3 (1977): 149–53.
28. William L. Whitson, “Differentiated Service: A New Reference Model,” Journal of Academic Librarianship 21, no. 2 (1995): 103–11.
29. Virginia Massey-Burzio, “Rethinking the Reference Desk,” in Rethinking Reference in Academic Libraries, ed. Anne Grodzins Lipow (Berkeley, Calif.: Library Solutions Press, 1992), 6.
30. Debbi Dinkins and Susan M. Ryan, “Measuring Referrals: The Use of Paraprofessionals at the Reference Desk,” Journal of Academic Librarianship 36, no. 4 (2010): 279–86.
31. For example: Bracke, Brewer, Huff-Eibl, Lee, Mitchell, and Ray, “Finding Information in a New Landscape,” 248–67; Ryan, “Reference Transactions Analysis,” 389–99; Patricia Bravender, Colleen Lyon, and Anthony Molaro, “Should Chat Reference Be Staffed by Librarians? An Assessment of Chat Reference at an Academic Library Using Libstats,” Internet Reference Services Quarterly 16, no. 3 (2011): 111–27.
32. Bravender, Lyon, and Molaro, “Should Chat Reference Be Staffed by Librarians?” 111–27; Gerlich and Berard, “Testing the Viability of the READ Scale,” 116–37; Ward and Phetteplace, “Staffing by Design,” 193–207.
33. Ward and Phetteplace, “Staffing by Design,” 193–207.
34. Bravender, Lyon, and Molaro, “Should Chat Reference Be Staffed by Librarians?” 111–27.
35. Marie L. Radford and M. Kathleen Kern, “A Multiple-Case Study Investigation of the Discontinuation of Nine Chat Reference Services,” Library & Information Science Research 28 (2006): 527.
36. Ward and Phetteplace, “Staffing by Design,” 193–207.
37. Because of the differences in sample sizes among the studies, we used weighted averages to report and analyze findings.
38. Lynn S. Connaway, Timothy J. Dickey, and Marie L. Radford, “‘If It Is Too Inconvenient I’m Not Going after It’: Convenience as a Critical Factor in Information-Seeking Behaviors,” Library & Information Science Research 33, no. 3 (2011): 187–88.
39. Ibid.
40. Oakleaf and VanScoy, “Instructional Strategies for Digital Reference,” 381; Lesley M. Moyo, “Virtual Reference Services and Instruction: An Assessment,” Reference Librarian 46, no. 95/96 (2006): 213–30.
41. Vicky Duncan and Angie Gerrard, “All Together Now! Integrating Virtual Reference in the Academic Library,” Reference & User Services Quarterly 50, no. 3 (2011): 280–92.
42. Carol C. Kuhlthau, “Rethinking the 2000 ACRL Standards: Some Things to Consider,” Communications in Information Literacy 7, no. 2 (2013): 95.
43. Kara Malenfant, “ACRL Seeks Feedback on Revised Framework for Information Literacy for Higher Education,” available online at www.acrl.ala.org/acrlinsider/archives/8911 [accessed 7 September 2014].