Assessment of reference instruction as a teaching and learning activity

An experiment at the University of Illinois-Springfield

by Denise D. Green and Janis K. Peach

About the authors: Denise D. Green is coordinator of reference and Janis K. Peach is collection development coordinator in Brookens Library at the University of Illinois-Springfield; e-mail: green.denise@uis.edu and peach.jan@uis.edu

Reference service happens when a librarian interacts with a library user to answer questions and assist in the research process. It can take place face-to-face at a traditional library reference desk or, increasingly, through e-mail, live-chat or virtual software programs, and by telephone. Reference has been a key academic library service for over 100 years, with librarians instructing students in the use of card catalogs, indexes, and now databases. The context of reference transactions usually differs from classroom library or bibliographic instruction (BI). In BI there is a planned instructional workshop or guest lecture with a specific time and often a specific outcome, usually an assignment or term paper. Reference service is more random, with the user choosing when to approach a librarian who usually has not had time to prepare specific information on the student's course or research assignment. Reference transactions can be one-time or ongoing interactions, with patrons returning for help as their project develops.

The literature on reference evaluation is extensive. Several review essays are available, including Charles Bunge's 1994 essay "Evaluating Reference Services and Reference Personnel: Questions and Answers from the Literature,"1 as well as other views by Lisa Smith2 and Jerry Campbell.3 The traditional technique to measure and evaluate reference service was a simple tally of questions, sorted by length of question, time of day, and day of the week. Also notable among published reference evaluation case studies are user interviews,4 focus groups,5 and a mix of survey and observational techniques.6

In 1999, the authors, academic librarians at the University of Illinois-Springfield (UIS), experimented with evaluating reference service as a teaching and learning activity. To do this, we designed an assessment instrument that attempted to measure patrons' attitudes about learning from reference interaction and applied it in a medium-sized academic library.

The three types of reference evaluation literature

Most of the reference evaluation literature falls into three categories. The first category is the unobtrusive approach or "55 percent school," best known from the studies of Peter Hernon and Charles McClure.7, 8 Typically, persons posing as library patrons ask a series of predetermined factual questions either in person or by telephone. The reference service is evaluated on the accuracy of the responses to these questions, which averaged only 55 percent correct. The unobtrusive reference evaluation process assumes a model of reference work as answering discrete inquiries with right and wrong solutions. This approach has certain advantages, since providing accurate answers is one goal of high-quality reference service, but critics of Hernon and McClure have pointed out that library patrons often do not ask discrete inquiries with right and wrong answers. Also, as in any communication activity, how the answer is conveyed can be as important to the library patron as what information is delivered during the reference transaction.9, 10, 11
The second category of reference evaluation literature focuses on the interpersonal communication process. This category is most widely known through the studies of Joan Durrance and her colleagues.12, 13 Durrance was reacting in part to the "55 percent school" focus on accuracy regardless of departmental or interpersonal variables by focusing on the willingness of the patron to return to the same staff member in the future, implying a more complex model of reference than Hernon and McClure's studies. Critics of the interpersonal communication model argue that wrong answers, however charmingly delivered, are still wrong answers and not high-quality reference.

The third category of literature on the evaluation of reference is based on the Wisconsin-Ohio Reference Evaluation Program developed by Charles Bunge and Marjorie Murfin. This widely used instrument assesses user satisfaction and the conditions of the reference transaction.14, 15 The Wisconsin-Ohio Reference Evaluation Program uses a two-part evaluation form for each reference transaction, with part one answered by the patron and part two by the librarian, to allow for such variables as the number of resources used by the patron, how busy the library was, the training of the librarian, and the subject area of the inquiry. It is the only assessment instrument available today that is externally validated, allowing a reference department's performance to be compared to data aggregated from over 100 other libraries. The Wisconsin-Ohio Reference Evaluation Program was designed primarily for the evaluation of an entire department rather than an individual librarian's performance, with an implied model of reference service as a complex activity that results in user satisfaction or dissatisfaction.

While each of these methods may be useful for evaluating reference performance in terms of accuracy and patron satisfaction, none offers evidence of the teaching activity of librarians. Since librarians are on the front lines of teaching, providing information literacy and research skills to students seeking help at the reference desk, the authors set out to test an instrument to measure this area of performance.

Evaluation at UIS

At UIS, reference (or instructional services) librarians have full faculty status and are evaluated for tenure, reappointment, and promotion by the same criteria as classroom faculty. One evaluation requirement is proof of "teaching excellence" through student evaluations, letters, and other documentation. In 1995, we used the Wisconsin-Ohio program to evaluate the department; while it was gratifying to learn that we rated 73.5 percent versus the national average of 69 percent, this data did not significantly help individual library faculty in their personnel process. In 1997, the reference department again used the Wisconsin-Ohio Reference Evaluation, tallying the questions separately for each librarian. While the results were somewhat useful, they still did not evaluate individual teaching. When teaching credit courses, library faculty use the campus-wide evaluation instrument to document and assess our teaching.
We also developed an evaluation instrument for BI sessions. However, the absence of an assessment of our main teaching activity was a significant problem for library faculty in the reappointment, tenure, and promotion process. Information desk or reference teaching is 20 percent or more of a typical UIS instructional services librarian's time and remains an under-documented teaching activity for us and, indeed, for most academic librarians.

The authors were therefore very motivated to document reference as a teaching and learning activity. We based our survey on our literature review and experience with the Wisconsin-Ohio forms; on advice from the campus Personnel Policies Committee, we also added questions similar to those on the classroom teaching evaluation form, using Likert-scale responses. The form was given only to library patrons who had fairly complex reference questions. To ensure confidentiality, respondents placed the completed forms in a locked wooden box, and a support staff member in the reference department tabulated the results.

The survey was administered by the authors in the Brookens Library Reference Department at UIS as a pilot project in February and March 2000 and again on a larger scale from September to the end of November 2000. Founded in 1970 as Sangamon State University, UIS is an upper-division undergraduate and graduate university with an enrollment of approximately 3,800 students and 160 faculty in 2000. The average age of our student body is 33, and 65 percent are part-time students. Only about 10 percent of the UIS student body lives on campus. The UIS curriculum offers a variety of undergraduate majors, graduate programs, and certificates, with an emphasis on public affairs and degree completion programs.

The results

The summary of data from both the pilot and fall surveys shows a high degree of satisfaction with the reference teaching process among students and other library users. Our respondents were 50 percent undergraduate and 30 percent graduate UIS students, with the balance made up of high school, community college, and other university students, plus members of the community. The percentages from questions designed to rate actual teaching were very high: 92 percent agreed that they learned something new about how to do research, 95 percent agreed that they learned more about using library resources, 85 percent agreed that their research skills increased, and 87 percent rated the librarian's quality as a teacher as high.

Several of the questions were designed to rate the librarians' communication skills and knowledge, and the "comfort level" of the patron: 93 percent agreed that the librarian had the knowledge and communication skills to teach research, 89 percent agreed that the instruction would help them succeed with their research and writing, 91 percent agreed that they felt more comfortable using the library after the encounter, and 98 percent would definitely ask the librarian for help again.

The authors feel that these results show a promising method of evaluating individual teaching at the reference desk; however, the small amount of data collected at UIS is not a substantial test of this instrument. The authors offer other academic librarians the chance to use our instrument and would appreciate feedback from any libraries collecting data. We welcome its use at a variety of academic libraries, especially since the size and curriculum of UIS are unique.
The dilemma academic library faculty face in describing what we do as a teaching and learning activity is widespread. Certainly there is a great need for this type of assessment of academic reference and for the recognition of reference as a valuable teaching activity of library faculty. More research is needed to assess and document the teaching of research skills as a component of reference, to show it as a legitimate educational activity.

Notes

1. Charles A. Bunge, "Evaluating Reference Services and Reference Personnel: Questions and Answers from the Literature," The Reference Librarian no. 43 (1994): 195-207.

2. Lisa L. Smith, "Evaluating the Reference Interview: A Theoretical Discussion of the Desirability and Achievability of Evaluation," RQ 31 (1991): 75-81.

3. Jerry D. Campbell, "Clinging to Traditional Reference Services," Reference and User Services Quarterly 39 (2000): 223-27.

4. Jennifer Mendelsohn, "Perspectives on Quality of Reference Service in an Academic Library: A Qualitative Study," RQ 36 (1997): 544-57.

5. Virginia Massey-Burzio, "From the Other Side of the Reference Desk: A Focus Group Study," Journal of Academic Librarianship 24, no. 3 (1998): 208-15.

6. Elaina Norlin, "Reference Evaluation: A Three-Step Approach—Surveys, Unobtrusive Observations, and Focus Groups," College & Research Libraries 61 (2000): 546-53.

7. Peter Hernon and Charles R. McClure, "Unobtrusive Reference Testing: The 55 Percent Rule," Library Journal 111 (April 15, 1986): 37-41.

8. Peter Hernon and Charles R. McClure, "Library Reference Service: An Unrecognized Crisis—A Symposium," Journal of Academic Librarianship 13 (1987): 69-80.

9. Lori Goetsch, "Reference Service Is More Than a Desk," Journal of Academic Librarianship 21 (1995): 15-16.

10. Carolyn W. Jardine, "Maybe the 55 Percent Rule Doesn't Tell the Whole Story: A User-Satisfaction Survey," College & Research Libraries 56 (1995): 477-85.

11. Loriene Roy, "Reference Accuracy," The Reference Librarian no. 49/50 (1995): 217-27.

12. Joan C. Durrance, "Reference Success: Does the 55 Percent Rule Tell the Whole Story?" Library Journal 114 (April 15, 1989): 31-36.

13. Joan C. Durrance, "Factors that Influence Reference Success: What Makes Questioners Willing to Return?" The Reference Librarian no. 49/50 (1995): 243-65.

14. Marjorie E. Murfin, "Evaluation of Reference Service by User Report of Success," The Reference Librarian no. 49/50 (1995): 229-41.

15. John C. Stalker and Marjorie E. Murfin, "Quality Reference Service: A Preliminary Case Study," Journal of Academic Librarianship 22 (1996): 423-29.