College and Research Libraries

LARRY HARDESTY, NICHOLAS P. LOVRICH, JR., AND JAMES MANNON

Evaluating Library-Use Instruction

Although library-use instruction programs have become popular during the 1970s, they are often not given the same type of support by library and college or university administrators as the more traditional library services. The paper contends that appropriate evaluation is an important element in gaining this support and describes the development and results of a systematic assessment of library-use instruction at DePauw University.

Larry Hardesty is head of the reference department, Roy O. West Library, DePauw University, Greencastle, Indiana. Nicholas P. Lovrich, Jr., is assistant professor of political science, Washington State University, Pullman. James Mannon is assistant professor of sociology, DePauw University. The authors acknowledge the support of the Council on Library Resources through its Library Service Enhancement Program grant to DePauw University during 1976-77, a grant that made possible the library-use instruction program and assessment described here. The authors also acknowledge the assistance of Kathleen Owens, reference librarian at DePauw University in 1976-77 under the terms of the grant, for her role in the development of the program.

All social institutions or subsystems, whether medical, educational, religious, economic, or political, are required to provide "proof" of their legitimacy and effectiveness in order to justify society's continued support. 1 During the 1970s library-use instruction has enjoyed a renewed popularity. 2 Each passing year has brought reports from more and more academic libraries that they too have established library-use instruction programs. A survey of the number of conferences on this topic and the articles published in the library literature confirms that library-use instruction has been revived.

Despite these developments, few academic libraries accord library-use instruction programs the same degree of importance as the more traditional services of the library. Often library-use instruction programs are an "extra" service that a few librarians, or even a single librarian working at the grass-roots level, have been willing to provide in addition to their other responsibilities. As a result, there is a history of library-use instruction programs floundering as the librarians responsible for them have changed positions or simply lost their initial enthusiasm when the work load became too great a burden.

A common concern among librarians promoting library-use instruction programs is how to gain the same kind of continuing support from the library and college or university administration as that received by traditional library services such as cataloging, circulation, acquisitions, and reference. Certainly there are many ways of seeking this support, but a most important method, one that should be part of any library-use instruction program, is systematic evaluation.

In the field of education, evaluation is customarily divided into two types: formative evaluation and summative evaluation. Familiarity with the rationale and techniques associated with both types of evaluation is important for librarians developing library-use instruction programs. This article deals with both aspects of evaluation but focuses primarily upon summative evaluation.
Formative evaluation is concerned with the development of a program and is useful in making methods of instruction more effective. Summative evaluation deals with efforts to assess the overall effectiveness of a program and to gain additional support for the program. 3 This does not mean the same test or questionnaire cannot be used for both types of evaluation. However, most formative evaluation is intended to provide short-term feedback, and the program developer often does not have time for a highly sophisticated statistical analysis or for other complex procedures to be carried out. Formative evaluations often are simplified, more error-prone versions of tests or questionnaires used for later summative evaluations. 4 Also, a good summative evaluation will have formative implications.

Summative evaluation reports are usually directed toward those individuals who set policy at various levels, such as library and college or university administrators. It is these individuals who decide whether to continue funding a program or whether to increase or decrease support available for a program. It is this type of evaluation that librarians will find most useful in gaining further support for their library-use instruction programs, and this article provides an example of the systematic summative evaluation of a library-use instruction program at one academic library.

In May 1976 the Council on Library Resources awarded DePauw University a one-year grant as part of the Council's Library Service Enhancement Program. The Council on Library Resources established this grant program to "stimulate additional activities intended to result in the more imaginative, effective involvement of the academic library in the teaching/learning program." 5 Using the information available in the literature provided by such leaders in library-use instruction as Knapp, Farber, Wiggins, Hackman, and others, 6-10 one of the authors initiated the development of a library-use instruction program at DePauw University beginning in the fall of 1976.

The program will be described only briefly here since it has been reported in more detail elsewhere. 11 The librarian planning the program placed considerable emphasis on a common instructional experience for freshman students through which they would develop a basic level of library skills and a positive attitude toward the academic library. He intended that the freshman library-use instruction program would serve as the foundation for more advanced library-use instruction later in the academic careers of students.

It is not yet possible to conduct a summative evaluation of the role of the freshman library-use instruction program as the foundation for more advanced library-use instruction. However, a summative evaluation of the freshman part of the program in terms of the development of basic library skills and positive attitudes toward the library has been completed. The efforts made to evaluate this part of the library-use instruction program at DePauw University are reported in this article. Included is information on how the evaluators considered each of the major areas of summative evaluation, such as the objectives of the program, test selection, design of the evaluation, and statistical analysis.

THE DEPAUW LIBRARY-USE INSTRUCTION PROGRAM

The freshman library-use instruction program consisted of the following elements.
First, during the beginning week of each semester a librarian presented a brief slide lecture to each of the freshman English and basic communication classes. This presentation introduced the students to the personnel, collections, and services of the library. At the end of the presentation, the librarian made the students aware of a self-guided tour pamphlet that could be obtained near the main entrance of the library so students could tour the library at their own convenience. The purpose of this presentation was to give students very early in their academic career the impression that the library contained a wealth of resources and a variety of services, and that there were people in the library to help them make use of these services and resources. In short, the presentation was not intended to promote skill development, but rather concentrated on fostering a positive attitude on the part of the students toward the academic library.

The second element of the program occurred later in the semester and concentrated on skill development. For instructional purposes, two forty-five-minute slide presentations, accompanied by instructional booklets and worksheets, were given to each of the freshmen in English and basic communication classes. These presentations provided the student with instructions on how to make introductory use of each of the major collections of the library and how to develop a basic search strategy to obtain information needed for compositions that were part of the usual course requirements.

This second element of the program had rather modest objectives. The instruction emphasized actual library use and finding information in each of the major collections for use in the writing of compositions. While a variety of topics were discussed, the librarians making the presentations spent relatively little time on details such as the constituent parts of the catalog card or elements of Readers' Guide citations. Instead, the librarians emphasized the type of information that could be obtained from each of the major library collections and the development of a search strategy.

From the conception of the program a concern existed for both formative and summative evaluation. Formative evaluation was needed to provide information for the continued improvement of the program. The completed worksheets, interviews with instructors, and questionnaires completed by the students provided information for this type of evaluation. At that point the need for immediate feedback for modification of the program did not lend itself to the development of the rigorous tests and questionnaires more appropriate for summative evaluation.

However, since only a one-year grant supported the program, the developer recognized that summative evaluation would be very important in gaining the necessary support from the library and university administration to continue the program. It would take more than the results from hastily made questionnaires or impressions gained from interviewing instructors to convince the administrators. A major commitment of time and resources therefore was devoted to the development of summative evaluation procedures.

The librarians involved in the program considered several methods of evaluation, including the more traditional measures of library services such as reference and circulation statistics. They decided that, in addition to these methods of evaluation, a paper-and-pencil test could provide useful information for summative evaluation purposes concerning the skills and attitudes the students developed as a result of the program. 12
A survey of library literature revealed very little helpful information on available tests. As noted by Bloomfield, many of the more popular published tests place considerable emphasis on such details as parts of the catalog card and elements of Readers' Guide citations. 13 These tests did not appear to match very closely the goals and objectives of the program being developed at DePauw University. Carolyn Kirkendall, director of the Project LOEX Clearinghouse at Eastern Michigan University, provided a number of locally produced library-use tests. The quality and objectives of these tests varied greatly, and their usefulness was questioned since there was no information available concerning their development. (This concern proved to be well founded considering the number of seemingly reasonable questions that later proved unreliable.) Finally, in an effort to gain the benefit of both their expertise and their objectivity, two professors from DePauw University, one in political science and one in sociology, were employed in the efforts to develop a useful library-use test.

CREATING A RELIABLE EVALUATION INSTRUMENT

An understanding of how the authors fashioned a reliable and valid systematic evaluation design can be acquired by reviewing the methodology in a step-by-step manner. The framework for the evaluation consisted of four basic parts.

The first part consisted of considering the test in terms of validity and reliability, the two major criteria in education for defining the quality of an evaluation design. Validity concerns the question of whether the test measures what it purports to measure. Validity in the measurement of test items can be determined by a variety of methods, many of which can be quite complex and time-consuming. In this case validity of measurement was established through the criterion of face validity. 14 Reliability concerns the consistency of measurement observed over repeated assessments. An instrument may be unreliable because the characteristics being measured are unstable, or because the procedures change from one application to another. This criterion was particularly important in this study, and the method used to develop a reliable test is discussed in more detail later in this article.

After a test of the items of measurement for reliability and validity, the second step involved extracting and combining the most valid and reliable items into a library-use questionnaire, administered to samples of freshman English students before library-use instruction began.

Third, the instruction program was administered in some of the pretested classes, with others left to serve as "control" classes in which no instructional intervention occurred.

Fourth, it was then possible to compare pre- and posttest results on library-use attitudes and skills for the test and control classes to evaluate the effects of the instructional program.

Such quasi-experimental designs require that instruments used to measure the effects of instruction be reliable. 15 Since the critical issue for our study was that of assessing the effects of instruction, it was absolutely necessary to make certain that observed changes were not the result of undue variability of the testing instrument.
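To make the four-part design concrete, the following is a minimal Python sketch of the data layout it implies. Everything in it (the ClassSection structure, the section names, the helper function) is an illustrative assumption; the original study was carried out with paper instruments, not software.

```python
# Illustrative sketch only: invented names and structures, not the authors' procedure.
from dataclasses import dataclass, field


@dataclass
class ClassSection:
    name: str
    instructed: bool                              # False = control class, no intervention
    pretest: dict = field(default_factory=dict)   # student id -> skills score
    posttest: dict = field(default_factory=dict)  # student id -> skills score


def paired_changes(section):
    """Post-minus-pre change for each student who sat both administrations."""
    return [section.posttest[s] - section.pretest[s]
            for s in section.pretest if s in section.posttest]


# The design in outline: every section is pretested and posttested with the
# same instrument, but only some sections receive the instruction in between.
sections = [
    ClassSection("English composition, section A", instructed=True),
    ClassSection("English composition, section B", instructed=False),
]
```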
One of the most positive features of this study is the pretesting of the evaluation instrument for reliability. As previously mentioned, few library-use competence tests found in the literature have adequately addressed this problem. The test-retest method was used to determine the reliability of items on the library-use attitudes and skills test. 16 The original draft of the evaluation test consisted of ten attitudinal items concerning student library utilization and twenty-six items to test library-use competence. This test was administered to 102 freshman students at DePauw University enrolled in a variety of introductory freshman-level courses the semester prior to the beginning of library-use instruction on a universitywide basis. Three weeks later in the semester the identical test was administered to the same students, and their test-retest responses were carefully compared.

The attitudinal items were assessed for reliability using item analysis and Pearson correlation coefficients. Items that demonstrated either a positive or negative trend from test to retest, and items with a correlation coefficient of less than .70, were considered unreliable and hence dropped from the test. 17 Items that are consistent both at the aggregate level (i.e., they do not generate either higher or lower mean responses over time) and at the individual level (i.e., the same person tends to answer in a consistent manner over time) are necessary to conduct a valid evaluation of effects.

Library-use skills items were considered reliable if more than 50 percent and fewer than 90 percent of the students answered them correctly. (The figures 50 percent and 90 percent were selected based on the authors' judgments that these were reasonable a priori cutoff points for overly easy and overly difficult items, respectively.)

On the basis of this process of elimination of inconsistent attitudinal items and overly difficult and overly easy skill items, six attitudinal and twenty skills items of strong reliability were selected from the draft evaluation instrument (see the appendix for the items used).
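The screening rules just described translate directly into code. The sketch below is a hedged illustration: the array names are invented, and the tolerance used to operationalize "a positive or negative trend" between administrations is an assumption, since the article does not state a numeric cutoff for aggregate drift.

```python
# Hedged sketch of the item-screening rules; names and the drift tolerance are assumptions.
import numpy as np
from scipy import stats


def keep_attitude_item(test, retest, r_cutoff=0.70, drift_tol=0.25):
    """test, retest: one item's 1-5 Likert responses from the same students, three weeks apart."""
    test = np.asarray(test, dtype=float)
    retest = np.asarray(retest, dtype=float)
    r, _ = stats.pearsonr(test, retest)        # individual-level consistency
    drift = abs(retest.mean() - test.mean())   # aggregate-level trend between administrations
    return r >= r_cutoff and drift <= drift_tol


def keep_skills_item(correct_flags):
    """correct_flags: 1/0 per student for one skills item on the first administration."""
    p_correct = float(np.mean(correct_flags))
    return 0.50 < p_correct < 0.90             # drop overly easy and overly difficult items
```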
The pretest instrument was then administered to 162 freshman students enrolled in several freshman English composition courses. There were 133 freshman test subjects; during the semester these students were given library-use instruction by reference librarians. Twenty-nine students (two separate classes taught by the same instructor) were treated as a control group, which received no library instruction. At the time of the experiment, the students in the control group did not receive any library instruction that either the instructor or the authors were aware of. Eight weeks later in the semester all 162 students were again given the original test, allowing the comparison of the pretest and posttest scores of the experimental and control subjects.

THE EFFECTS OF INSTRUCTION UPON SKILLS ACQUISITION

Table 1 reports the overall findings derived from the evaluation study with respect to skills acquisition.

TABLE 1
COMPARISON OF TEST/RETEST SCORES FOR CONTROL AND TEST CLASSES

Control Classes (N = 29)
  Mean score*:                            Pretest 12.4   Posttest 12.2
  Standard error of the mean difference:  .48
  Difference of means:                    .2
  Statistical summary†:                   t = .42, df = 28, not significant

Test Classes (N = 133)
  Mean score*:                            Pretest 12.2   Posttest 14.9
  Standard error of the mean difference:  .26
  Difference of means:                    2.7
  Statistical summary†:                   t = 10.30, df = 132, significant at p < .001

*Scores refer to responses on a 20-item skills test.
†The t ratio test of the significance of differences of means for matched observations (test/retest format) is used to evaluate the differences observed. Since the direction of change is predicted, a one-tailed test is used to assess the level of confidence of the t scores.

It should be noted that the control group students exhibited a very slight decline in their aggregate skills scores (from an average of 12.4 to 12.2 items answered correctly on a 20-item test), while the instructed students registered a mean aggregate gain of 2.7 items on the retest. The t ratio test for matched observations was used to evaluate the degree of statistical significance of the difference in means for the test and control groups, with the result that the amount of change in the control group proves insignificant while the change in the test group proves highly significant.

The findings reported in table 1 indicate that the instruction received by students was effective in improving library information search skills. The difference of means between pre- and posttests for the test group indicates the significance of effect, and the comparison of that difference with the results of the control group assures us that the difference observed in the test group was not an artifact of time or shared environment.

The interpretation of the 14.9 posttest score can be made in relation to the gain possible on this test. From an average pretest score of 12.2 the students could gain a possible 7.8 points on a twenty-item test. A gain of 2.7 represents 34.6 percent of the total possible gain of 7.8 on a test from which both the overly easy and the overly difficult items have been eliminated.

The interpretation of the 14.9 score for instructed freshmen can be even more meaningful when it is compared to some relevant reference group norm. To the end of establishing such a norm for interpretative comparison, the graduating seniors of DePauw's 1977 class were surveyed. Approximately half of the members of the class were selected at random, with 60 percent (ninety-five students) completing and returning questionnaires. The mean score for the seniors was 14.8, indicating that library-use instruction can bring freshman students to the level of competence on general library skills of graduating seniors (who have four years of library-use experience!) within the period of a single semester and within the context of three brief sessions with librarians.

In assessing the value of instruction programs, it is very important to determine whether the effects of instruction are generalized or specific, that is, whether just some or all kinds of students derive benefit from instruction. The question must be raised whether library-use instruction is appropriate for all levels of students or whether it might be "too elementary" for the brightest students and/or "too difficult" for the slower students. In table 1 it is important to note that the standard error of the mean difference (a measure of variation about the average amount of change evidenced by individuals) for the test students is almost half that of the control group, an indication that the effect of instruction was quite uniform among the students instructed.
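For readers who want to reproduce this kind of summary, the following sketch shows a matched-pairs t test (halved to one-tailed, since the direction of change is predicted) and the share-of-possible-gain arithmetic used above. The function names are ours, not the authors'; only the two published group means are echoed here as a check.

```python
# Sketch of the table 1 statistics; illustrative, not the authors' computation.
from scipy import stats


def one_tailed_paired_t(pre, post):
    """Matched-pairs t test; halve the p value because the direction of change is predicted."""
    t, p_two_sided = stats.ttest_rel(post, pre)
    return t, p_two_sided / 2


def share_of_possible_gain(pre_mean, post_mean, max_score=20):
    """Gain expressed as a share of the gain still available at pretest."""
    return (post_mean - pre_mean) / (max_score - pre_mean)


# Check against the published means for the instructed classes:
print(round(share_of_possible_gain(12.2, 14.9), 3))   # 2.7 / 7.8 -> 0.346
```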
Table 2 displays the data from the evaluation study in a way that the question of generalizability of effects can be more directly assessed. Table 2 investigates the degree of improvement demonstrated by slower, average, and brighter students, as classified by scores on the Scholastic Aptitude Test (S.A.T.). 18

TABLE 2
COMPARISON OF TEST/RETEST SCORES ACROSS S.A.T. VERBAL LEVELS

Low S.A.T. Verbal Level: Lower than 500 (N = 39)*
  Mean score (20-item scale):             Pretest 11.6   Posttest 14.6
  Standard error of the mean difference:  .45
  Difference of means:                    3.0
  Statistical summary:                    t = 6.67, df = 38, significant at p < .001†

Medium S.A.T. Verbal Level: 500 to 549 Range (N = 49)
  Mean score (20-item scale):             Pretest 12.2   Posttest 14.6
  Standard error of the mean difference:  .51
  Difference of means:                    2.4
  Statistical summary:                    t = 4.71, df = 48, significant at p < .001

High S.A.T. Verbal Level: 550+ Range (N = 32)
  Mean score (20-item scale):             Pretest 12.8   Posttest 16.0
  Standard error of the mean difference:  .43
  Difference of means:                    3.2
  Statistical summary:                    t = 7.44, df = 31, significant at p < .001

*S.A.T. scores were not available for 13 students.
†Level of significance of one-tailed tests.

It should be noted that in all three classifications the amount of improvement registered is statistically significant to a high degree. (It also should be pointed out that this is not a test of any hypothesis concerning the effect of the S.A.T. on the impact of library-use instruction. Such a hypothesis regarding instruction effects would involve a much different research design and the employment of very different statistical procedures. In this study the authors used S.A.T. scores only to assess the uniformity of learning rates.)

It is encouraging to observe that both the slower students (with S.A.T. verbal scores below 500) and the brighter students (with scores of 550+) register mean gains of 3.0 items or better. The uniformity of positive results across the three categories of student aptitude demonstrates that all kinds of students stand to benefit from library-use instruction.
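The uniformity check in table 2 amounts to banding the instructed students by S.A.T. verbal score and repeating the matched-pairs test within each band. A sketch follows; the DataFrame column names and the exact band edges are assumptions, and the original analysis was of course done from paper score sheets rather than code.

```python
# Sketch of the per-band analysis behind table 2; column names and band edges are assumptions.
import pandas as pd
from scipy import stats


def change_by_sat_band(df):
    """df columns assumed: 'sat_verbal', 'pre', 'post' (one row per instructed student)."""
    bands = pd.cut(df["sat_verbal"], bins=[0, 499, 549, 800],
                   labels=["below 500", "500-549", "550 and above"])
    for label, group in df.groupby(bands, observed=True):
        gain = (group["post"] - group["pre"]).mean()
        t, p = stats.ttest_rel(group["post"], group["pre"])
        print(f"{label}: mean gain {gain:.1f}, t = {t:.2f}, one-tailed p = {p / 2:.4f}")
```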
THE EFFECTS OF INSTRUCTION UPON STUDENT ATTITUDES

Two types of attitudinal analysis must be accomplished in a systematic summative evaluation. First, the attitudes of subjects of instruction toward the instruction presented must be assessed; and, second, the degree of attitude change of subjects toward the phenomenon or skill area of instruction must be measured. The first type of question will inform the evaluator of the degree to which subjects view instruction as beneficial, an important aspect of instruction given the fact that self-motivated learning tends to be retained longer than enforced mastery. 19 The second type of question will assess the degree of effect instruction has upon relevant attitudes of impact, i.e., attitudes that the instruction was designed to alter in some manner.

Table 3 reports the results of a survey of student participants in the library-use program. A fairly strong pattern of positive responses is evidenced in each of the questions asked. A clear majority of students instructed in the program found the instruction to be informative and useful, and only a small proportion of the students believed that the program was repetitious.

TABLE 3
PARTICIPANT ATTITUDES TOWARD LIBRARY INSTRUCTION: POST-INSTRUCTION STUDENT IMPRESSION (N = 177)*

Question: Were the classroom presentations on the Library Search Process . . .

                                    Very Much So                                      Not At All    Not Sure &
                                         5            4            3            2          1        No Answer
Useful?                             15% (27)     40% (70)     40% (70)      5% (8)     0% (0)        1% (2)
Informative?                        25% (44)     44% (77)     27% (47)      3% (5)     1% (1)        2% (3)
A Waste of Time?†                    1% (2)       6% (10)     18% (32)     27% (48)   43% (76)       5% (9)
Enjoyable?                           4% (7)      14% (24)     49% (86)     20% (36)   19% (21)       2% (3)
Repeating What You Already Knew?     6% (11)     11% (20)     37% (65)     23% (40)   21% (37)       2% (4)

*These questions were part of a separate survey form completed by students involved in the program each semester. The results are used for formative evaluation, i.e., to modify the program to make it more effective.
†Attitude measurement scales must contain a mixture of negative and positive items to avoid the adverse effects of "response set" phenomena. On the question of effects of survey structure upon attitude measurement see Seymour Sudman and Norman Bradburn, Response Effects in Surveys: A Review and Synthesis (Chicago: Aldine Pub. Co., 1974), p.28-00.

The final area of evaluation is that of attitude change resulting from instruction. It must be noted that relatively little overall attitude change was registered; mean responses on the six attitude questions differed only marginally between pre- and posttest scores. Most interestingly, however, although overall attitude change was not dramatic, attitude change was important for that group of students initially holding negative attitudes toward libraries. Table 4 displays the pattern of response typical of these students. It should be noted that, although the control students demonstrated no net change in their attitudes on the question of viewing libraries as unexpectedly interesting places to be in, the test group students register a strongly positive pattern of change on this question.

TABLE 4
COMPARISON OF PRE- AND POST-INSTRUCTION RESPONSES OF TEST AND CONTROL RESPONDENTS WHO ARE LIKELY TO FIND LIBRARIES TO CONTAIN INTERESTING THINGS

Question: When I go to a library I often spend more time than I planned because I find so many interesting things.
Responses: Strongly Agree (5 points), Agree (4), Undecided (3), Disagree (2), Strongly Disagree (1).
Comparison Group: All control and test group students who gave 1 and 2 responses on the pretest survey.

                                     Control Classes (N = 12)    Test Classes (N = 67)
Gains (toward agreeing responses)*             +3                        +35
No Change                                       6                         35
Losses                                         -3                        -11
Net Score                                       0                        +24

*Changes aggregated in scale units. Test/retest scores are subject to "regression toward the mean" effects, particularly when extreme values are compared in a before-and-after framework. In this case, however, both the control and test groups are subject to the same regression effects (to the extent they are present in this instance). On the problem of regression toward the mean see Donald T. Campbell, "Reforms as Experiments," in James A. Caporaso and Leslie L. Roos, Jr., eds., Quasi-Experimental Approaches: Testing Theories and Evaluating Policy (Evanston: Northwestern Univ. Pr., 1973), p.196-97.

It appears that either attitudes about libraries are far more difficult to influence or measure than are library-use skills, or the particular program at DePauw University was less effective in influencing attitudes than it was in influencing library-use skills. It is understandable that three brief sessions with librarians might have little impact on the attitudes of students toward the library, considering that a number of studies have concluded that four years of college experience have little effect on the attitudes and values held by many students. 20 The authors suggest that further research is indicated to examine skills-change versus attitude-change problems in library-use instructional programs.
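The table 4 tally can be expressed as a short scoring routine. The sketch below follows the rule stated in the table note (changes aggregated in scale units, counted only for students who initially disagreed); the input lists are placeholders rather than the study data.

```python
# Sketch of the table 4 scoring rule; input lists are placeholders, not the study data.
def net_attitude_change(pre, post):
    """pre, post: parallel lists of 1-5 Likert responses to one attitude item."""
    gains = losses = no_change = 0
    for before, after in zip(pre, post):
        if before not in (1, 2):          # only initially disagreeing respondents are scored
            continue
        diff = after - before             # change measured in scale units
        if diff > 0:
            gains += diff
        elif diff < 0:
            losses += diff
        else:
            no_change += 1
    return {"gains": gains, "no_change": no_change, "losses": losses,
            "net_score": gains + losses}
```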
CONCLUSIONS

This article has sought to accomplish two goals: to provide a model of evaluation and its application, which may be of use to others interested in systematic assessment of instructional programs; and to present evidence of the utility of library-use instruction.

From the previous discussion, and from evidence collected in the DePauw University library-use instruction program, it would seem that library-use instruction is an important element in enhancing the role of the academic library in the educational process through promoting increased skill development and, to a more limited degree, positive attitudinal change on the part of the students involved.

The evaluation model described did answer the questions as to whether the students increased their general library-use skills and whether students from different aptitude groups benefited in this respect from library-use instruction. The evaluation model was less successful in addressing questions relating to attitudinal changes resulting from such instruction. From the number of questions that proved unreliable on this part of the pretest questionnaire, it may be surmised that the DePauw study instrument either requires revision or requires the employment of a supplementary methodology (such as documenting student behavior) to measure any attitudinal changes that may occur. This area proved to be much more complex and difficult to study than was supposed at the beginning of the study.

The evaluation model used did indicate that library-use instruction is an activity that can prove its worth in a systematic assessment of its impact. The authors believe that the careful application of systematic assessment to library-use programs can be very important in obtaining adequate, long-term support for such programs from library and college or university administrators.

Using such methods, it is possible to obtain useful information, such as the fact that freshman students can develop library-use skills equal to those of graduating seniors through relatively little instruction; or, taken from another point of view, that the library skills of graduating seniors are no better than those of freshman students after brief instruction. These and related findings generated by evaluation studies can be very important in gaining needed administrative support for a library-use instruction program.

The information provided in this article, and other related information quantifying the results of the library-use instruction program at DePauw University, proved helpful in gaining administrative support for a successful grant proposal to the Council on Library Resources and the National Endowment for the Humanities under their joint College Library Program.

It is hoped that this example will encourage others active in promoting library-use instruction programs to apply the principles of systematic assessment in evaluating the achievement of instructional objectives in their particular programs.

REFERENCES

1. Edward A. Suchman, Evaluative Research (New York: Russell Sage Foundation, 1967), p.2.
2. Library-use instruction in academic libraries has a long history and has gone through several cycles of popularity dating back to before the turn of the century. For more complete information on the history of library-use instruction see Kenneth Brough, Scholar's Workshop: Evolving Concepts of Library Service (Urbana, Ill.: Univ. of Illinois Pr., 1953) and Johnnie Givens, "The Use of Resources in the Learning Experience," Advances in Librarianship 4:149-74 (1974).
3. Scarvia B. Anderson and others, Encyclopedia of Educational Evaluation (San Francisco: Jossey-Bass, 1973), p.406.
4. Ibid., p.177.
5. 20th Annual Report of the Council on Library Resources, Inc. (Washington, D.C.: Council on Library Resources, 1976), p.55.
6. Patricia B. Knapp, "Suggested Program of College Instruction in the Use of the Library," Library Quarterly 26:224-31 (July 1956).
7. Evan Farber, "Library Instruction Throughout the Curriculum: Earlham College Program," in John Lubans, ed., Educating the Library User (New York: Bowker, 1974), p.145-62.
8. Marvin Wiggins, "The Development of Library Use Instruction Programs," College & Research Libraries 33:473-79 (Nov. 1972).
9. Martha Hackman, "Proposal for a Program of Library Instruction," Drexel Library Quarterly 7:299-308 (July-Oct. 1971).
10. For an extensive bibliography on library-use instruction see Maureen Krier, "Bibliographic Instruction: A Checklist of the Literature, 1931-1975," Reference Services Review 4:7-31 (Jan.-March 1976).
11. A full report of the activities carried out as a result of the Library Service Enhancement Program grant to DePauw University is available from ERIC. See James Martindale and Larry Hardesty, Library Service Enhancement Program, DePauw University, Grant Proposal and Quarterly Reports, U.S. Educational Resources Information Center, ERIC Document ED 145 839, March 1978.
12. The authors recognize that there are a number of useful alternatives to paper-and-pencil tests in evaluating a program. For an excellent discussion of these methods see Eugene Webb and others, Unobtrusive Measures: Nonreactive Research in the Social Sciences (Chicago: Rand McNally & Co., 1966).
13. Masse Bloomfield, "Testing for Library-Use Competence," in Lubans, ed., Educating the Library User, p.221-31.
14. The authors recognize that face validity is only one of several validity checks, and in part their judgment here was based on Earl R. Babbie, The Practice of Social Research (Belmont: Wadsworth, Inc., 1973), p.360.
15. For an excellent exposition of methodological concerns regarding systematic evaluation see Donald T. Campbell and Julian C. Stanley, Experimental and Quasi-Experimental Designs for Research (Chicago: Rand McNally & Co., 1963), p.1-22.
16. For a discussion of the test/retest reliability technique see H. W. Smith, Strategies of Social Research: The Methodological Imagination (Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1975), p.58-61.
17. For a discussion of the uses of Pearson correlation coefficients see Kenneth Bailey, Methods of Social Research (New York: Free Press, 1978), p.341. The coefficient is a measure of association between two measures; in this case, the more similar the responses of individuals on pre- and posttest items, the higher the measure of reliability of the item in question.
18. For a discussion of the Scholastic Aptitude Test see Oscar Krisen Buros, ed., The Seventh Mental Measurements Yearbook (Highland Park, N.J.: The Gryphon Press, 1972), p.646-50.
19. On motivation and retention of learning see Herbert J. Klausmeier, Learning and Human Abilities: Educational Psychology (New York: Harper & Row, 1961), p.319-77.
20. For a somewhat dated but excellent overview of research findings concerning the impact of college experiences on the attitudes and values of students see Paul Heist, "Student Characteristics: College and University," in Robert L. Ebel, ed., Encyclopedia of Educational Research (4th ed.; New York: Macmillan Co., 1969), p.1323-25.
APPENDIX: LIBRARY-USE INSTRUCTION EVALUATION INSTRUMENT

Pre-Tested Attitude Items

Response categories = Likert scale: Strongly Agree, Agree, Undecided, Disagree, and Strongly Disagree.

1. I find a library a very comfortable place to work when I need to go there.
2. Walking into a library is like going into church because I'm in awe of the surroundings.
3. I only go to a library when someone makes me go.
4. When I go to a library, I often spend more time than I planned because I find so many interesting things.
5. A person should only ask a librarian for help when it looks as if they aren't busy.
6. Normally a librarian can only help you when you know what you're looking for.

Skills Test

Directions: In the following exercise read each item carefully. Decide which area of the library is the most logical place to start your search for the information described in the item. Respond to each item by placing the IDENTIFICATION NUMBER of your choice in the blank preceding it.

Library Area:
1  Card Catalog
2  Index Area
3  Reference Area
4  Rotary File of Periodical Holdings
5  Periodicals Reading Room
6  New York Times & Index
7  Government Documents
8  Abstracts

(The number preceding each item is the keyed answer.)

 ___  1. Wilson, John Arthur, Modern Practice in Leather Manufacturing
 3/7  2. Census data on Putnam Co., Indiana
  5   3. A current Newsweek for browsing
  5   4. You want to make a current comparison of the Indianapolis Star and the Chicago Tribune
  1   5. Games People Play
  7   6. Congressional debates on the Alaska pipeline
  3   7. Who's Who in the Humanities
  4   8. A professor sends you to read an article in May 1975 Society
  1   9. Watchmaking information to check out of the library
  4  10. Does the Library subscribe to Ms., Ebony, or Time?
  7  11. Book with a Superintendent of Documents classification #
  2  12. Find current information on the fad of tie-dyeing
  7  13. Supreme Court decisions
  3  14. Birthdates for Albert Schweitzer and Lawrence Welk
  6  15. Day-to-day coverage of the Kent State "incident"
  2  16. Magazine article on ESP
  2  17. Review in a magazine of Alistair Cooke's America
 1/3 18. Bibliography of resources on black Americans
  3  19. Ten longest bridges in the world
 3/7 20. Organization chart of the U.S. Postal Service