Assessment in the One-Shot Session: Using Pre- and Post-tests to Measure Innovative Instructional Strategies among First-Year Students

Jacalyn E. Bryan and Elana Karshmer

Jacalyn E. Bryan is Reference and Instructional Services Librarian and Assistant Professor, and Elana Karshmer is Instruction Program and Information Literacy Librarian and Associate Professor, at Cannon Memorial Library at Saint Leo University; e-mail: Jacalyn.Bryan@Saintleo.edu, Elana.Karshmer@Saintleo.edu. The authors would like to acknowledge the invaluable assistance provided by the following individuals: Dr. Jeffrey Anderson, Associate Vice President for Assessment and Institutional Research; Ms. Johanna Lane, Statistical Analyst for Assessment and Institutional Research; and Dr. Richard G. Bryan, Professor of Psychology, all of Saint Leo University. © 2013 Jacalyn E. Bryan and Elana Karshmer, Attribution-NonCommercial (http://creativecommons.org/licenses/by-nc/3.0/) CC BY-NC

Many studies focus on the use of different assessment tools within information literacy instruction; however, very few discuss how pre- and post-tests can be used to gauge student learning, and fewer still deal with pre- and post-test assessment within the one-shot paradigm. This study explores the effectiveness of using nonlinguistic representations—kinesthetic, graphic, and physical models—in one-shot library sessions for first-year students in SLU 100: Introduction to the University Experience. As hypothesized, the findings suggest that the use of such representations can enhance student learning and assist in developing research skills that are essential to acquiring information literacy.

Developing effective methods of teaching information literacy skills to first-year students is a continuing project; while many methods have been proposed, there is no single sure-fire technique that ensures students learn and integrate the skills they need to locate, evaluate, and use information effectively. Assessment is an important element of instructional design that enables librarians to gauge what students are learning and provides information that can be used in designing more effective lessons. Pre- and post-tests are especially useful in that they can demonstrate the degree to which specific instructional strategies affect student learning. This article will discuss the use of pre- and post-tests within one-shot sessions designed to introduce first-year students to basic elements of information literacy. In addition, we attempted to ascertain the usefulness of effective instructional strategies identified by the Mid-continent Research for Education and Learning (McREL) Institute.

Background

At Saint Leo University, all first-year students are enrolled in SLU 100: Introduction to the University Experience, which "provides a framework of effective academic and personal strategies to help the student succeed both in and out of the classroom."1 There are approximately 25 sections of SLU 100 offered each fall, and each section has about 20 students. Each section of SLU 100 visits the library for one class period to learn about basic library resources and research skills. In the past, these library instruction sessions were taught by a single librarian selected from a small group of librarian teaching faculty based on availability. There was little uniformity, since each librarian tended to focus on different library skills and information resources.
In addition to revising session content to achieve uniformity, we believed that it was important to increase student engagement in the sessions. Although previous iterations of the SLU 100 library instruction session might have included elements of active learning, these elements were not applied consistently. After reviewing best practices in library instruction and considering the issues that prompted the redesign of the SLU 100 library session—namely, session uniformity, inclusion of active learning experiences, and effective use of available librarian teaching faculty—we developed a model that incorporated the ACRL Information Literacy Competency Standards for Higher Education, the McREL instructional strategies for effective teaching, and Gilchrist's "assessment as learning" framework.

Gilchrist's "assessment as learning" framework requires that instruction be designed around specific measurable outcomes that then guide the development of the curriculum and pedagogy that is used. Once these elements are in place, the instructor must create an assessment tool that enables students to demonstrate what they have learned, which is then evaluated by the instructor based on criteria that indicate whether the students have successfully demonstrated what the session was designed to teach. This method includes ongoing opportunities for revision to the curriculum and pedagogies incorporated to maximize student learning. For example, if an outcome states that students will learn to effectively navigate the library homepage, the curriculum would include lessons on how to access the library homepage and what is available under each link. The pedagogy might include both a video and live demonstration of these techniques, as well as an activity that asks students to practice these skills. We might collect students' worksheets to assess student learning, and one criterion for evaluating their performance could entail having them answer related questions during the Library Jeopardy game.2

In 1998, researchers at McREL conducted a meta-analysis of research studies on instructional techniques that could be employed by K–12 classroom teachers. They identified nine instructional strategies that were likely to enhance student learning in all subject areas and all grade levels.3 These nine instructional strategies were integrated into the lesson plan for the SLU 100 library session as follows:

• Identifying similarities and differences. Example: Students were asked to compare PDF and HTML formats for journal articles.

• Summarizing and note taking. Example: Students completed a worksheet while watching the presession videos.

• Reinforcing effort and providing recognition. Examples: Librarian teaching faculty provided specific, contingent recognition and praise during the group library activity and the Library Jeopardy game; token prizes were awarded to the winning team of the Library Jeopardy game.

• Homework and practice. Examples: The group library activity provided an opportunity for practice on library concepts as students completed a worksheet; SLU 100 course instructors gave their students additional assignments that required the use of library resources.
• Nonlinguistic representations. Examples: 1) Graphic image: CAARPy (a standup cardboard figure of a carp fish, an acronym representing currency, authority, accuracy, relevance, and purpose, used in evaluating websites); 2) Physical model: Catalog Box (a box containing physical examples of various types of resources available in the library catalog); 3) Kinesthetic activity: Boolean search terms (students were asked to "stand up" if they corresponded to the following conditions: brown hair OR brown eyes; brown hair AND brown eyes; brown hair AND brown eyes, but NOT wearing flip-flops).

• Cooperative learning. Examples: Students engaged in cooperative learning during the group library activity that required them to form teams and complete a worksheet using library resources; the same teams then worked together during Library Jeopardy.

• Setting objectives and providing feedback. Examples: Objectives for the SLU 100 library session were stated at the beginning of class and were also included on the student evaluation form. Librarian teaching faculty also provided concurrent, specific feedback during the group library activity and the Library Jeopardy game.

• Generating and testing hypotheses. Example: Students were asked to predict whether the number of search results would be larger or smaller based on the use of specific Boolean terms.

• Cues, questions, and advance organizers. Examples: The presession videos and the accompanying worksheet served as an advance organizer for the SLU 100 library session. Questions were included on the group library activity worksheet and in the Library Jeopardy game, where the questions were ordered according to Bloom's taxonomy.4

This new model for the SLU 100 library session was implemented in the fall of 2009, using basic student evaluation forms to gather information on student learning. The evaluation form asked students to rate the videos, activities, their own skills, and the usefulness of the library session. However, based on our experience in teaching the SLU 100 library sessions and a review of those evaluation forms, we found that our assessment tool was actually only measuring student "perceptions" of what they had learned rather than actually measuring possible improvements in their library skills. We determined that the best method for measuring these skills would be to implement a pre- and post-test to gather baseline data regarding students' library skills prior to library instruction and then to test those skills again after taking part in the redesigned library session.

Literature Review

Although the literature is replete with examples of the ways in which assessment can be used to develop and improve library and information literacy instruction—both for programmatic and accreditation purposes—there is very little written on the results of the assessment of one-shot library sessions. While the lessons learned by librarians engaged in the assessment of semester-long courses or recurring instruction sessions can be useful for librarians teaching one-shot classes, the differences between these types of sessions (such as length of session or librarian's input over course content) can make it difficult for librarians involved in one-shot instruction to meaningfully adapt suggested lesson plans, activities, and assessment tools to their own needs.
A review of the existing literature related to assessment yields a wide range of results; however, few of the articles available discuss the process of assessment in terms of a pre- and post-test approach, and even fewer mention the assessment process within a one-shot instruction framework. Thus, in reviewing the available literature, we found it most useful to focus on articles that discussed either a) one-shot instruction strategies, or b) the use of pre- and post-tests in any form of library instruction that took place in discrete sessions (that is, in library instruction sessions that were not part of a specific information literacy and/or library research skills course).

Of the articles that discussed one-shot instruction, two emerged as especially useful for developing the framework for our study: Portmann and Roush's "Assessing the Effects of Library Instruction" and Choinski and Emanuel's "The One-Minute Paper and the One-Hour Class." Measuring the influence of a 50-minute "library training/orientation session" on students' library usage and their development of library skills, Portmann and Roush found that their one-shot instruction session seemed to increase students' library usage.5 At the same time, their results suggest that one-shot library instruction did not have a statistically significant effect on students' library skills. They also note, however, that their findings did not corroborate past experiments and that this was perhaps the result of several factors, including the lack of a clear outline detailing the session curriculum for students, a smaller than desired sample size, less-than-ideal student participation in the experiment, and the absence of a valid and reliable data-collection instrument. The authors' evaluation of their methodology and discussion of the concerns raised by their subsequent results offer invaluable suggestions for researchers embarking on similar projects; their focus on one-shots and their willingness to critique their findings objectively and in light of previous research make this article essential reading, both for study design and for evaluation after the fact.6

In their attempt to develop and use an assessment tool that could be deployed in a one-shot instructional scenario, Choinski and Emanuel considered several factors that ultimately led them to use a "one-minute paper."7 In developing their assessment tool, the authors explained that they needed an instrument that was objective and quantitative, flexible, easy to use and evaluate, and both relevant to the ACRL information literacy standards and useful for accreditation purposes. They felt that the one-minute paper, if properly focused, could be used to collect objective and quantitative information. To ensure that the information they gathered was as quantitative and objective as possible, Choinski and Emanuel asked students to respond to specific questions that they felt had clear-cut answers.8 However, it is not entirely clear that they accomplished their goal of collecting data that were truly objective and fully quantifiable. For the purposes of the present study, their advocacy of the importance of collecting objective and quantitative data was crucial to and informed the development of our pre- and post-test. In addition, Choinski and Emanuel's article is especially useful for librarians developing a one-shot session because of its focus on outcomes-based assessment within a single-session model.
While the bulk of the article discusses the deployment and evaluation of the one-minute paper, the many skills the authors were able to assess by using this tool suggest that it may be a fruitful model for future research. We extended this outcomes assessment emphasis to include a pre-test measure as well as the post-test measurement.

Given the current popularity of assessment as a necessary component of instruction, it is surprising that so few articles focus on the use of pre- and post-tests as a means by which student learning can be examined. In her article "Closing the Assessment Loop Using Pre- and Post-Test Assessment," Swoger focuses specifically on the use of pre- and post-tests in the development of a one-shot library session that prepares students for the research-related tasks they will be expected to complete during their participation in a first-year writing and critical thinking course.9 Swoger explains the process of revision that she and her colleagues embarked upon in redesigning their instructional approach. By including assessment in the planning stages of the course redesign, the librarians involved in the project were able to prioritize the knowledge and skills they wanted their students to learn by developing specific activities keyed to their chosen outcomes. The pre- and post-test approach allowed her team to gather baseline data regarding what students knew coming into their sessions and then to compare that knowledge to the skills they acquired during the instruction period.10 The pre-test and post-test were designed around a set of goals that Swoger and her colleagues determined were especially important for first-year students and focused on basic skills like identifying peer-reviewed resources, using a database to access an article on a given subject, and knowing where to locate information on citing sources, finding assistance, and the like.11

The pre- and post-test that Swoger and her colleagues developed included a variety of question types, including short answer and multiple-choice, and were considered "open book" in format; that is, students were allowed to use any resource available in the library while completing both the pre- and post-tests. In evaluating the data, she found that some of the questions included on the assessment tool were confusing and indicated that certain resources within the library were not as easy to access as librarians believed. Overall, however, the use of a pre-test and post-test enabled Swoger and other librarians at her institution to reflect on their instructional practices, revising goals and objectives as well as teaching strategies to better prepare their students.12

Aim and Scope

As an assessment tool, the use of pre-tests enables researchers to establish a baseline level of knowledge and determine, by comparison to the post-test results, whether the instructional design produced the desired results. Our pre- and post-test comparison had two objectives. First, we wanted to compare the overall level of library skills before and after receiving library instruction. Second, we wanted to determine if the use of McREL strategies, specifically the integration of nonlinguistic representations into the lesson, was effective in enhancing student learning.
Nonlinguistic representations include graphic images, physical models, and kinesthetic activities used as tools to help students retain information.13 We predicted that students who received library instruction using these nonlinguistic representations would better internalize the information presented and receive higher scores on the post-test as compared to students who received only linguistic instruction.

Overall Design

The overall design of the study involved the following phases: 1) In the SLU 100 class meeting immediately prior to the library visit, a pre-test of basic library concepts was administered to students. Following the pre-test, the students viewed presession videos on library concepts and skills. 2) On the day of the scheduled library visit, students received instruction in library concepts and skills. These library instruction sessions were always provided by the same two librarian teaching faculty. 3) In the next SLU 100 class meeting following the library visit, the post-test was administered to students.

Methodology

Before conducting this study, we submitted an application to the Institutional Review Board at Saint Leo University, which granted approval.

The SLU 100 course is taught by a wide variety of faculty, administrators, and staff at Saint Leo University, who will hereafter be referred to as "SLU 100 course instructors." As part of the course, each section of SLU 100 visits the library for one class period to receive instruction on library concepts and skills. In the class meeting immediately prior to the library visit, the SLU 100 course instructors were asked to read and distribute the "Consent to Participate in Research" form to their students. The form included a description of the study and explained that the information gathered would be kept confidential. The form also stated that students needed to be 18 years of age or older to participate in the study and that participation was voluntary. (Students who were not yet 18 years old were permitted to complete the pre- and post-tests, but their results were destroyed and not included in the study.) The SLU 100 course instructors would then administer the pre-test to their students. After the pre-test was completed and collected, the class would watch a series of library videos that introduced the main concepts to be covered in the library session. The completed pre-tests were brought to the library by the SLU 100 course instructors on the day of the library visit, where they were logged and coded for the purposes of confidentiality.

In preparation for the library visit, the 27 sections of SLU 100 were randomly divided and labeled for convenience as "nonvisual" (control group) and "visual" (experimental group). The nonvisual/control group received library instruction that did not include the supplemental nonlinguistic representations, while the visual/experimental group received library instruction that was supplemented by the use of the nonlinguistic representations.
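As a point of illustration only, this kind of section-level random assignment amounts to shuffling the list of class sections and splitting it between conditions. The sketch below shows one way to do this in Python; the section labels and the roughly even split are our assumptions, since the article does not report the exact number of sections per condition.

import random

# Illustrative only: 27 sections, randomly divided into two conditions.
# Labels and split size are assumptions, not details from the study.
sections = [f"SLU100-{i:02d}" for i in range(1, 28)]
random.shuffle(sections)
half = len(sections) // 2
visual, nonvisual = sections[:half], sections[half:]   # 13 visual, 14 nonvisual
print(len(visual), "visual sections;", len(nonvisual), "nonvisual sections")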
The nonlinguistic representations included: 1) a physical demonstration of the "catalog box," a box that contained physical examples of the types of library resources found in the catalog (such as books, ebooks, periodicals, and media); 2) a kinesthetic exercise to demonstrate the Boolean terms AND, OR, and NOT (for instance, stand up if you have: a) brown hair OR brown eyes; b) brown hair AND brown eyes; c) brown hair AND brown eyes, but you are NOT wearing flip-flops); and 3) a visual model of CAARPy (a standup cardboard figure in the form of a carp fish, shown in figure 1) to explain how to evaluate Internet resources according to the acronym CAARP: currency, authority, accuracy, relevance, and purpose. The set logic behind the kinesthetic Boolean exercise is sketched in code below.

Figure 1. CAARPy

The library instruction sessions were team-taught by the same two librarian teaching faculty, who were also the principal investigators of this study and who will hereafter be referred to as "Librarian A" and "Librarian B." As team teachers, Librarians A and B each taught specific parts of the library instruction sessions, and this remained consistent throughout the study for both the experimental (visual) and control (nonvisual) groups (that is to say, the same librarian taught the same concepts to both the control and experimental groups, using the nonlinguistic representations for the latter). Librarian A covered the location of resources and services in the library, the library homepage, catalog searching (catalog box for the visual group), and database searching. Librarian B covered searching hints, which included the use of quotation marks, Boolean searching (kinesthetic activity for the visual group), and evaluating websites (CAARPy for the visual group). Librarians A and B both supervised the group library activity, where students used library resources to answer questions on a worksheet. Librarian A then reviewed the answers for the group library activity, and Librarian B concluded the session by conducting the Library Jeopardy game.

During the next SLU 100 class meeting following the library visit, the SLU 100 course instructors administered the post-test and returned the completed forms to the library, where they were logged and coded.

After teaching the first five library sessions for SLU 100, Librarians A and B noticed that the number of completed pre- and post-tests received was relatively low compared to the number of students in each section of SLU 100. In some cases, the SLU 100 course instructors simply forgot to administer the pre- or post-tests. In other cases, it was unclear why the pre- and post-tests were not being completed to a greater degree. At that point, the decision was made to have Librarians A and B visit the SLU 100 classrooms in person to improve the level of participation and administer the tests. Thereafter, during the class meeting prior to the library visit, either Librarian A or B (depending on their schedules) went to the designated SLU 100 classroom to brief the students, distribute the consent form, and administer the pre-test. During the class meeting following the library visit, usually within a period of two to five days, either Librarian A or B visited the SLU 100 classroom again to administer the post-test.
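The kinesthetic Boolean exercise described above maps directly onto elementary set operations: OR is union, AND is intersection, and NOT is difference. The following minimal Python sketch uses hypothetical student names (not data from the study) to show why OR always produces the largest result set, the point later tested by question 2 of the pre- and post-test.

# Hypothetical roster traits for the stand-up exercise (illustrative only).
brown_hair = {"Ana", "Ben", "Cal", "Dee"}
brown_eyes = {"Ben", "Dee", "Eli"}
flip_flops = {"Dee"}

# OR = union: stand up if you have brown hair OR brown eyes.
print(sorted(brown_hair | brown_eyes))                 # ['Ana', 'Ben', 'Cal', 'Dee', 'Eli']

# AND = intersection: stand up if you have brown hair AND brown eyes.
print(sorted(brown_hair & brown_eyes))                 # ['Ben', 'Dee']

# AND ... NOT = difference: brown hair AND brown eyes, but NOT flip-flops.
print(sorted((brown_hair & brown_eyes) - flip_flops))  # ['Ben']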
A typical schedule for a SLU 100 class that meets on Mondays, Wednesdays, and Fridays would be as follows: 1) Monday—Librarian A or B goes to the SLU 100 classroom and administers the pre-test, and then the SLU 100 course instructor shows the presession videos; 2) Wednesday—the SLU 100 course instructor brings his or her class to the library, and Librarians A and B conduct the library instruction session; 3) Friday—Librarian A or B goes to the SLU 100 classroom to administer the post-test. This modified data collection strategy was successful and resulted in a greatly improved response rate. In some instances, however, students completed the pre-test and not the post-test, or vice versa; these test results were eliminated from the study.

Findings

The pre- and post-tests included the same eight multiple-choice questions pertaining to various areas of information literacy and fundamental library skills. The topics addressed in these test items were those considered essential to basic library research and navigation of the library for first-year students and were related to the learning outcomes for the library session (see Appendix A). Questions 4 through 8 were based on concepts presented identically in both the experimental and control groups. However, questions 1 through 3 were based on concepts presented differently to the experimental group, whose presentation incorporated the nonlinguistic representations described above.

The data from the pre- and post-tests were analyzed by the Office of Assessment and Institutional Research at Saint Leo University using SPSS (Statistical Package for the Social Sciences) software, and the statistical analysis involved both independent t-tests and paired samples t-tests.
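For readers who wish to reproduce this style of analysis without SPSS, the minimal sketch below runs both test types using SciPy, which we substitute here for the SPSS procedures named above. All score arrays are made-up stand-ins, not the study's raw data.

import numpy as np
from scipy import stats

# Paired-samples t-test: the same students, pre- vs. post-test (scale 0-8).
pre  = np.array([3, 4, 2, 5, 4, 3, 5, 4])   # hypothetical per-student scores
post = np.array([5, 6, 4, 6, 5, 5, 7, 5])
t_paired, p_paired = stats.ttest_rel(post, pre)
print(f"paired: t = {t_paired:.2f}, p = {p_paired:.4f}")

# Independent-samples t-test: visual vs. nonvisual groups on the three
# treatment items (scale 0-3); groups contain different students.
visual    = np.array([2.0, 1.2, 1.5, 1.1, 1.6])
nonvisual = np.array([1.1, 0.9, 1.3, 1.2, 1.0])
t_ind, p_ind = stats.ttest_ind(visual, nonvisual)
print(f"independent: t = {t_ind:.2f}, p = {p_ind:.4f}")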
Overall, there was a statistically significant improvement between the pre- and post-test scores following the library training in both the control and experimental groups. On a scale of 0–8, the overall mean score for the nonvisual/control group went from 3.76 on the pre-test to 5.03 on the post-test, a 34 percent increase. The visual/experimental group performed even better, with an overall mean score of 3.75 on the pre-test and 5.24 on the post-test, a 40 percent increase (see figure 2).

To determine if the visual training (the nonlinguistic representations) was more effective than the nonvisual training, the post-test scores of the visual and nonvisual groups were examined. A comparison was made between the overall mean scores of the control and experimental groups on the first three questions of the post-test. This allowed us to compare the use (or nonuse) of supplemental nonlinguistic representations in the library orientation sessions. On a scale of 0–3, the aggregated mean score for the nonvisual/control group was 1.15, as compared to an aggregated mean score of 1.34 for the visual/experimental group. The post-test score for the visual/experimental group was higher than that for the nonvisual/control group, and this difference was statistically significant. This finding indicates that the use of nonlinguistic representations resulted in better learning of the concepts involved (see figure 3).

These data were further analyzed by comparing how much information each individual "gained" between pre-test and post-test on the concepts tested in questions 1 through 3. The average gain for the nonvisual/control group was only .06, while the average for the visual/experimental group receiving the nonlinguistic representations was .33 (see figure 4).

Figure 2. Overall mean scores: pre- and post-test scores for nonvisual and visual training (scale 0–8; nonvisual: 3.76 pre, 5.03 post; visual: 3.75 pre, 5.24 post).

Figure 3. Overall mean post-test scores on the three treatment items for nonvisual and visual training (scale 0–3; nonvisual: 1.15; visual: 1.34).

Figure 4. Gains in learning on the three treatment items from pre-test to post-test (nonvisual: 0.06; visual: 0.33).

In examining the results for the individual test items, the mean scores for each item were based on "1" for a correct answer and "0" for an incorrect answer. Tables 1 and 2 show the mean for each test item before and after the library training, as well as the gain or loss and the percent of change; the results are separated into two tables representing the "nonvisual" (control) group and the "visual" (experimental) group.

Table 1. Nonvisual/Control Group: Gains by Item, Pre- and Post-Test Scores

             Q1      Q2      Q3      Q4      Q5      Q6      Q7      Q8
Pre-Test     0.07    0.17    0.84    0.35    0.77    0.41    0.40    0.76
Post-Test    0.05    0.23    0.87    0.54    0.81    0.74    0.86    0.93
Gain/Loss   -0.02    0.06    0.03    0.19    0.04    0.33    0.46    0.17
% Change    -29%     35%     4%      54%     5%      80%     115%    22%

Table 2. Visual/Experimental Group: Gains by Item, Pre- and Post-Test Scores

             Q1      Q2      Q3      Q4      Q5      Q6      Q7      Q8
Pre-Test     0.05    0.15    0.79    0.39    0.82    0.50    0.23    0.81
Post-Test    0.13    0.30    0.91    0.64    0.81    0.70    0.81    0.94
Gain/Loss    0.08    0.15    0.12    0.25   -0.01    0.20    0.58    0.13
% Change     160%    100%    15%     64%    -1%      40%     252%    16%

As stated above, questions 1 through 3 directly involved the experimental treatment (the use or nonuse of nonlinguistic representations). We predicted that the visual/experimental group would perform better than the nonvisual/control group on these questions, which was demonstrated by the above data analysis.

Question 1 asked students to select, from the list provided, an item that is not included in the library catalog. The nonvisual group received a lower mean score on the post-test, with a –29 percent change, while the mean score for the visual group improved, with a 160 percent change. This was likely due to the fact that the visual/experimental group was exposed to the "catalog box" demonstration, while the nonvisual/control group was simply read a "list" of items contained in the library catalog.

Question 2 required students to select the Boolean term that would lead to the greatest number of results when performing a search in a database. In this case, both groups showed improvement on the post-test, with a 35 percent change for the nonvisual/control group and a 100 percent change for the visual/experimental group. While Boolean search strategies were discussed with both groups, only the latter group engaged in a physical/kinesthetic exercise related to this concept, which probably accounted for their superior performance on the post-test.

Question 3 asked students to choose the least important element when evaluating information on websites. The visual/experimental group was exposed to the "CAARPy" visual model, while the nonvisual/control group received only verbal instruction.
The mean score for both groups on this question was relatively high on the pre- and post-tests, with little percent of change; the nonvisual group went from a mean score of 0.84 to 0.87 (a 4% change), while the visual group went from 0.79 to 0.91 (a 15% change). It is likely that the students had prior knowledge regarding this question, which might explain the high scores. Even so, it should be noted that it was again the visual/experimental group that demonstrated the greater improvement.

Of the remaining questions (numbers 4 through 8), the library instruction used only linguistic representations; therefore, the presentations did not differ between the experimental and control groups. Because of this, data from these questions were collapsed across the experimental and control conditions. While some improvement between pre- and post-test scores occurred in all conditions, the amount of improvement varied substantially depending on the particular question. Question 4 asked students what was needed to access library databases from outside the library; substantial improvement (59%) was demonstrated. Question 6 asked students to select the definition of a full-text article; improvement here was also substantial (57%). The greatest improvement (163%) was seen on question 7, which asked students the meaning of the term "peer-reviewed article."

Two of the questions showed only minor improvement between the pre- and post-test scores. Question 8 asked students to which location in the library they would go to check out books; only 19 percent improvement was seen here. The remaining question, number 5, provided a citation for a journal article and asked students to identify the title of the article; essentially negligible improvement (1%) was seen between the pre- and post-test scores for this question. It should be noted that pre-test performance was quite high for both of these questions, with a .79 (on a scale of 0–1) for question 8 and a .80 for question 5, leaving limited room for improvement; that is, a "ceiling effect" may have occurred. (Pre-test performance was not nearly so high for the other questions, where improvement was substantial: question 4 = .37; question 6 = .46; and question 7 = .32.) An alternative interpretation of the minimal improvement on these two questions, of course, may be that the nonlinguistic representation strategy was not applied to them. Had it been applied, we might have seen additional improvement.
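The Gain/Loss and % Change rows in Tables 1 and 2 follow directly from the item means: the gain is the post-test mean minus the pre-test mean, and the percent change expresses that gain relative to the pre-test baseline. The short sketch below recomputes the question 1 through 3 values from Table 2; it also shows why a tiny baseline (such as question 1's .05) can turn a modest absolute gain into a dramatic-looking 160 percent change.

# Recomputing Table 2's Gain/Loss and % Change rows from its item means
# (visual/experimental group, questions 1-3).
pre  = {"Q1": 0.05, "Q2": 0.15, "Q3": 0.79}
post = {"Q1": 0.13, "Q2": 0.30, "Q3": 0.91}

for q in pre:
    gain = post[q] - pre[q]
    pct_change = gain / pre[q] * 100   # change relative to the pre-test mean
    print(f"{q}: gain = {gain:+.2f}, change = {pct_change:+.0f}%")
# Q1: gain = +0.08, change = +160%
# Q2: gain = +0.15, change = +100%
# Q3: gain = +0.12, change = +15%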
Limitations

Several limitations to this study were noted, both while conducting the study and during the data analysis. One weakness that had to be corrected during the study was mentioned earlier: it became clear relatively early that SLU 100 instructors were not following pre-test procedures and, to some extent, post-test procedures as originally designed. It was therefore necessary for the principal investigators to go to the SLU 100 classrooms and administer the pre- and post-tests themselves. This variation in procedures undoubtedly led to variation in the data, which may have reduced the strength of the results; yet, even with this weakness, statistical significance was still achieved.

A second limitation of the study was that it was not possible to randomly assign students within a class to the experimental versus control conditions; therefore, whole classes were randomly assigned to either visual/experimental or nonvisual/control conditions. Although this is not the most desirable means of randomization to ensure initial equivalence between experimental and control groups, there was some evidence to suggest that such equivalence was indeed achieved. For example, it might be suggested that the substantially greater improvement on questions 1 through 3 in the experimental group (+.35) compared to the control group (+.07) was simply due to having more motivated, attentive students within the experimental group, a consequence of the lack of adequate random assignment. However, that interpretation is negated by the performance of the experimental group compared to the control group on questions 4 through 8 (based on the concepts that were taught identically to both groups), on which the control group (+1.19) actually performed slightly better than the experimental group (+1.15). This basically equivalent performance on questions 4 through 8 suggests that the two groups were essentially equivalent to start with in regard to such subject variables as motivation, intelligence, and attentiveness.

Another limitation was the fact that Librarians A and B were the principal investigators of this study as well as the librarians responsible for teaching the library instruction sessions to both the experimental (visual) and control (nonvisual) groups. While this had the benefit of keeping constant the individuals who taught both groups, it leaves open the possibility that some bias influenced the results. Although every effort was made to avoid this, it remains possible that Librarians A and B may have inadvertently taught the experimental group in a manner different from the way they taught the control group: for example, with more enthusiasm or energy. While it would have been desirable for other librarians who were not aware of the parameters of this study to teach the SLU 100 library instruction sessions, that simply was not possible due to the staffing limitations of a small university library. Also, though it does not entirely rule out the possibility of such bias, the result mentioned above, in which the control group actually outperformed the experimental group on questions 4 through 8, argues against the suggestion that the librarians taught the experimental classes in an overall more enthusiastic and effective manner.

Probably the greatest limitation noted in our results was the relatively poor post-test performance on questions 1 and 2, even with the visual/experimental group receiving the nonlinguistic treatment. While the visual/experimental group showed significantly better improvement than the nonvisual/control group on these two questions, their performance was still, in an absolute sense, quite poor: for question 1, the post-test mean score for the visual/experimental group was only .13 (on a scale of 0–1), and for question 2, the post-test mean score was only .30. This is clearly a less than satisfactory level of performance and leaves room for further improvement. One possible explanation of these results is that these concepts (the library catalog and Boolean search terms) are so foreign to first-year students that expanded instruction targeting them is warranted. Evidence for this interpretation can be seen in the extremely low pre-test scores for question 1 (.06 for the combined experimental and control groups) and question 2 (.16 for the combined groups), which indicated a far lower initial understanding of the concepts in questions 1 and 2, as compared to the concepts in questions 3 through 8.
Conclusion

In general, the results of this study confirmed our initial predictions. Overall, the library instruction sessions had a significant positive effect on the students' library skills in both the visual/experimental and nonvisual/control groups. Furthermore, the use of nonlinguistic representations (graphic images, physical models, and kinesthetic activities) significantly improved performance in the visual/experimental group as compared to the nonvisual/control group, which relied solely on linguistic presentations. Further consideration of the results suggested that they were not likely to be due to design flaws resulting in nonequivalent groups and that these results were strong enough to be evident even when some variation in procedures probably contributed to additional variation within the data. Even so, to reduce variation in the future, pre- and post-test data collection should be standardized so that time lapses between the library session and post-test completion are minimized. This should ensure even stronger results.

It was noted that, for some library concepts that were already well understood by the first-year students, a "ceiling effect" prevented robust indications of improvement using either linguistic or nonlinguistic techniques, suggesting that further studies focus mainly on concepts that are not as familiar to these students. On the other hand, when students' baseline knowledge is minimal, it seems likely that expanded instruction will be critical to achieving satisfactory levels of final comprehension. Perhaps, in these cases, including a graded assignment in the SLU 100 course linked to the skills acquired during the library instruction session would motivate students to pay greater attention and would provide them with an opportunity to practice their newly acquired skills.

One additional consideration has recently arisen. With the increased number of international students attending Saint Leo University, SLU 100 now has several course sections designated solely for international students.
It might be important to measure whether the use of nonlinguistic representations is a particularly effective instructional strategy for this special population, given that the majority of these students speak English as a second language. With these suggested improvements and continued use of the pre-test/post-test assessment strategy to test their effectiveness, we hope to see enhanced learning in students' acquisition of information literacy skills in the future.

APPENDIX A. SLU 100 Pre- and Post-test

1. Which of the following items is not included in the library catalog?
a. Videos
b. eBooks
c. Magazine/journal articles
d. Table of contents for books

2. Which of the searches listed below should give you the greatest number of results when searching in a database?
a. Civil Rights AND United States
b. Civil Rights OR United States
c. Civil Rights NOT United States

3. Which of the following is not one of the criteria for evaluating websites?
a. Website appearance
b. Author's credentials
c. Currency of information
d. Purpose of information

4. To access library databases from outside the library, which of the following must you have?
a. A Saint Leo username and password
b. A computer with Internet access
c. A Saint Leo ID card
d. All of the above
e. Options A and B

5. Look at the following citation example. What best identifies the item in BOLD text?
Glaser, Karen. "Underwater: Landscapes of Primordial Worlds." Orion May/June (2010): 24–33. Print.
a. Journal Title
b. Article Title
c. Series Title
d. Publisher

6. When searching for an article in a database, what does the term "full-text article" mean?
a. The whole article is available in a print journal or magazine
b. The full abstract of an article is available in the database
c. The entire text of the article is available electronically
7. If your professor requires you to use "peer-reviewed" articles for a research paper, this means that the articles have been:
a. Reviewed by other students in the class
b. Chosen by the professor
c. Quoted in the textbook
d. Reviewed by experts in the field

8. Where in the library do you go to check out books?
a. Circulation desk
b. Reference desk
c. Technical services
d. Librarian's office

Notes

1. Saint Leo University Undergraduate Academic Catalog (2011–2012).
2. Debra Gilchrist, "Assessment as Learning" (session presented at the Association of College and Research Libraries Institute on Information Literacy Program-Track Immersion, St. Petersburg, Florida, July 26–30, 2009).
3. Robert J. Marzano, Debra J. Pickering, and Jane E. Pollock, Classroom Instruction that Works: Research-Based Strategies for Increasing Student Achievement (Alexandria, Va.: Association for Supervision and Curriculum Development, 2001), 4–7.
4. Elana Karshmer and Jacalyn E. Bryan, "Building a First-Year Information Literacy Experience: Integrating Best Practices in Education and ACRL IL Competency Standards for Higher Education," Journal of Academic Librarianship 37 (May 2011): 255–66.
5. Chris A. Portmann and Adrienne Julius Roush, "Assessing the Effects of Library Instruction," Journal of Academic Librarianship 30 (Nov. 2004): 461–65.
6. Ibid., 461–63.
7. Elizabeth Choinski and Michelle Emanuel, "The One-Minute Paper and the One-Hour Class: Outcomes Assessment for One-Shot Library Instruction," Reference Services Review 34 (2005): 148–55.
8. Ibid., 153–54.
9. Bonnie J.M. Swoger, "Closing the Assessment Loop Using Pre- and Post-Assessment," Reference Services Review 39 (2011): 244–59.
10. Ibid., 247–48.
11. Ibid., 244.
12. Ibid., 249–52.
13. Kathy Brabec, Kimberly Fisher, and Howard Pitler, "Building Better Instruction: How Technology Supports Nine Research-Proven Instructional Strategies," Learning & Leading with Technology 31 (2004): 6–11.