Assessment and evaluation: What do the terms really mean?

Peter Hernon and Robert E. Dugan

Peter Hernon is professor at the Simmons College Graduate School of Library and Information Science, e-mail: peter.hernon@simmons.edu, and Robert E. Dugan is director of Sawyer Library at Suffolk University, e-mail: rdugan@suffolk.edu. © 2009 Peter Hernon and Robert E. Dugan

In various settings, academic librarians are discussing assessment and are eager to play a leadership role in their organizations and institutions. Much of what they present as assessment, however, in fact appears to be evaluation. Complicating matters, many definitions of assessment actually incorporate the word evaluate. Assessment is a type of evaluation that gathers evidence, perhaps through the application of evaluation research. The purpose of either assessment or evaluation is accountability or service/program improvement, but the two terms treat accountability and improvement differently.

Assessment

Stakeholders interested in higher education, such as national, regional, and program accrediting organizations, define institutional effectiveness as the ability of an institution to meet its stated mission and "to engage a campus community collectively in a systematic and continuing process to create shared learning goals and to enhance learning."1 They view such effectiveness in terms of the "contribution that each of the institution's programs and services makes toward achieving the goals of the institution as a whole."2 They see institutional effectiveness as creating a culture that addresses:

• infrastructure support (e.g., sufficient human, physical, and financial resources to support educational programs and to facilitate student achievement of learning goals);
• output (how much is accomplished);
• student outcomes (public accountability); and
• student learning outcomes (improvements of academic quality).

Student outcomes comprise metrics that characterize effectiveness as a function of, for instance, graduation, persistence, retention, admissions yield, and employment rates. Student outcomes are also concerned with issues of affordability. Such metrics might be used for drawing comparisons (benchmarking) with peer institutions, as the sketch below illustrates.
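As a minimal sketch of what such benchmarking involves, the following computes a first-to-second-year retention rate and compares it with peer values. The cohort figures, rates, and peer institutions are invented for illustration only; they are not drawn from any actual data set or accrediting body.

```python
# Illustrative only: the cohort figures and peer rates below are hypothetical.

def retention_rate(entering_cohort: int, returned_next_fall: int) -> float:
    """First-to-second-year retention: the share of an entering cohort
    that re-enrolls the following fall."""
    return returned_next_fall / entering_cohort

# Hypothetical home institution: 996 of 1,200 first-year students returned.
our_rate = retention_rate(entering_cohort=1200, returned_next_fall=996)

# Hypothetical peer institutions for comparison (benchmarking).
peer_rates = {"Peer A": 0.79, "Peer B": 0.85, "Peer C": 0.81}

for peer, rate in sorted(peer_rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{peer}: {rate:.0%} (our gap: {our_rate - rate:+.1%})")
```

A comparison of this kind speaks to accountability; by itself it says nothing about what students have learned, which is the province of student learning outcomes.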
Student learning outcomes, which represent a type of impact assessment, apply to learning and are statements of what students are expected to know and be able to do by the time they graduate. Further, they provide a basis to view and to improve learning.

Among the four areas of effectiveness, accrediting organizations view student learning outcomes as the most important. This does not mean that the other areas are unimportant; other stakeholders might have different priorities.

Assessment centers on learning goals put forth at the course, program, discipline, division, or institutional level. In their courses, faculty should observe and, as feasible, measure student learning and try to enrich that experience.3 Accrediting organizations and some other stakeholders, on the other hand, are most interested in the program or institutional levels. Assessment determines how well programs accomplish their educational mission, demonstrate learning over the duration of a program, and use the evidence gathered to improve teaching and learning. Learning is defined as "not only knowledge leading to understanding but also abilities, habits of mind, ways of knowing, attitudes, values, and other dispositions that an institution and its programs and services assert they develop."4

Those engaged in teaching students, be they teaching faculty or librarians, need to agree on learning goals for all students in a program, on how to gather relevant evidence of learning, and on how to apply that evidence to any part of the program in need of improvement. Assessment is therefore based on feedback that students provide, which can be used to analyze and correct a program's educational planning and execution. That feedback might rest on artifacts that enable a program to see actual evidence of learning (direct methods) or on perceptions about what students think they learned and about their expectations (indirect methods).

The Middle States Commission on Higher Education, which maintains that learning can occur outside the classroom, encourages learning goals that integrate "curricular and co-curricular facets of the institution."5 Where information literacy is a co-curricular facet, it requires linkage to a program's learning goals. The Middle States and other accrediting organizations emphasize the importance of integrating assessment into the institutional culture and of developing assessment plans that seek to achieve learning goals.

A few good sources

The Middle States Commission on Higher Education has produced two handbooks, Student Learning Assessment: Options and Resources (2nd edition, 2007) and Developing Research & Communication Skills: Guidelines for Information Literacy in the Curriculum (2003). For PDF files, see www.msche.org/publications_view.asp?idPublicationType=5&txtPublicationType=Guidelines+for+Institutional+Improvement.

Linda Suskie wrote Assessing Student Learning (Anker Publishing, 2004), which complements the first handbook. She links student learning assessment to the planning process and to different methods for gathering and applying evidence about student performance.

The Council for the Advancement of Standards in Higher Education (CAS) promotes standards in student affairs, student services, and student development programs. It offers various resources related to student learning assessment (www.cas.edu).

Stylus Publishing has produced a number of works related to assessment. Examples include Assessing for Learning (Peggy Maki, 2004) as well as works on rubrics and e-portfolios.

Libraries Unlimited has published our Outcomes Assessment in Higher Education (2004) and Revisiting Outcomes Assessment in Higher Education (2006), as well as Joe Matthews's Library Assessment in Higher Education (2007) and The Evaluation and Measurement of Library Services (2007).

The Government Accountability Office has a series on evaluation research and methodology that covers case study evaluations, designing evaluations, quantitative data analysis, performance measurement and evaluation, and more (www.gao.gov/special.pubs/erm.html).

Finally, Sage Publications has an excellent book series on research methods, statistics, and evaluation (www.sagepub.com/home.nav?display=catcatLevel1=&prodTypes=any&level1=Course1007&currTree=Courses&_requestid=1570471).

Evaluation

Evaluation at the course level involves judging the extent to which students grasp course content. Such judgments involve the assignment of grades; "grades are determined by students' ability to master the content of a course, not by any larger assessment of what has changed in the students' understanding, attitudes, or perspectives."6
In an organizational setting, evaluation provides evidence to distinguish between effective/efficient and ineffective/inefficient programs, services, and policies, and to address questions such as:

• What improvements in a program, service, or policy might result in continuous quality improvement and better accountability?
• How well does a program, service, or policy reach its target population and meet the group's information needs and expectations?
• Is the program, service, or policy being implemented in the ways envisioned?
• Is the program, service, or policy effective and efficient?

Evaluation is also a political and managerial activity, one that provides insights for making policy decisions and resource allocations.

More on assessment

Institutions often deploy a hierarchical framework with horizontal and vertical components to plan, gather, analyze, and report information, and to make changes to institutional and educational learning goals, as part of their effort to demonstrate accountability through assessment.7 When educators craft goals that reflect the taxonomy of learning developed by Benjamin S. Bloom,8 they "are challenged to consider and state explicitly what impact programs should have on students"9 and on their ability to analyze, apply, comprehend, demonstrate, and synthesize.

Assessment in the form of student learning outcomes is formative and seeks to improve the educational experience for current and future students. At the course level, individual faculty might use the one-minute paper, which asks students to reflect on what was covered in class that day and to suggest aspects that they still do not understand. At the start of the next class, those aspects are revisited. Assessment at the course level, therefore, is formative, and evaluation is summative.

When viewed at the program level, all of those engaged in formal classroom teaching might agree on a scoring rubric that divides a learning outcome into categories varying along a continuum, for the purpose of improving feedback on student performance and identifying areas for student improvement. That continuum presents levels of achievement in some detail, and a rubric might be used across programs and perhaps for an entire discipline.
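A minimal sketch of how such a rubric might be represented and applied follows. The outcome statement, the four achievement levels, and the scores are invented for illustration; they do not come from any published rubric.

```python
# Illustrative only: the outcome, levels, and scores below are hypothetical.

RUBRIC_OUTCOME = "Evaluates information sources critically"

# The continuum: each category describes a level of achievement in some detail.
LEVELS = {
    1: "Beginning: accepts sources at face value",
    2: "Developing: distinguishes scholarly from popular sources",
    3: "Proficient: weighs authority, currency, and bias",
    4: "Advanced: justifies source selection for a specific purpose",
}

def summarize(scores: list[int]) -> None:
    """Show how students are distributed across the levels; this is the
    feedback a program uses to identify areas for improvement."""
    print(RUBRIC_OUTCOME)
    for level, description in LEVELS.items():
        print(f"  Level {level} ({scores.count(level)} students): {description}")

# Levels assigned by instructors to ten hypothetical student artifacts.
summarize([2, 3, 3, 1, 4, 2, 3, 2, 4, 3])
```

Because the levels are spelled out in advance, different instructors, and even different programs, can apply the same rubric and compare the resulting distributions.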
Finally, at the institutional level, evidence is gathered from a sample of students to reflect the student body in general and to demonstrate whether there is a progression in student learning.

Conclusion

Understanding the language differentiating assessment and evaluation is necessary to ensure appropriate planning, measurement, and analysis, as well as the preparation of a meaningful self-study report for the accreditation organization. Advancing student learning is a core purpose of higher education, and assessment examines educational quality in terms of how well institutions achieve their declared mission relative to student learning. Everyone involved directly in teaching students has a stake in shaping program and institutional plans on assessment.

As institutional researchers and faculty evaluate different methodologies for gathering evidence and consider research designs involving the use of sampling (e.g., probability and nonprobability), experimental designs, and inferential statistics, librarians should join them in making sure the library is an integral educational partner. To do so, librarians need to understand, and be able to conduct, evaluation research as an inquiry process.

Complicating matters, schools of library and information science often do not require courses on research and evaluation, let alone assessment, as part of the core curriculum. Students might encounter coverage of evaluation, and perhaps even assessment, in an elective course. Professional associations and libraries themselves often do not offer workshops on research as a process of formal inquiry or on evaluation as applied to assessment. To us, this is a failure of the profession, one that can and should be rectified through joint efforts.

Notes

1. Student Learning Assessment: Options and Resources, 2nd ed. (Philadelphia, PA: Middle States Commission on Higher Education, 2007), 5.

2. Ibid., 75.

3. Measurable is not necessarily the same as countable.

4. Peggy L. Maki, Assessing for Learning (Sterling, VA: Stylus, 2004), 3.

5. Student Learning Assessment, 6.

6. Richard P. Keeling, Andrew F. Wall, Ric Underhile, and Gwendolyn J. Dungy, Assessment Reconsidered: Institutional Effectiveness for Student Success (International Center for Student Success and Institutional Accountability; distributed by the National Association of Student Personnel Administrators, 2008), 9.

7. For a depiction of this complex framework, see Robert E. Dugan and Peter Hernon, "Institutional Mission-centered Student Learning," in Revisiting Outcomes Assessment in Higher Education, edited by Peter Hernon, Robert E. Dugan, and Candy Schwartz (Westport, CT: Libraries Unlimited, 2006), 6.

8. Benjamin S. Bloom, Taxonomy of Educational Objectives, Handbook I: The Cognitive Domain (New York: David McKay Co., 1956); Lorin W. Anderson and David R. Krathwohl, A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives (New York: Longman, 2001).

9. Keeling et al., Assessment Reconsidered, 25.