Seven questions for assessment planning
A discussion starter

Mary O'Kelly

Mary O'Kelly is head of instructional services at Grand Valley State University in Allendale, Michigan, email: okellym@gvsu.edu
© 2015 Mary O'Kelly

Do a quick Google search for assessment cycle or evaluation cycle and you'll find thousands of variations. It's easy for a newly emerging culture of assessment to stall as the participants agonize over which is the right way, which is the most thorough way, which is the perfect way to evaluate an instruction program. I've been through many assessment processes and have experienced those long pauses firsthand. I have come to realize that the first and most important step is to simply have a conversation.

Yes, there are rigorous assessment projects that require exceptionally detailed methods and close involvement with the institutional review board, and there are myriad models that use language similar to these questions and to each other. Yet so much of building and measuring an instruction program starts with everyone on the team, regardless of their level of assessment expertise, knowing what we're doing and why and being able to clearly articulate it.

Instruction and assessment scholars have written about the critical importance of collaboration in building a culture of assessment, with a common emphasis on collegial, transparent processes.1 Whether leading a team of experienced evaluators or building a new assessment project from the ground up, careful reflection up front can facilitate smoother communication down the road.

Seven questions

The more assessment projects I participated in, the more I sought out examples of cycles, guiding questions, processes, and best practices. I found myself returning to the fundamental questions of who, what, when, where, why, how, and how well, and eventually those morphed into these seven questions.

These seven questions can be a discussion starter or a thought exercise.2 The intent is to walk through the questions and document the answers. The resulting statement can then be used in multiple ways: as a management update, in a self-study report, in an assessment report, or as a small part of a larger initiative.

1) Responsibility: Who is taking responsibility and why?
2) Questions: What questions do we have about our own program and why?
3) Data: What information do we need to answer those questions and why?
4) Method: How will we get it and why?
5) Results: Who will write the answers and why?
6) Communication: Who needs to see the results and why?
7) Cycle: What is our timeline for changes and why?

The following answers demonstrate a short form of the process I used when planning a three-year assessment of student retention and library instruction at Grand Valley State University (GVSU). The answers are easy and focused.

Who is taking responsibility and why?
At GVSU, the head of instructional services has primary responsibility for evaluating the instruction program, although many others are involved.

What questions do we have about our own program and why?
For this project we want to know whether the library is a factor in student retention because retention is a primary focus at our institution.

What information do we need to answer those questions and why?
In order to answer that question, we need a list of all the classes that have had library instruction this year so that we know which students we reached.

How will we get it and why?
With the cooperation of all instruction librarians, who have the task of logging every instruction session, we will collect data using LibAnalytics.

Who will write the answers and why?
We don't have access to Banner3 data and we don't have any library statisticians, so we have built a relationship with experts in our Institutional Analysis department, who will analyze the results for us.

Who needs to see the results and why?
We will share the analysis with the entire Research and Instruction team, plus the Library Council (which includes the dean), the entire library staff, university faculty, and finally the library community in order to communicate our contribution (if any) to student retention.

What is our timeline for changes and why?
We will repeat this cycle annually at the end of the academic year so that we can show trends over time and so that we can track changes to our program.

Combine those answers into one cohesive statement, however, and we have a powerful summary of our actions and intent:

At GVSU, the head of instructional services has primary responsibility for evaluating the instruction program, although many others are involved. For this project we want to know whether the library is a factor in student retention because retention is a primary focus at our institution. In order to answer that question, we need a list of all the classes that have had library instruction this year so that we know which students we reached. With the cooperation of all instruction librarians, who have the task of logging every instruction session, we will collect data using LibAnalytics. We don't have access to Banner data and we don't have any library statisticians, so we have built a relationship with experts in our Institutional Analysis department, who will analyze the results for us. We will share the analysis with the entire Research and Instruction team, plus the Library Council (which includes the dean), the entire library staff, university faculty, and finally the library community in order to communicate our contribution (if any) to student retention. We will repeat this cycle annually at the end of the academic year so that we can show trends over time and so that we can track changes to our program.

With a statement like that at the ready for each project, large or small, an assessment team would have shared language and nonassessment staff would have a clear understanding of the purpose and process.
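Because the statement is simply the seven answers laid end to end, a team can capture them in whatever tool it already uses. As a purely illustrative sketch (my own, not part of the GVSU process; the question labels follow the list above, but the plan_summary() helper and the answer text are hypothetical), a few lines of Python can hold the answers, confirm that none are missing, and join them into a single summary:

# Minimal sketch: keep the seven answers together and join them into one
# cohesive planning statement. Question labels follow the article's list;
# the answer text and the plan_summary() helper are hypothetical.

SEVEN_QUESTIONS = [
    ("Responsibility", "Who is taking responsibility and why?"),
    ("Questions", "What questions do we have about our own program and why?"),
    ("Data", "What information do we need to answer those questions and why?"),
    ("Method", "How will we get it and why?"),
    ("Results", "Who will write the answers and why?"),
    ("Communication", "Who needs to see the results and why?"),
    ("Cycle", "What is our timeline for changes and why?"),
]

def plan_summary(answers):
    """Combine the seven answers, in the order above, into one statement."""
    missing = [label for label, _ in SEVEN_QUESTIONS if label not in answers]
    if missing:
        raise ValueError("Unanswered questions: " + ", ".join(missing))
    return " ".join(answers[label].strip() for label, _ in SEVEN_QUESTIONS)

# Hypothetical answers, abbreviated from the kind of text shown above.
answers = {
    "Responsibility": "The head of instructional services leads this assessment.",
    "Questions": "We want to know whether the library is a factor in student retention.",
    "Data": "We need a list of all classes that had library instruction this year.",
    "Method": "Instruction librarians will log every session in our statistics tool.",
    "Results": "Our institutional research office will analyze the results for us.",
    "Communication": "We will share the analysis with library and campus stakeholders.",
    "Cycle": "We will repeat the cycle annually at the end of the academic year.",
}

print(plan_summary(answers))

A shared document or spreadsheet works just as well; the point is that every answer, and its why, lives in one place the whole team can see.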
The seven questions in action

These questions are easy to adapt to microprojects and large-scale assessment projects alike. Every year I ask our Institutional Analysis department 23 questions about instruction, ranging from simple (How many students did we reach in direct face-to-face library instruction?) to complex (Is there an intensity effect on GPA and retention of students who saw a librarian in class multiple times?).

When I first started planning the annual assessment of library instruction, I used an early variation of these seven questions when talking with my colleagues. I was new in the role and needed a manageable way to get started, so I focused on what we knew, what we wanted to know, and what we hadn't yet learned. What did we want to know about our instruction program? And what assumptions needed to be challenged?

We had assumed, for example, that the library reached most freshmen through Writing 150 (an introductory composition class). The class, or its equivalent, is required, so it seemed natural to believe that it was the library's main point of contact for freshmen. Shortly after I started as head of instructional services, I had an informal conversation with our first-year initiatives coordinator (also a new position at the time) about Writing 150. We both wondered just how many of those students met a librarian in class. We identified our question, figured out what data we needed to answer the question, and listed the people who needed to know what we found. We were shocked to learn that we reached only 33% of freshmen via Writing 150, due to transfer credit, students testing out of the class, and alternatives offered by the honors college and other specialty programs. With just a few guiding questions we were able to articulate a plan, solicit help, and communicate what we learned.
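The 33% figure itself came from straightforward descriptive counting rather than any elaborate statistics. As a rough sketch only (my own illustration; the file names, column names, and data are hypothetical, not the actual GVSU workflow), the calculation amounts to comparing the students enrolled in sections that received library instruction against the full freshman cohort:

# Rough sketch of the descriptive "reach" calculation: what share of the
# freshman cohort was enrolled in a section that received library
# instruction? File and column names here are hypothetical.
import csv

def load_ids(path, column):
    """Read one column of student IDs from a CSV file into a set."""
    with open(path, newline="") as f:
        return {row[column] for row in csv.DictReader(f)}

freshmen = load_ids("freshman_cohort.csv", "student_id")
instructed = load_ids("writing150_library_sessions.csv", "student_id")

reached = freshmen & instructed  # freshmen who met a librarian in class
rate = len(reached) / len(freshmen) if freshmen else 0.0
print("Reached {} of {} freshmen ({:.0%})".format(len(reached), len(freshmen), rate))

Any spreadsheet or statistics package can do the same comparison; what mattered was that the question, and the data needed to answer it, were defined before the counting began.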
I also have used these questions as a tool for myself. Sometimes as I start a project, especially if it's in unfamiliar territory, I like to sit quietly and write out a starter plan. Others may join me later, but my initial priming helps me stay focused on the outcome. Recently we completed SAILS on our campus. SAILS is a well-established instrument, so rather than focus on methods I instead focused on the reasons for using SAILS and how our results would be communicated. After putting together a team of volunteers to work on implementation and evaluation of the results, we then answered the seven questions as a group. Our conversation was fluid and the outcome wasn't as linear as the question list implies, but we still ended up with a summary of our process, a plan, and a timeline. It was a good way to share language and expectations about a big project.

Those examples also illustrate how discussion starters can be used for gathering relatively simple descriptive data, the kind that can be valuable when making decisions about a program but might not necessarily fit into the category of "assessing student learning." Planning and communication are important regardless of the project's magnitude.

I participated in the first cohort of ACRL's Assessment in Action program4 and have been involved in many assessment activities at our university since then. Resources such as Megan Oakleaf and Neal Kaske's "Guiding Questions for Assessing Information Literacy in Higher Education"5 provide similar question-based options and have helped me expand my assessment vocabulary with more nuanced language and deeper investigation of process. However, I still find myself going back to these seven questions as a flexible way to lay a foundation for just about any data-gathering project. As libraries are feeling increasing pressure to carefully document and communicate their value using sophisticated measures, having a ready-to-use process that is easily accessible to all staff can contribute to the development of a healthy culture of assessment.

Notes

1. Two articles that detail these kinds of processes are Meredith G. Farkas and Lisa J. Hinchliffe, "Library Faculty and Instructional Assessment: Creating a Culture of Assessment through the High Performance Programming Model of Organizational Transformation," Collaborative Librarianship 5, no. 3 (2013): 177–88, and Debra Gilchrist, "A Twenty Year Path: Learning About Assessment; Learning From Assessment," Communications in Information Literacy 3, no. 2 (2009): 70–79.

2. The seven questions were first presented at the Michigan Library Association Academic Libraries 2014 Conference, May 2014.

3. Banner is an enterprise resource planning system used in higher education to manage student information, such as course registration, grades, major, transcripts, and advising.

4. "Assessment in Action: Academic Libraries and Student Success" is undertaken by ACRL in partnership with the Association for Institutional Research and the Association of Public and Land-grant Universities. The program, a cornerstone of ACRL's Value of Academic Libraries initiative, is made possible by the Institute of Museum and Library Services. For more information, see http://www.ala.org/acrl/AiA.
5. Megan Oakleaf and Neal Kaske, "Guiding Questions for Assessing Information Literacy in Higher Education," portal: Libraries and the Academy 9, no. 2 (2009): 273–86.