Towards a Critical Assessment Practice
“We worry about disclosing data, but often do not consider the implications of creating data.” -Jeffrey Alan Johnson (2018, p. vi)
“A critical assessment practice starts with mindfulness.” -Sonia DeLuca Fernández (2015, p. 5)
In Brief
This article explores how librarians might meaningfully engage critical perspectives to interrogate the structures of power and methodologies that both motivate and facilitate assessment work in academic libraries. The authors aim to expand the current discussion of assessment in order to recognize and more effectively address issues of inclusion and inequality in this work. The article considers critical approaches employed in LIS, student affairs assessment, institutional and educational research, feminist and indigenous methods, and critical data studies, and draws on questions posed by these fields to imagine what inclusive assessment practices might look like for our own institutional contexts and to engage librarians in nuanced critical discussions about every stage of the assessment cycle.
by Ebony Magnus, Maggie Faber, and Jackie Belanger
Introduction
The authors of this article each came to their questions about library assessment, and what it might mean to be a critical assessment practitioner, through their individual experiences, positions, and identities. We want to share two stories illustrating our paths into this work. We do this not only to foreground how our own identities are connected to the questions we ask, but also to highlight some of the challenges that lie at the intersection of academic library assessment and critical librarianship.
Ebony: I am a woman of color. My racialized identity is not a prerequisite of my interest in critical discourse; in fact, growing up as a light-skinned, mixed-race person, I was, for some time, blissfully unaware (even when others were not) of my difference from the majority white populations around me. I have been fortunate to participate in LIS scholarship and training programs designed and reserved for people from traditionally underrepresented groups. Through these programs, I learned about the extent of the overrepresentation of dominant identities in this field, and its impact on library workers and library patrons from marginalized communities. Yet, I kept this knowledge at a remove from my work in library assessment—a specialization I was drawn to because of the perception that it is grounded in evidence (and who can argue with evidence?). Over time, however, my consciousness as a person occupying certain margins has come up against my seemingly depoliticized assessment work, forcing me to acknowledge my own privilege and agency in it. I increasingly feel the irresponsibility of not making explicit the systemic forces that underpin decision-making in libraries and post-secondary institutions. No single event or epiphany occurred, but I would be remiss if I did not acknowledge that this sense of personal-professional tension gained momentum in the last 3-4 years (during which I was living in the United States before returning to Canada).
Jackie and Maggie: We attended an in-house professional development session focused on inclusive practices in libraries. At the session, participants were asked to consider how they might bring a critical perspective to a variety of areas, including reference, spaces, and collections. Participants were then prompted to consider how they might assess efforts to make library services, spaces, and resources more inclusive. What was striking in this conversation was that assessment and data gathering about students were treated in largely instrumental terms, consistently positioned—and not by us, the assessment librarians—as neutral, somehow outside of the questions about power, privilege, and inequity that shape all our work. This is not a critique of those colleagues (many of whom are engaged in critical librarianship and deeply committed to social justice in libraries): instead, this caused us to reflect on the misalignment between our values and the assessment practices and attitudes we had helped to foster, intentionally or not, in our own library’s culture. As white, middle-class, cisgender women, we had begun to think about critical practice around particular issues, such as how we recruit and represent student voices in our work, but had not felt the urgency to recognize and communicate how every aspect of our assessment work (and how we framed that work to ourselves and colleagues) was shaped by our identities, experiences, and institutional positions.
In conversation about our experiences, we discovered that we were grappling with similar tensions in our work and searching for ways to align our practices more fully and authentically with a commitment to equity, inclusion, and social justice. We are passionate about the possibilities assessment can offer in enabling libraries to support students and faculty; yet we understand that our assessment work is often deeply problematic in its alignment with a neoliberal culture of accountability and consumerism in higher education.1 While we find the critiques of assessment (such as the privileging of quantitative approaches) compelling, many of these critiques discuss assessment in ways that could be more nuanced and reflective of the complex questions we wrestle with in undertaking this work. We feel expectations from our colleagues, institutions, and profession to engage in practical assessments that result in demonstrable improvements for our students and faculty, yet often struggle to balance these expectations with the need to interrogate our practices and explore our own—and others’—unquestioned assumptions about assessment.
This article explores these tensions and how a critical lens can be brought to library assessment. It represents our efforts to engage in a reflective and intentional practice, heeding Emily Ford’s call to consider “what do we do and why do we do it?” (Ford, 2012). We don’t claim to have answers, or to have implemented all of these ideas in our day-to-day work. Instead, we draw upon scholarship in LIS, student affairs assessment, institutional and educational research, feminist and indigenous methods, and critical data studies to pose questions that help us imagine how we might do our work in different ways. In line with Selinda Berg’s recent work calling on librarians engaged in quantitative research and those involved in critical librarianship to explore ways they might partner productively in working towards greater social justice, we hope that this article can act as a “request for future conversations” (Berg, 2018, p. 234). In posing our questions, we hope to center critical approaches more firmly in assessment work and to contribute the perspective of assessment practitioners to conversations about assessment in critical librarianship.
We begin by framing our discussion in terms of recent critiques of library assessment and some responses to those critiques. We then explore how we might bring a critical approach to all aspects of the assessment cycle: from how we formulate questions, select methods, analyze and communicate data, to how we make decisions. We close by offering a series of questions we have been using to reflect upon assessment activities at our own institutions.
Critiques & Responses
The term “assessment” in higher education (and thus academic libraries) is often used to indicate a wide range of activities with varying purposes, stakeholders, and approaches, such as accountability, demonstrating value and impact, and improvement. As Wall, Hursh, and Rodgers note, “There is no common definition for assessment in higher education. Rather, any definition grows out of social context. For some, assessment is about examining student learning, for others, examining programs, and still others, determining institutional effectiveness” (Wall, Hursh, & Rodgers, 2014, p. 6). The literature on library assessment also points to the myriad activities that are often bundled under the broad term “assessment” (Hufford, 2013) and the complexity of motivations for undertaking library assessments (Doucette, 2017). Much of this literature focuses on “practical” aspects of assessment (to use one of the themes from both the Library Assessment Conference and the 2017 Canadian Library Assessment Workshop)—such as discussions of specific methods, techniques or studies—rather than engaging a critical perspective focused on recognizing and challenging systems of power and privilege that pervade assessment work (Garcia, 2015; Nicholson, 2017).
Since the publication of the Value of Academic Libraries Report in 2010, a dominant strand of discussion about library assessment, and much activity, has centered on one aspect of this work: how libraries can demonstrate their value and impact to a variety of institutional stakeholders such as students, faculty, administrators, and employers. Meredith Farkas notes the tendency within library discourse around assessment to conflate value research and assessment generally in the wake of the Value report (Farkas, 2013, p. 5; see also Nicholson, 2017, pp. 2-3; Gardner & Halpern, 2016, p. 42). Farkas argues that assessment is primarily about improvement, while demonstrating value is focused on accountability and largely underpinned by quantitative approaches centered on elements such as showing correlations between use of the library and student success (Farkas, 2013, p. 4). Critical responses to this particular strand of assessment focused on value (and the measures and approaches it is seen to privilege) highlight how it functions to institutionalize “neoliberal ideology within librarianship” (Seale, 2013, p. 42), with the push to demonstrate library value “normaliz[ing] corporate value” and “capitalist language” in higher education and academic libraries (Nicholson, 2017, pp. 3-4). As Nicholson notes, quoting Eisenhower and Smith, “in the current climate of accountability and austerity, libraries have become veritably ‘obsessed with quantitative assessment, student satisfaction, outcomes, and consumerist attitudes towards learning’” (Nicholson, 2015, p. 331).
Libraries’ reliance on surveys generally, and on widely used assessment tools such as the service quality survey LibQUAL+ in particular, reinforces the notion that library assessment often focuses heavily on quantitative tools designed largely for accountability, to answer questions of “how many” and “how often” rather than “why” (Halpern, Eaker, Jackson, & Bouquin, 2015), and to gauge “perceptions of customer service” that position library patrons as consumers (Lilburn, 2017, p. 101). An over-emphasis on assessment for immediate improvement and “practical” assessment is also critiqued for often adopting the language and mindset of the market: in this model, students become customers or consumers whose individual, immediate needs must be satisfied in order to retain library market value in an increasingly competitive educational landscape (Quinn, 2000; Nicholson, 2015 & 2017; Fister, 2012).
There is a significant strand of scholarship engaged with exploring alternatives to this version of assessment, with works posing and responding to questions similar to Karen Nicholson’s: “how might we engage critically with quality assurance and assessment to better align them with our professional values and the academic mission of the university?” (2017, p. 3). A key theme of recent work in this area focuses on critical library pedagogy and information literacy assessment (Accardi, 2010 & 2013; Gardner & Halpern, 2016; Gammons, 2016). These authors argue against reducing student learning assessment to simple quantitative measures and seek ways to engage in assessments that disrupt “traditional power relationships between student and teacher” and “privilege student voices and expose oppressive dominant culture so that society can be transformed” (Accardi, 2013, p. 79). The emphasis on alternative approaches to understanding the complexity of student learning is echoed in library assessment more broadly in calls for the use of qualitative methods such as ethnography, which can provide an “antidote for the problematic reliance in higher education (including libraries) on analytics and quantitative measures of institutional effectiveness,” as well as assessment approaches that are often focused solely on short-term data-gathering projects and immediate improvements (Lanclos & Asher, 2016, p. 2). Lanclos and Asher call for a “fully engaged ethnography, wherein libraries can actually be thought about and experienced differently, not just be re-arranged” (p. 2).
These critiques and responses are important and necessary. We agree with many of them, even while we acknowledge that they sometimes do not fully engage with the complexity of assessment work as we recognize it, which involves multiple decisions about the purposes of assessment, the questions we ask, the methods we choose, the user communities we engage with (and how we engage with them), the ways we analyze and use data, and the choices we make as a result of the assessment. This article aims to address this complexity by bringing a critical perspective to various aspects of the assessment cycle. Our focus here is on assessment as a set of practices that help libraries to understand and engage meaningfully with our diverse user communities. Lilburn calls for additional work to explore how other assessment methods beyond customer satisfaction surveys “might provide information that is more helpful and more meaningful to the work of librarians and to the advancement of the library as an academic unit devoted not to customers, but to students, scholars, researchers, and citizens” (2017, pp. 104-05). We argue that it is not just “other forms of assessment” that might be valuable—or a shift to alternative methods or how those methods are deployed—but that we must interrogate all aspects of assessment in order to begin to imagine what critical assessment might look like.
Critical Assessment
Practitioners and scholars in the areas of student affairs assessment and Institutional Research (IR) have explored in depth what it means to practice assessment in ways that are attentive to power dynamics and questions of equity and inclusivity. One thread in IR, for example, focuses on what it means to be a “quantitative criticalist” and whether it is possible to bring the critical theoretical perspectives largely seen in qualitative research to bear productively on the quantitative work of IR professionals (Stage & Wells, 2014, p. 2). Being a quantitative criticalist involves using “data to represent educational processes and outcomes on a large scale to reveal inequities and to identify social or institutional perpetuation of systematic inequities” and questioning “the models, measures, and analytic practices of quantitative research in order to offer competing models, measures, and analytic practices that better describe experiences of those who have not been adequately represented” (Stage, 2007, p. 10). Since this concept was proposed by Stage in 2007, it has been taken up in two subsequent volumes of New Directions for Institutional Research in 2014 and 2015, attesting to the robustness of the discussions on the topic. Critical assessment in the student affairs field has been articulated along two strands: the first, similar to the quantitative criticalist work proposed by Stage, explores how assessment might be mobilized for a social justice and equity agenda; the second, related strand brings a critical theoretical lens to various assessment practices (DeLuca Fernández, 2015).
Writing in the journal Research & Practice in Assessment, Wall, Hursh, and Rodgers (2014) propose an “alternative conceptualization of assessment as an ethical, value-based social practice for the public good” (p. 5). Explicitly acknowledging assessment as a social and political act enables a move towards critical assessment, which “expose[s] and address[es] power, privilege, and structures” and “makes explicit the assumptions and intentions” underlying assessment choices (DeLuca Fernández, 2015, p. 5). For DeLuca Fernández, as well as Wall, Hursh, and Rodgers, assessment is framed as inquiry, reflection, and “mindfulness” about our own positions in relation to assessment work and the interests served by any assessment activity. Every aspect of the assessment cycle must be scrutinized and reflected on: “In order for assessment to be critical, practitioners must adopt an equity orientation when approaching each phase of the assessment cycle by considering positionality, agency, methodological diversity, and analysis” (Heiser, Prince, & Levy, 2017, p. 4).
As assessment practitioners, we are called upon to consider how our power and privilege shape every aspect of our work. In the following sections, we explore various aspects of the assessment process through this critical lens. At every stage, consideration of our own positionality in relation to our work, as well as of how we collaborate with students and faculty as partners in assessment, is essential. Furthermore, we “must identify and make clear [our] position relative to the work to be done. … An ethical practice of assessment asks those engaged in assessment to identify whose interests are being served in conducting a particular assessment process” (Wall, Hursh, & Rodgers, 2014, p. 12). In other words, critical assessment asks us to reflect continually on the question “By whom and for whom?” (McArthur, 2016, p. 978).
Defining Purposes, Asking Questions, & Recruiting Participants
The assessment cycle is typically depicted as a continuous loop without a definitive beginning or end. In practice, however, the decision to take on a project in the first place precedes our entry into this framework. In considering a project, practitioners may ask why it is necessary, what questions we hope to answer, and who we expect to be involved. We may consider the potential impact of a project on user populations, particularly around issues like survey fatigue and over-surveillance of particular groups. A more fully developed critical practice would extend these questions into an exploration of our own biases and those of our institutions and how they influence all stages of the assessment cycle. Critical practice asks us to explicitly acknowledge the individual and institutional agendas that precede the matching of question to method—who decides that an assessment project is needed in the first place, and why—as these perspectives influence the way in which a particular method may be deployed and the possible outcomes it may produce. Surfacing the interests driving the decision to undertake an assessment project can also enable us to be aware of how assessments can be mobilized to justify a decision that has already been made, what Hinchliffe calls “decision-based evidence-making” as opposed to evidence-based decision-making (2016, p. 13).
Even library workers attuned to critical practice can pursue questions that, while well-intentioned, risk unintended consequences for users. An overemphasis on studying students who already experience library anxiety or feel alienated by the library environment, for example, has the potential to reinscribe marginalization and feelings of otherness without care or meaningful change. As Hurtado (2015) puts it, “[i]t is not enough to demonstrate differences and inequality—we have plenty of studies that show disparities… but many of these studies fail to engender changes in society or higher education” (p. 290). Taking action based on the information we gather, an important way to respect the time and contribution of our participants, is even more crucial in relation to voices usually relegated to the margins: “Despite reports of how detrimental such disparities are to the larger social good, we can only conclude that the normative culture is invested in these inequalities in ways that complicate change” (Hurtado, 2015, p. 290).
Involving users and stakeholders in project design or question formation is one way of fostering a broader sense of inclusion in the work we do. As Heiser, Prince, and Levy (2017) suggest, “Inviting additional voices to discuss assessment processes such as determining what to measure, which questions to ask, what methods to use, and how to analyze and report findings can address positionality and subjectivity as well as give agency to stakeholders” (p. 5). As we discuss in more depth in later sections, involving students and faculty as partners in the process “recognizes the agency of participants” (Heiser, Prince, & Levy, 2017, p. 6), although we acknowledge that this must be done in ways that do not simply shift the burden of labor onto these partners.
Similarly, inclusive recruitment methods ensure that, when participant collaboration is undertaken, it occurs from (or at least aspires to) an equitable stance. The authors often use sampling and recruitment methods that appear deceptively neutral (survey sampling based on year in school to get a “representative” sample of the student population) or use convenience samples for smaller-scale, targeted projects. A more critically reflective assessment practice suggests we interrogate how these approaches exclude many of our students and faculty. In their recent conference presentation “How White is Your UX practice?” Larose and Barron (2017) discuss a number of strategies for recruitment and research design in user experience research that meaningfully includes diverse voices and experiences. They call for addressing the “role of unconscious bias in selecting participants”: “For example, with the grabbing-students-who-happen-to-be-around method of recruitment, there is a high chance your unconscious bias is influencing who you pick. This way of recruiting users can be quick and easy but it creates an exclusive structure where your users from marginalized groups are at a disadvantage” (p. 27).
It is not simply our own unconscious biases that shape who we select as participants; it is also that our assessments can operate in an echo chamber shaped by white cultural norms and values that largely include only those who are already connected with the library. For example, our space assessments at the University of Washington Libraries (including paper surveys handed out to patrons entering the library, space counts, and observations) are intended to improve spaces for our students and faculty, yet draw entirely on those who are already in, and likely most comfortable in, those spaces. Brook, Ellenwood, and Lazzaro (2015) explore in detail how university and library spaces reinforce institutional and individual power “invariably connected to a normative (male, able-bodied, upwardly mobile) Whiteness” (p. 256) and exclude those whose cultures and ways of working do not fit into white cultural norms. If our assessments of library spaces simply focus on improving what is already there, and hear only from those who are in those spaces already, we doubly exclude many of our patrons. A critical approach to space assessment, with the aim of creating spaces that are more responsive to the needs of many communities, requires us to move beyond the privileged space of the library itself to learn about the “communities that should be served by [library] spaces—what their needs, histories, and experiences are—and including them in decision-making processes over library spaces” (Brook, Ellenwood, & Lazzaro, 2015, p. 261). This involves not just more intentional recruitment (reaching out to student organizations representing a broad spectrum of identities, for example), but also asking if we might engage with students in the spaces in which they are comfortable and feel most empowered.
Methods & Data Gathering
Building on those reflections about why a project might be undertaken, how questions are formed, and who is involved, critical practices can be brought to bear on the selection and implementation of research methods. It is understood in library assessment that responsible practitioners choose a methodology based on the questions they intend to explore. However, Heiser, Prince, and Levy (2017) point to a different mindset when oriented to goals of social justice. Rather than looking for the “right” type of data to “best” answer the question, they suggest “practitioners operating from an equity orientation pose questions such as: Will this method reinforce a power dynamic? Does this method work for this population (e.g., survey or storytelling)? What additional method would provide a more comprehensive narrative around a program or service?” (p. 7). To understand why methods might or might not work for a population and the power dynamics at play requires attention to the often complex histories of those methods. Practitioners can reflect on the meanings and contexts of the methods they choose, who those methods might include or exclude, and explore ways to use methods in more critical ways.
Because of their prominence in library assessment, we focus on examples drawn from survey and ethnographic methodologies and imagine ways in which these methods might be enacted more critically. We also discuss participatory design, which provides valuable perspectives on more inclusive approaches to assessment.
Survey Methods
As we noted above, surveys saturate the library assessment and LIS landscape (Halpern, Eaker, Jackson, & Bouquin, 2015). Their widespread adoption is not hard to understand: as we have found in our own work, they can be deceptively easy to design, create, and distribute using free software and can be used to gather large data sets on a range of topics. However, the instrument itself does not provide neutral insight on the topic it addresses—nor is it necessarily harmless to the communities from which it extracts data. Survey methods are “root[ed] in the tradition of positivism, which embraces the pursuit of knowledge that is objective and value free” (Miner, Jayaratne, Pesonen, & Zurbrügg, 2012, p. 241). The notion that surveys produce objective data sets disregards the researcher’s positionality and influence: researchers decide what to ask, how to ask it, and, in many cases, what answers to provide. The experience the researcher or assessment practitioner brings to the subject frames the way in which the respondent expresses their own experience. This condition can limit the capacity of the respondent to be “counted” in a way that honors their identity, especially if there is a significant difference in the identities of researcher and subject. This section explores one aspect of survey methodology relating to demographic categories as an example of the ways we might ask critical questions as we employ these methods in library assessment.
The demographics of participants—in terms of both recruitment and the questions on the survey instrument itself—can be lightning rods for questions about inclusivity and representation. A critical lens on sampling and recruitment asks us to move beyond “representative” sampling to a broader and more inclusive approach to address inequities in our assessments. There is a significant body of literature on demographic factors affecting survey response or non-response: these factors include access to the internet (for web surveys), age, race, socioeconomic status, and gender (Fan & Yan, 2010). In surveys of the general population, women, white people, and those who are affluent and have higher levels of education are more likely to respond (Porter & Whitcomb, 2005, pp. 132-33). Some of the same demographics hold true for student populations, with women and white students more likely to respond to surveys. In their study of non-response rates of college students, Porter and Whitcomb (2005) add that students with higher GPAs are also more likely to respond, while those on financial aid are less likely to do so (p. 144). If we add considerations of ability (are our web surveys truly accessible?), time (students with families or multiple jobs or non-tenured faculty), and language (international students and those whose first language is not English), we must acknowledge that our library surveys are probably providing a deeply skewed understanding of our users and their needs. A library survey methodology focused on addressing inequities in library services, spaces, and resources would move beyond reliance on a “representative” sample and develop strategies to actively recruit students less likely to respond, possibly using snowball sampling or other peer-to-peer models for various communities.
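To make this kind of skew visible in practice, a simple first step is to compare respondent demographics against whatever institutional population data is available. The sketch below is a minimal, hypothetical illustration—the group names, shares, and threshold are invented rather than drawn from our own surveys. A gap flagged this way says nothing about why a group is not responding, but it can prompt the more intentional recruitment strategies described above.

```python
# Hypothetical sketch: compare the demographic makeup of survey respondents
# against known population shares to flag groups whose voices are likely
# missing. All group names and figures are invented for illustration.

population_share = {
    "first_generation": 0.32,      # share of the student body (institutional data)
    "international": 0.18,
    "receives_financial_aid": 0.45,
}

respondent_share = {
    "first_generation": 0.21,      # share of survey respondents
    "international": 0.09,
    "receives_financial_aid": 0.33,
}

def underrepresented(pop, resp, threshold=0.05):
    """Return groups whose share of respondents trails their population share
    by more than `threshold`, suggesting targeted recruitment (for example,
    peer-to-peer or snowball outreach) rather than reliance on a single
    'representative' sample."""
    return {
        group: round(share - resp.get(group, 0.0), 3)
        for group, share in pop.items()
        if share - resp.get(group, 0.0) > threshold
    }

print(underrepresented(population_share, respondent_share))
# {'first_generation': 0.11, 'international': 0.09, 'receives_financial_aid': 0.12}
```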
In asking survey questions about the demographics of participants, D’Ignazio and Klein (2016) ask us to “rethink binaries” (p. 2), to use “strategies premised on multiplicity rather than binaries, and acknowledge the limits of any binaristic view” (p. 2). They challenge us to “inquire how the processes associated with data collection and classification…might be made to better account for a range of multiple and fluid categories” (p. 2). Our own conversations about participant demographics demonstrate just how challenging this can be. During the development of the CAPAL/ACBAP2 2018 census of Canadian academic librarians, the working group responsible for its design, distribution, and analysis spent hours discussing how to frame questions about gender, race and ethnicity, and indigenous descent. The working group’s commitment to inclusion and respect for the identities of respondents was challenged by the logistics of question design that would produce clean, well-formatted data. In an attempt to both reconfigure the gender binary and avoid reinscribing an implied difference between “normal” (i.e., “male” and “female”) and “other,” the census provided six possible responses for gender, including “prefer not to say” and an optional text box. For the race and ethnicity question, however, the working group relied on sub-categories associated with the Canadian government’s legal definition of “visible minority”.3 While the group understood that the rigidity of these categories can feel exclusionary to some, the widespread social and political acceptance of the legal terms protected the group from making difficult decisions about which identities to include or exclude. The Canadian government similarly defines “Aboriginal peoples” or “Indigenous peoples”;4 yet the group chose in this case to consult with multiple Indigenous librarians and experts in Indigenous studies to ensure they did not alienate respondents in the responses they offered to the question of Indigenous descent.5
None of these options—custom or open-ended fields, legally entrenched categories, or community-derived responses—can account for all the “multiple and fluid” identities of participants. However, we would suggest that critical practice does not involve landing on the perfect survey question to avoid these complexities. Instead, the conversation itself—particularly asking how the data will be used to work toward identity-affirming goals or as a precaution against discrimination—is part of that critical practice. As critical assessment practitioners, we are also conscious of asking ourselves whether we need to collect demographic data at all unless it will be used to “reveal inequities” (to return to Stage’s conception of quantitative criticalism) and inform change.
As practitioners and survey designers, we face a constant push and pull between a desire for simplicity and the responsibility we feel to recognize the complexity of variegated identities. Effective survey design in and of itself is challenging, as evidenced by the many books and articles that cover what questions to ask and how to ask them. Critically interrogating each question and concept adds to the challenge, and yet “asking different questions and being explicit and unambiguous about the purpose of the research are two small steps toward beginning discussions…to shift from quantitative research efforts towards a more critical quantitative approach” (Berg, 2018, p. 230).
Ethnographic Methods
Ethnography is often framed as a solution to library assessment’s preoccupation with quantitative data, such as that derived from surveys, because of its potential to “[reveal] connections, meaning, and patterns” (Lanclos & Asher, 2016, p. 5). Ethnographic research methods do not seek to produce generalizable results, but rather highlight the richness and specificity of communities and allow multiple narratives to surface. Yet historically, ethnography has exhibited a problematic dichotomy in which the ethnographer’s authority in defining a culture is, at times, given more weight than the voices of members of the culture being studied, leading to potential appropriation and harm to the community. Critical ethnography acknowledges the positionality of the researcher and calls upon the researcher to leverage their position precisely to emphasize and disrupt the imbalance between “legitimate” and “illegitimate” research and evidence. It centers the community and attends closely to the purpose to which the research is put. Thomas (1993) describes critical ethnography as “conventional ethnography with a political purpose” (p. 4) and says “[c]ritical ethnographers describe, analyze, and open to scrutiny otherwise hidden agendas, power centers, and assumptions that inhibit, repress, and constrain. Critical scholarship requires that commonsense assumptions be questioned” (pp. 2-3).
Indigenous scholars similarly argue for the denaturalization of imperialist methodologies by interrogating the reification of Western research that simultaneously acknowledges the existence of non-Western worldviews and represents them back to the West as inferior and static (Tuhiwai Smith, 1999, p. 48). In discussing indigenous research, Gaudry (2011) asks the researcher to reframe this relationship as research with and for indigenous peoples as opposed to research on indigenous peoples (p. 134). It is not our intention to draw a comparison between indigenous communities (at the center of Gaudry’s work) and the sum of students and faculty at the center of our library assessment work. We acknowledge the different (though sometimes overlapping) structures of privilege and power at play in these distinct contexts. But we find value in Gaudry’s definition of insurgent research as a means of directing the researcher’s responsibility toward the community and the recognition that participants should be respected as experts in their own experiences.
We have begun to explore in our own work ways in which we might engage in assessments informed by a more critical ethnographic approach—with and for our patrons in ways that honor their experiences and expertise. At the University of Washington, we asked two student employees to take responsibility for qualitative observations during many of our mixed-methods space assessment projects. We provided them with training in observation and ethnographic methods and asked them to carry out the project themselves; they determined where and when to conduct the observations and what kinds of behavior to record; they developed their own codebook and synthesized and drafted recommendations based on the observations and their own experiences as students.
We are conscious that neither they nor we ourselves are anthropologists or experts in ethnography. Nevertheless, in our tentative experiment with critical qualitative methods, we endeavored to empower students themselves to define, scope, and carry out the project as a substantive contribution to the broader space assessment project. Their observations were contextualized by the space counts and survey results we gather through other methods, but, in an effort to preserve the authenticity of their observations, their recommendations were incorporated into our reports and discussions with little modification. However, we are also aware that this approach is limited in terms of its adherence to critical principles. We employ these students, which imposes a particular power dynamic, and we worked with them for several months or years prior to this project. Their understanding of what behavior to report and how it might be used is invariably shaped by our own understandings and the way we have trained them and asked them to complete projects over the duration of their employment with us.
While our exploratory efforts to use qualitative methods such as ethnography in critical ways have been imperfect, our attempts to understand and address the complex histories of the methods we use will continue. Gaining a better understanding of critical ethnography and indigenous responses to ethnographic methods can open up important questions about the power dynamics inherent in our assessment practices. Without a critical consideration of the potential of ethnographic methods to reproduce inequitable relationships and reinforce dominant ways of understanding the world, we risk producing “clever description, based not on the needs of the researched but instead on the needs of the researcher” (Thomas, 2010, p. 480). In this, we echo Lanclos and Asher’s call for careful application of ethnographic methods, not only to avoid the “ethnographish” approaches to assessment they critique but also to avoid replicating the harmful framing and exploitation in ethnography’s history.
Participatory Design
In addition to taking a more critical lens to our use of surveys and ethnographic methods, we have also begun to explore user experience approaches such as participatory design for their potential to help us move towards a more thoughtful and inclusive assessment practice. Participatory design is not a singular method, but rather “a socially-active, politically-conscious, values-driven approach to co-creation” (Young & Brownotter, 2018, p. 2). Principles of participatory design foreground the subject as collaborator, expert, and primary stakeholder. As a practice, it is gaining ground in library assessment, and combines indigenous research methods’ focus on power dynamics with critical ethnography’s focus on political change by highlighting the “imbalance of knowledge and power where it exists, and working toward a more just balance” (Young & Brownotter, 2018, p. 2). However, “there is a critical difference between going through the empty ritual of participation and having the real power needed to affect the outcome of the process” (Sanoff, 2014, p. 590). As we have argued throughout this article, it is important not simply to adopt participatory design as an alternative approach, but to reflect on how the method is used, by whom, and for whom. In conducting a participatory design project out of convenience, one author found her own lack of critical reflection limited the generative potential of the work.
In a project at Michigan State University, one of the authors worked with students (not employed by the Libraries) to conduct a space study of an under-utilized study area in the main library (Colbert, Dallaire, Richardson, & Turner, 2015). Though the collaboration was participatory in nature, with the students independently selecting a mixed-methods approach which involved conducting interviews, observing the space in use, and asking other students to draw their ideal study space, library staff narrowly focused their efforts, asking the students to answer the question “how can we design a space that is attractive and comfortable, while also encouraging active use and student success?” Inadvertently, this question preempted the recommendations the students might have made by foregrounding prior notions of comfort, aesthetic appeal, and the intent for the space to remain a public study area. On reflection, we now ask how the students might have conceived of the space and their relationship to/in it had staff not guided them to a predetermined outcome. The primary motivation in engaging the students was to complete a project staff would not otherwise have had time to do. We understood the basics of participatory research, but were unaware of its social and political roots. While the project was ultimately a positive experience for the students, we did not thoroughly consider the imbalance of authority and transparency in the working relationship. This example illustrates the importance of critical reflection even when enacting the most liberatory of research practices, in order to move into meaningful co-creation.
Data Analysis, Decision-making, & Communication
We have so far considered elements of the assessment cycle such as question formulation, participant recruitment, and methodological choices. We turn now to how we might view data analysis and interpretation, and the decision-making processes we use to take action on that data, through a critical lens. A critical perspective on data analysis relies on a perception of data not as objective truth but as subjective, situated, constructed, partial, and political. Understanding the limits of data stretches beyond understanding the limits of a particular method or data collection technique. In “Data Humanism,” Lupi pushes for a greater understanding of data itself: “Numbers are always a placeholder for something else.” She continues: “Data represents life. It is a snapshot of the world in the same way that a picture captures a small moment in time…Failing to represent these limitations and nuance and blindly putting numbers in a chart is like reviewing a movie by analyzing the chemical properties of the cellulose on which it was recorded” (Lupi, 2017, p. 2).
In a set of guidelines for enacting critical data principles, D’Ignazio and Klein (2016) draw on feminist theory, which “seeks to challenge claims of objectivity, neutrality, and universalism” (p. 2). Rather than reporting from a “view from nowhere,” they encourage analysis and designs that “facilitate pathways to multiple truths” (p. 2). Specifically, their call to “explicitly valorize marginal perspectives” echoes Berg’s call to “examine the outliers.” Berg (2018) explains that:
Descriptive statistics frequently focus on the qualities of the majority and report the average responses. We can understand more holistically the populations libraries support and serve by delving into and trying to understand those outside the majority, because the outliers are no less important despite their smaller numbers. In fact, increasing our understanding of the commonalities, qualities, and needs of the outliers will facilitate our abilities to better reach those who are often overlooked, underserved, and disregarded. (p. 231)
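A small, invented example can make Berg’s point concrete: an overall average can look perfectly acceptable while masking the experience of a smaller group. The sketch below is hypothetical—the groups and scores are not from any study cited here—and is meant only to show why disaggregation matters.

```python
# Hypothetical illustration: the overall mean hides a disparity that
# disaggregating by group reveals. All scores are invented.
from statistics import mean

scores_by_group = {
    "majority_group": [4, 5, 4, 4, 5, 4, 5, 4],  # satisfaction scores, 1-5 scale
    "smaller_group": [2, 1, 2, 2],
}

all_scores = [s for scores in scores_by_group.values() for s in scores]
print(f"Overall mean: {mean(all_scores):.1f}")        # 3.5 -- looks acceptable

for group, scores in scores_by_group.items():
    print(f"{group}: mean {mean(scores):.1f} (n={len(scores)})")
# majority_group: mean 4.4 (n=8)
# smaller_group: mean 1.8 (n=4)  <- invisible in the overall figure
```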
D’Ignazio and Klein (2016) push further, calling for a “two-way relation between subject and object of knowledge” (p. 3). In a recent Ithaka S+R report on student success, researchers asked students to provide their own definitions of the term (Wolff-Eisenberg & Braddlee, 2018). Like “library value,” demonstrating “student success” is a goal in much of library (and higher education) assessment. Rarely, however, is student success defined by students themselves; instead it is usually understood as the attainment of certain institutional metrics or benchmarks. Inviting library users into projects as experts as well as participants, and relying on their interpretation and recommendations to guide data analysis, “strengthen[s] core assessment practices and advance[s] equity efforts by centering the lived experiences of populations typically left at the margins by examining how meaning is assigned to data and employing collaborative approaches to analysis and reporting” (Heiser, Prince, & Levy, 2017, p. 9). This model allows participants to act as authorities themselves and to exert control over the narrative and decisions.
Young and Brownotter (2018) demonstrate this ethos in their description of a project that, through several rounds of storytelling, collaboration, and co-creation between library staff and Native American students, produced “a seven-part poster series and social media campaign in support of [the] university’s Native American population” (p. 9). Beyond the creation of the campaign, the students also recommended where and when the materials should be displayed physically and digitally. Quoting Hudson (2017), Young and Brownotter note that “‘to be included in a space is not necessarily to have agency within that space.’ To be present is not the same as to participate.” (p. 7). Enabling users to make these decisions requires us to “relinquish the notion of total control over space and instead empower students, faculty, and community members to take ownership of academic libraries and use them as sites of social justice” (Brook, Ellenwood, & Lazzaro, 2015, p. 261).
Sharing data analysis and decision-making power with library users does not negate the need for practitioner mindfulness. Assessment practitioners should interrogate their own positionality and power in relation to the data and in relation to those they ask to participate in the analysis. Heiser, Prince, and Levy (2017) ask assessment practitioners to consider “how do one’s identities or lived experiences influence data analysis? Do institutional values and norms influence data processing? Who are the findings serving?” (p. 9). D’Ignazio and Klein observe that answering these questions requires us to interrogate the values that structure the analysis in the first place and to observe the impact of previous projects—closing a different sort of assessment loop. They ask “when do values often assumed to be a social good, such as ‘choice,’ ‘openness,’ or ‘access’ result in disempowerment instead?” (p. 3; see also Johnson, 2014).
While meaningful and substantive collaboration has been a recurring theme throughout this article, it is particularly resonant with regard to data analysis, which is often restricted to a small number of expert voices and perspectives. Bringing in the voices of the population under inquiry, particularly as part of the sense-making and analysis process, and creating opportunities for them to make decisions about the kinds of actions to be taken in response help embody some of the critical data principles recommended by D’Ignazio and Klein.
Conclusion & Questions
Throughout this article, we have attempted to pose questions about various aspects of our assessment work in order to imagine what a more critical assessment practice might look like. These questions are drawn from a diverse set of fields and practices, including LIS, student affairs assessment, institutional and educational research, feminist and indigenous methods, and critical data studies. This work has enabled us to develop a series of questions we are exploring in intentional and transparent ways (to ourselves, our colleagues, our students and faculty, and our profession) as we undertake assessment:
- How do our own identities, institutional positions, and perspectives shape our work?
- What is the purpose of the assessment, who decides what to assess, and who benefits from the work?
- How can we more intentionally recruit library patrons to participate in assessments, and whose voices are privileged in our recruitment practices? How do we avoid essentializing communities at the margins?
- What are the histories and contexts of the methods we choose, and how do these shape our work? Do these methods risk alienating or silencing other voices?
- What is considered “evidence” and who decides?
- Are we engaging participants in meaningful discussions about privacy and who owns their data and empowering them to make decisions about how their data will be used, and by whom?
- How are we analyzing, interpreting, communicating, and acting on our data? Are we engaged in data analysis and interpretation as a collaborative and social practice? (Wall, Hursh, & Rodgers, 2014).
- Are we doing our work in ways that enable power sharing and engagement with user communities at all stages of the process, from question formulation and data analysis, to decision-making?
As our examples show, this work is complicated and requires considerable thought and effort. We recognize that there are substantive barriers to engaging in this reflective practice (time, resources, institutional pressures). The tensions we discussed at the start of the article are unresolved, but asking these questions has enabled us to consider how our own emphasis on the expedient and practical may have prevented us from asking important questions about how power and privilege shape “everyday” academic library assessment work. Posing these questions does not dismantle the structures of power within which we work, but we hope that by imagining different ways to do our work, we begin to open the door to more widespread critical assessment practice.
Acknowledgments
The authors would like to thank Jenna Nobs and Michelle May (former University of Washington assessment student employees) for their contributions and insights as we explored new ways of working. The authors would also like to thank Maurini Strub, Maura Seale, Ian Beilin, Steve Hiller, and Reed Garber-Pearson for their valuable feedback on this article.
References
Accardi, M. (2013). Feminist pedagogy for library instruction (Gender and Sexuality in Information Studies). Sacramento, CA: Library Juice Press.
Accardi, M. (2010). Teaching against the grain: Critical assessment in the library classroom. In Accardi, M., Drabinski, E., & Kumbier, A., Eds. Critical library instruction: Theories and methods. Duluth, MN: Library Juice Press.
Association of College and Research Libraries (2010). Value of academic libraries: A comprehensive research review and report. Researched by Megan Oakleaf. Chicago, IL: Association of College and Research Libraries. Retrieved from: http://www.ala.org/acrl/sites/ala.org.acrl/files/content/issues/value/val_report.pdf
Berg, S. (2018). Quantitative Researchers, Critical Librarians: Potential Allies in Pursuit of a Socially Just Praxis. In K. Nicholson & M. Seale, The Politics of theory and the practice of critical librarianship, pp. 225-35. Sacramento, CA: Library Juice Press. Retrieved from https://scholar.uwindsor.ca/leddylibrarypub/52.
Brook, F., Ellenwood, D., & Lazzaro, A. (2015). In Pursuit of Antiracist Social Justice: Denaturalizing Whiteness in the Academic Library. Library Trends, 64(2), 246-284.
Busch, L. (2017). Knowledge for sale : The neoliberal takeover of higher education (Infrastructures series). Cambridge, MA: MIT Press.
Colbert, D., Dallaire, E., Richardson, M., & Turner, K. (2015). 3W library redesign: Results report [internal report]. East Lansing, MI: Michigan State University Libraries.
Crown-Indigenous Relations and Northern Affairs Canada. (2017). Indigenous peoples and communities. Retrieved from https://www.rcaanc-cirnac.gc.ca/eng/1100100013785/1529102490303. Accessed 8 September 2018.
D’Ignazio, C. & Klein, L. (2016). Feminist Data Visualization. Published in the proceedings from the Workshop on Visualization for the Digital Humanities at IEEE VIS Conference 2016. Retrieved from: http://www.kanarinka.com/wp-content/uploads/2015/07/IEEE_Feminist_Data_Visualization.pdf.
DeLuca Fernández, S. (2015) Critical assessment. Webinar delivered for Student Affairs Assessment Leaders (SAAL) Structured Conversations series. December 9, 2015. Retrieved from: http://studentaffairsassessment.org/files/documents/SAAL-SC-Critical-Assessment-sdf-9-dec-2015-FINAL.pdf.
Denzin, N. & Giardina, M., Eds. (2017). Qualitative inquiry in neoliberal times. New York: Routledge.
Doucette, L. (2017). Acknowledging the Political, Economic, and Values-Based Motivators of Assessment Work: An Analysis of Publications on Academic Library Assessment. In Baughman, S., Hiller, S., Monroe, K. & Pappalardo, A., Eds. Proceedings of the 2016 Library Assessment Conference. Washington, D.C.: Association of Research Libraries. Retrieved from: http://old.libraryassessment.org/bm~doc/proceedings-2016.pdf.
Eisenhower, C. & Smith, D. (2010). The library as stuck place: critical pedagogy in the corporate university. In Accardi, M., Drabinski, E., & Kumbier, A, Eds. Critical library instruction: Theories and methods. Duluth, MN: Library Juice Press.
Fan, W., & Yan, Z. (2010). Factors affecting response rates of the web survey: A systematic review. Computers in human behavior, 26(2), 132-139.
Farkas, M. (2013). Accountability vs. Improvement: Seeking Balance in the Value of Academic Libraries Initiative. OLA Quarterly, 19(1), 4-7. Retrieved from: https://pdxscholar.library.pdx.edu/ulib_fac/74/.
Fister, B. (2012). The Self-Centered Library: A Paradox. Library Babel Fish. Retrieved from: https://www.insidehighered.com/blogs/library-babel-fish/self-centered-library-paradox.
Ford, E. (2012). What do we do and why do we do it? In the Library with the Lead Pipe. Retrieved from: https://www.inthelibrarywiththeleadpipe.org/2012/what-do-we-do-and-why-do-we-do-it/.
Gammons, R. (2016). Incorporating critically conscious assessment into a large-scale information literacy program. In Pagowsky, N., & McElroy, K., Eds. Critical library pedagogy handbook. Volume 2, Lesson plans. Chicago, IL: Association of College and Research Libraries, pp. 235-240.
Garcia, K. (2015). Keeping up with … Critical librarianship. Association of College & Research Libraries. Retrieved from: http://www.ala.org/acrl/publications/keeping_up_with/critlib.
Gardner, C.C., & Halpern, R. (2016). At odds with assessment: being a critical educator within the academy. In N. Pagowsky and K. McElroy, Eds. Critical Library Pedagogy Handbook. Volume 1. Chicago, IL: Association of College and Research Libraries, pp. 41-51.
Gaudry, A.J.P. (2011). Insurgent research. Wicazo Sa Review, 26(1), 113-136. doi: 10.5749/wicazosareview.26.1.0113
Giroux, H., & Cohen, R. (2014). Neoliberalism’s war on higher education. Chicago, IL: Haymarket Books.
Halpern, R., Eaker, C., Jackson, J., & Bouquin, D. (March 2015). #Ditchthesurvey: Expanding methodological diversity in LIS research. In the Library with the Lead Pipe. Retrieved from: https://www.inthelibrarywiththeleadpipe.org/2015/ditchthesurvey-expanding-methodological-diversity-in-lis-research/.
Harvey, D. (2005). A brief history of neoliberalism. Oxford; New York: Oxford University Press.
Heiser, C., Prince, C., & Levy, J. D. (2017). Examining Critical Theory as a Framework to Advance Equity Through Student Affairs Assessment. The Journal of Student Affairs Inquiry, 2(1), 1-15.
Hinchliffe, L.J. (2016). Sensemaking for decisionmaking [keynote address]. 2016 Library Assessment Conference. Arlington, VA.
Hudson, D. J. (2017). On “diversity” as anti-racism in library and information studies: A critique. Journal of Critical Library and Information Studies, 1(1). Retrieved from http://libraryjuicepress.com/journals/index.php/jclis/article/view/6.
Hufford, J. R. (2013). A Review of the Literature on Assessment in Academic and Research Libraries, 2005 to August 2011. Portal: Libraries and the Academy, 13(1), 5-35.
Hurtado, S. (2015). The transformative paradigm: principles and challenges. In Martínez-Alemán, A., Pusser, B, & Bensimon, E.M. (Eds.) Critical approaches to the study of higher education: A practical introduction. Baltimore, MD: Johns Hopkins University Press, pp. 285-307.
Johnson, J. A. (2018). Toward information justice: Technology, politics, and policy for data in higher education administration. San Antonio, TX: Springer.
Johnson, J. A. (2014). From open data to information justice. Ethics and Information Technology, 16(4), 263-274.
Lanclos, D., & Asher, A. D. (2016). ‘Ethnographish’: The state of ethnography in libraries. Weave: Journal of Library User Experience, 1(5). Retrieved from: https://quod.lib.umich.edu/w/weave/12535642.0001.503?view=text;rgn=main.
Larose, K., & Barron, S. (2017). How white is your UX practice?: inclusion and diversity in critical UX research. User Experience in Libraries: Yearbook 2017. CreateSpace Independent Publishing Platform, 23-33.
Lilburn, J. (2017). Ideology and audit culture: standardized service quality surveys in the academic library. Portal: Libraries and the Academy, 17(1), 91-110.
Lupi, G. (2017). Data humanism: the revolutionary future of data visualization. PrintMag, 30 Jan 2017. Retrieved from: http://www.printmag.com/information-design/data-humanism-future-of-data-visualization/.
McArthur, J. (2016). Assessment for social justice: the role of assessment in achieving social justice. Assessment & Evaluation in Higher Education, 41(7), 967-981.
Miner, K., Jayaratne, T., Pesonen, A. & Zurbrügg, L. (2012). Using survey research as a quantitative method for feminist social change. In Hesse-Biber, S.N. (ed.) Handbook of feminist research: Theory and praxis (pp. 237-263). Thousand Oaks, CA: SAGE Publications Ltd. doi: 10.4135/9781483384740
Nicholson, K. P. (2017). The “Value Agenda”: Negotiating a path between compliance and critical practice [keynote address]. Canadian Libraries Assessment Workshop (CLAW). University of Victoria, October 26, 2017.
Nicholson, K. P. (2015). The McDonaldization of academic libraries and the values of transformational change. College & Research Libraries, 76(3), 328-338.
Porter, S. R., & Whitcomb, M. E. (2005). Non-response in student surveys: The role of demographics, engagement and personality. Research in higher education, 46(2), 127-152.
Quinn, B. (2000). The McDonaldization of academic libraries? College & Research Libraries, 61(3), 248-261.
Revitt, E. (2016). Putting the who in the Canadian academic librarian community: CAPAL Census. Open Shelf. Retrieved from http://open-shelf.ca/161001-ocula-capal-academic-librarian-census/. Accessed: 8 August 2018.
Revitt, E., Schrader, A., & Kaufman, A. (2016). 2016 Census of Canadian Academic Librarians User Guide and Results Summary. Canadian Association of Professional Academic Librarians. Retrieved from https://capalibrarians.org/wp/wp-content/uploads/2016/12/Census_summary_and_user_guide_December_16_2016.pdf. Accessed: 30 September 2018.
Sanoff, H. (2014). Participatory design programming. In Coghlan, D., & Brydon-Miller, M. (eds). The SAGE encyclopedia of action research (Vols. 1-2). London, UK: SAGE Publications Ltd. doi: 10.4135/9781446294406.
Saunders, D. B. (2015). They do not buy it: exploring the extent to which entering first-year students view themselves as customers. Journal of Marketing for Higher Education, 25(1), 5-28.
Saunders, D. B. (2010). Neoliberal Ideology and Public Higher Education in the United States. Journal for Critical Education Policy Studies, 8(1), 41-77.
Seale, M. (2013). The neoliberal library. In Information literacy and social justice: Radical professional praxis, L. Gregory & S. Higgins, Eds. (pp. 39-61). Sacramento, CA: Library Juice Press.
Stage, F. K. (2007). Answering critical questions using quantitative data. New Directions for Institutional Research, 133, 5-16.
Stage, F. K., & Wells, R. S. (2014). Critical quantitative inquiry in context. New Directions for Institutional Research, 158, 1-7.
Statistics Canada. (2016). Visible minority of person. Government of Canada. Retrieved from http://www23.statcan.gc.ca/imdb/p3Var.pl?Function=DECI&Id=257515.
Thomas, J. (2010). Toward a critical ethnography: A re-examination of the Chicago legacy. In Atkinson, P., & Delamont, S. (Eds.), SAGE qualitative research methods (pp. 478-490). Thousand Oaks, CA: SAGE Publications Ltd. doi: 10.4135/9780857028211.
Thomas, J. (1993). Doing critical ethnography. Newbury Park, CA: SAGE Publications Ltd. doi: 10.4135/9781412983945.
Tuhiwai Smith, L. (1999). Decolonizing methodologies: Research and indigenous peoples. New York, NY: Palgrave.
Wall, A. F., Hursh, D., & Rodgers III, J. W. (2014). Assessment for Whom: Repositioning Higher Education Assessment as an Ethical and Value-Focused Social Practice. Research & Practice in Assessment, 9, 5-17.
Wolff-Eisenberg, C., & Braddlee. (2018). Amplifying student voices: The Community College Libraries and Academic Support for Student Success project. Ithaka S+R. Retrieved from http://www.sr.ithaka.org/wp-content/uploads/2018/08/SR_Report_Amplifying_Student_Voices_CCLASS-_08132018.pdf.
Young, S., & Brownotter, C. (2018). Towards a more just library: Participatory design with Native American students. Weave: Journal of Library User Experience, 1(9). Retrieved from: https://osf.io/preprints/lissa/7jmtg/.
1. Harvey summarizes neoliberalism as “a theory of political economic practices that proposes that human well-being can best be advanced by liberating individual entrepreneurial freedoms and skills within an institutional framework characterized by strong private property rights, free markets, and free trade” (2005, p. 2). In the context of higher education, this manifests itself in a focus on revenue generation and efficiency (Saunders, 2010, p. 43) and is underpinned by an emphasis on “market-like competition—among institutions, scientists, scholars, and students” (Busch, 2017, p. 1), with a concomitant focus on audits, rankings, and quantitative performance measurements such as student test scores or metrics of faculty productivity (Busch, 2017; Denzin & Giardina, 2017). As Giroux and others note, this view of higher education positions students as consumers or customers and faculty as “providers of a saleable commodity such as a credential or a set of workplace skills” (Giroux & Cohen, 2014, p. 16; Saunders, 2010 & 2015).
2. Canadian Association of Professional Academic Librarians/Association Canadienne des bibliothécaires académiques professionnels
3. See http://www23.statcan.gc.ca/imdb/p3Var.pl?Function=DECI&Id=257515
4. See Crown-Indigenous Relations and Northern Affairs Canada, 2017
5. For more information on the census as a whole – including consultation and testing during its creation and its underlying goals of advocacy and policy development – see Revitt, 2016 and Revitt, Schrader, & Kaufman, 2016.