Academic institutions increasingly rely on bibliometric measures to assess the quality of their faculty's research output. Tenure and promotion deliberations, as well as funding decisions, often invoke the Journal Impact Factor (JIF), the h-index, citation counts, and a growing array of newly developed performance metrics. Similarly, funding agencies use available bibliometric measures to identify grant-worthy research proposals and ongoing projects. While the proponents of metrics insist they impartially capture the quality of research, their opponents point out that these measures are not reliable tools for evaluating scholarly output.

In the past few years, the ongoing debate on metrics-driven research assessment has gained momentum. In particular, the JIF, the most influential metric by far, has come under fire in the scientific community. Perhaps the most vocal and cross-disciplinary critique was formulated during the December 2012 meeting of the American Society for Cell Biology (ASCB). The critique and manifesto have become known as DORA, the San Francisco Declaration on Research Assessment.1 The DORA declaration calls for placing less emphasis on publication metrics and becoming more inclusive of non-article outputs. The 82 original organizational signatories of DORA included ASCB and other scientific societies from around the world. Editorial boards of well-known journals, prestigious research institutes and foundations, and providers of new metrics (Altmetric LLP and Impactstory, both promoting the use of altmetrics) lent their support as well. As of late January 2014, DORA had more than 400 organizational supporters, and more than 10,000 individuals had signed the declaration.2 Since DORA was issued, its critique and recommendations have been discussed in scientific journals3 and blogs,4 and on academic portals such as The Chronicle of Higher Education.5 The debates around research assessment have even been brought to the general public's attention in The Guardian6 and, most recently, The Atlantic.7

DORA's call for new and improved research assessment tools singles out the JIF as the deeply flawed, yet disproportionately important, journal-based metric that has come to dominate decisions about hiring, promotion, and funding.
Eugene Garfield, the scientist who formulated the algorithm for calculating the JIF, started to explore the idea in 1955.8 The formula was applied to determine which journals should be included in the first Science Citation Index, published by the Institute for Scientific Information for the year 1961.9 Thomson Reuters, the company that has been issuing the Journal Citation Reports (an annual ranking of journals) since 1975, calculates the JIF by dividing the number of citations made in a given year to items published in a journal in the previous two years by the total number of articles and reviews published in those two years.10 The formula, then, measures how many times an article from the journal has been cited, on average, in a given year.

Although the impact factor was originally meant to identify influential journals only, with time it has come to be interpreted as a measure of author and article impact as well. A researcher's tenure and promotion often depend on his or her publication metrics. Similarly, grant applicants are under pressure to demonstrate their scientific productivity by publishing their work in high-impact journals.

Critiquing the unwarranted reliance on the JIF as an indicator of an article's or researcher's importance, the proponents of alternative methods of research assessment point to further characteristics that make the JIF an inadequate evaluation tool. DORA briefly lists a few of them, but they merit a closer look.

To start, it has been established that the distribution of citations is deeply skewed, a phenomenon reflected in the 80/20 rule: just 20% of articles receive 80% of the citations.11 In other words, the JIF is not representative of the impact of individual articles; an article published in a high-impact journal should not automatically be assumed to be of great importance or quality.

Moreover, the JIF differs greatly from one field to another, a fact that makes cross-disciplinary evaluations moot.12 For example, the 2004 weighted impact factor for mathematics journals was 0.56; for molecular and cell biology it was eight times as high, 4.76.13 These differences have to do with varied citation practices across fields, discipline-dependent lag times between publication and citation, as well as the discipline-specific number of citations an average article includes.14
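To make the calculation and the skew problem concrete, here is a minimal sketch in Python; the journal, the citation counts, and the variable names are all invented for illustration, and this is not Thomson Reuters' data or code. It applies the two-year averaging formula described above and shows how a handful of heavily cited papers can yield a respectable impact factor for a journal whose typical article is cited once or not at all.

from statistics import median

# Hypothetical journal: citations received in 2013 by each of the 50 citable
# items (articles and reviews) it published in 2011 and 2012. A few heavily
# cited papers dominate, mirroring the skewed (80/20) pattern noted above.
citations_2013 = [120, 85, 60, 40, 30] + [2] * 19 + [1] * 10 + [0] * 16

# Two-year JIF: citations made in 2013 to 2011-2012 items, divided by the
# number of citable items the journal published in 2011-2012.
jif_2013 = sum(citations_2013) / len(citations_2013)

print(f"2013 impact factor (mean citations per item): {jif_2013:.2f}")  # 7.66
print(f"Median citations per item: {median(citations_2013)}")  # 1.0
print(f"Items never cited: {citations_2013.count(0) / len(citations_2013):.0%}")  # 32%

In this invented example, the journal's impact factor of roughly 7.7 says little about its typical article, which is cited once at most; that disconnect is precisely what DORA objects to when the JIF is used to judge individual papers or their authors.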
Furthermore, the JIF lends itself to manipulation, a weakness of which numerous journals have taken advantage. To inflate their ranking, journals may resort to practices such as coercive self-citation, where authors are pressured to include citations to the journal in which their article is to be published.15 The release of the Journal Citation Reports for the year 2012 was accompanied by a list of 65 titles suppressed for "anomalous citation patterns resulting in a significant distortion of the Journal Impact Factor, so that the rank does not accurately reflect the journal's citation performance in the literature."16 The gaming of the JIF may be monitored, but it cannot be prevented.

In light of these limitations, DORA puts forward a set of recommendations for the scientific community. To decrease their reliance on journal-based metrics, DORA asks that members of the scientific community commit to reformulating their definitions of research quality. As set by academic institutions, criteria for hiring, tenure, and promotion should stress the content rather than the venue of publication. Additionally, institutions and funding agencies alike are urged to consider research outputs other than articles; if varied forms of research output were considered, the measurement of research impact would no longer be confined to publication and citation metrics.

Publishers, in turn, must take action to minimize the prominence of the JIF. It should not be emphasized in journal marketing, or, if it is used nevertheless, it should be presented as merely one of many available journal-based metrics. Moreover, articles should not be subject to limits on the number of references, and authors should be required to cite primary research rather than reviews.

Metrics providers have a role to play as well. They should make their data and methods transparent and available to the public. Their organizations should also be vocal about, and discourage, the abuse and manipulation of metrics.

Institutional and organizational efforts to move away from the reliance on publication-level metrics are not sufficient, DORA argues. As members of groups involved in hiring, tenure, promotion, and funding decisions, scholars should expose the limitations of journal-based metrics and advocate alternative methods of research assessment. As candidates for tenure or promotion and as applicants for funding, researchers should represent the quality of their work through a range of metrics rather than rely on publication-level metrics alone. Furthermore, as authors, scholars should cite primary research over reviews to promote original scholarship.

DORA's message, then, only gains urgency. If "scientific output is [to be] measured accurately and evaluated wisely," the current assessment practices must be modified and supplanted by new tools that account for, rather than overlook, the complexity and variety of research outputs.17 The overdependence on the JIF and other publication metrics, DORA signatories well realize, can be effectively challenged only through a concerted effort of the entire scientific community, including researchers, institutions, and funding agencies. The petition identifies the pitfalls of an uncritical reliance on existing assessment criteria and outlines steps that should be taken to lessen it; ultimately, however, a shift in research evaluation methods will only take place if the scientific community takes action and adopts tools other than journal-based metrics.

Academic librarians are well positioned to promote DORA's call to expand research assessment beyond the JIF.
First, it is crucial that faculty and personnel committees are well informed about the caveats that come with bibliometrics. Accordingly, it is not enough that many libraries provide access to Thomson Reuters products such as the Web of Science and Journal Citation Reports. To encourage a judicious use of the metrics these and other databases collect, librarians should ensure that information about their strengths and weaknesses is easily available.

The University of Michigan Library offers a useful example of how such a task can be accomplished. A group of librarians put together a comprehensive and well-organized Citation Analysis Guide18 discussing the JIF and other measures in depth. My colleague Kathleen Collins and I created a similar guide for the John Jay College community.19

As DORA points out, however, being knowledgeable about the limitations of the JIF and other metrics is not enough. If assessment practices are to change, new tools need to be promoted. To that end, librarians may also endeavor to keep abreast of new developments in the field. For example, we continue to update our guide with emerging assessment trends. Our guide now invites faculty to consider altmetrics and alerts them to groundbreaking initiatives, such as the Faculty Media Impact Project.20 In addition to the online guide, we have disseminated information about these varied assessment tools through a variety of venues on campus. We offered workshops in the library, at the Center for the Advancement of Teaching, and in partnership with the Office of Institutional Research. All were well attended, and the participants assured us that the information presented was useful.

With these and related initiatives, academic librarians can actively contribute to the debates around research assessment and further the cause of DORA.

Notes
1. American Society for Cell Biology, "San Francisco Declaration on Research Assessment," http://am.ascb.org/dora/ (accessed January 31, 2014).
2. Ibid.
3. Bruce Alberts, "Impact Factor Distortions," Science 340, no. 6134 (2013): 787. Colin Macilwain, "Halt the Avalanche of Performance Metrics," Nature 500, no. 7462 (2013): 255.
4. Barbara Fister, "Library Babel Fish: End Robo-Research Assessment," www.insidehighered.com/blogs/library-babel-fish/end-robo-research-assessment (accessed January 31, 2014).
5. Paul Basken, "Researchers and Scientific Groups Make New Push against Impact Factors," http://chronicle.com/article/ResearchersScientific/139337/ (accessed January 31, 2014).
6. Randy Schekman, "How Journals like Nature, Cell and Science Are Damaging Science," www.theguardian.com/commentisfree/2013/dec/09/how-journals-nature-science-cell-damage-science (accessed January 31, 2014).
7. Haider Javed Warraich, "Impact Factor and the Future of Medical Journals," www.theatlantic.com/health/archive/2014/01/impact-factor-and-the-future-of-medical-journals/282763/ (accessed January 31, 2014).
8. Eugene Garfield, "The History and Meaning of the Journal Impact Factor," JAMA: The Journal of the American Medical Association 295, no. 1 (2006): 90-93.
9. Ibid.
10. Thomson Reuters, "The Thomson Reuters Impact Factor," n.d., http://wokinfo.com/essays/impact-factor/ (accessed January 31, 2014).
11. Garfield, "The History and Meaning of the Journal Impact Factor." David A. Pendlebury, "The Use and Misuse of Journal Metrics and Other Citation Indicators," Archivum Immunologiae et Therapiae Experimentalis 57, no. 1 (2009): 1-11.
12. Som D. Jarwal, Andrew M. Brion, and Maxwell L. King, "Measuring Research Quality Using the Journal Impact Factor, Citations and 'Ranked Journals': Blunt Instruments or Inspired Metrics?," Journal of Higher Education Policy and Management 31, no. 4 (2009): 289-300.
13. Benjamin M. Althouse, Jevin D. West, Carl T. Bergstrom, and Theodore Bergstrom, "Differences in Impact Factor across Fields and over Time," Journal of the American Society for Information Science and Technology 60, no. 1 (2009): 27-34.
14. Ibid. Per O. Seglen, "Why the Impact Factor of Journals Should Not Be Used for Evaluating Research," BMJ: British Medical Journal 314, no. 7079 (1997): 498.
15. Allen W. Wilhite and Eric A. Fong, "Coercive Citation in Academic Publishing," Science 335, no. 6068 (2012): 542-543.
16. Thomson Reuters, "Journal Citation Reports Notices®," last modified September 27, 2013, http://admin-apps.webofknowledge.com/JCR/static_html/notices/notices.htm (accessed January 31, 2014).
17. American Society for Cell Biology, "San Francisco Declaration on Research Assessment."
18. University of Michigan Library, "Citation Analysis Guide," last modified February 6, 2014, http://guides.lib.umich.edu/citation (accessed February 13, 2014).
19. Lloyd Sealy Library, "Faculty Scholarship Resources," last modified November 26, 2013, http://guides.lib.jjay.cuny.edu/citation (accessed February 13, 2014).
20. Center for a Public Anthropology, "Faculty Media Impact Project," n.d., http://facultyimpact.publicanthropology.org/ (accessed February 13, 2014).