Research impact is loosely defined as how broadly scholarly research is being read, discussed, and used both inside and outside of the academy. This guide is designed to help researchers and students better understand how research impact is currently measured, what resources Duke offers for assessing impact, and what improvements to the current systems of measurement have been proposed.

While all of these tools can be revealing in helpful ways, they are also all susceptible to misinterpretation and misuse. No one metric or tracking tool is an unalloyed good, nor are all of them equally useful in all disciplines. However, researchers can explore the different options available, the pros and cons of each, how to assess them critically, and how to use them to the benefit of their own careers and the careers of younger researchers they are mentoring.

Current Tools for Assessing Research Impact

  1. Impact factor is a measurement of the citation rate of a particular journal. Individuals do not have an impact factor; publications do. The impact factor for a given year counts citations made that year to articles the journal published in the previous two years, divided by the number of citable items it published in those two years. Because citation practices vary widely between disciplines, impact factors also vary. While the number can differ depending on the tool used to count citations, Thomson Reuters’ Journal Citation Reports tool is generally the industry standard for measuring impact factor.
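As an illustration, the standard two-year calculation can be sketched as follows (the journal and its numbers are hypothetical):

```python
def impact_factor(citations, citable_items):
    """Two-year impact factor for year Y: citations received in year Y
    to items the journal published in years Y-1 and Y-2, divided by the
    number of citable items published in those same two years."""
    return citations / citable_items

# Hypothetical journal: 300 citations in 2024 to its 2022-2023 output,
# of which 120 were citable items (research articles and reviews).
print(impact_factor(300, 120))  # 2.5
```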


  2. The Eigenfactor is similar to impact factor in that it attempts to measure the “total importance” of a journal. It considers citations within five years of publication (more than the two of JCR’s impact factor). It also weights citations from more influential journals more heavily than those from less influential journals. The Eigenfactor scores of all the journals in the Eigenfactor index are scaled to sum to 100, so that a journal with a score of 1.00 has 1% of the “total importance” of all the journals in the index.
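The citation-weighting idea can be illustrated with a simplified, PageRank-style power iteration over a small hypothetical citation matrix (the real Eigenfactor algorithm uses a five-year window, excludes journal self-citations, and makes further adjustments this sketch omits):

```python
def eigenfactor_sketch(cites, iterations=100, damping=0.85):
    """Toy Eigenfactor-style scores. cites[i][j] is the number of
    citations from journal i to journal j. Citations from journals that
    themselves score highly count for more, and the final scores are
    scaled to sum to 100, so a score of 1.00 means 1% of the "total
    importance" of the index."""
    n = len(cites)
    out_totals = [sum(row) for row in cites]
    scores = [1.0 / n] * n
    for _ in range(iterations):
        # Each journal's new score is the damped sum of the shares of
        # influence flowing in from the journals that cite it.
        scores = [
            (1 - damping) / n + damping * sum(
                scores[i] * cites[i][j] / out_totals[i]
                for i in range(n) if out_totals[i] > 0
            )
            for j in range(n)
        ]
    total = sum(scores)
    return [100 * s / total for s in scores]

# Three hypothetical journals; journal 0 receives most of the citations.
cites = [
    [0, 5, 5],
    [20, 0, 2],
    [15, 3, 0],
]
scores = eigenfactor_sketch(cites)
print([round(s, 1) for s in scores])  # journal 0 gets the largest share
```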


  3. The h-index is a measurement of an individual researcher’s impact. The index takes into consideration the number of citations of a researcher’s publications (i.e. a researcher with an index of h has published h papers that have each been cited at least h times). The h-index was proposed in 2005 by physicist Jorge Hirsch and is alternately called the “Hirsch index” or “Hirsch number.” A number of services calculate the h-index, and the number can vary depending on the service used.

The Duke Medical Center Library has a comprehensive guide to individual author impact available here.
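The h-index definition lends itself to a short computation; this sketch uses made-up citation counts:

```python
def h_index(citation_counts):
    """Largest h such that h of the papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Hypothetical researcher with six papers:
print(h_index([25, 8, 5, 3, 3, 1]))  # 3 (three papers with >= 3 citations)
```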


  4. Article-level metrics: Instead of attempting to measure journals or individuals, article-level metrics measure the impact of individual articles. PLoS pioneered this approach, measuring usage (page views, downloads), citations (using data from Scopus, Web of Science, PMC, etc.), and social networking mentions. Other publishers (primarily big, commercial, scientific publishers) have begun to use article-level metrics as well, often displaying the Altmetric “donut” badge on their sites. These new article-level “altmetrics” make it easier to see the context of the impact. In other words, they show not only how many times an article was cited, but who was doing the citing (including in places other than scholarly literature) and what they said. Some of these metrics can indicate how the research is being used: how many times it has been downloaded, from where and by whom, whether readers are saving it for their own future reference, whether libraries are buying a copy of a book, and other indicators of value and impact. There is a guide to altmetrics and using them in your CV available here.

SPARC has a thorough primer on this topic.


  5. Metrics for book citations are much less mature than those for journal articles. Thomson Reuters offers a book citation tool, but it is less standardized than measurements of the impact of journal articles. Researchers may use Google Scholar, Web of Science, or Scopus to track book citations, and generally consider them reliable, but these tools are not often used to assess impact in book-heavy disciplines.




Duke Resources for Assessing Research Impact (alphabetical)

DukeSpace downloads

The DukeSpace repository serves as a place to archive and disseminate open access versions of faculty and student publications, data, and other scholarly outputs. The repository is indexed by Google and other major search engines, making the items archived findable outside of Duke and library websites. It is the primary mechanism for delivery of theses and dissertations by Duke students, and articles by faculty authors made available under Duke’s Open Access Policy.

For every item archived in DukeSpace, the repository records the number of times the record’s webpage has been visited as well as how many times the item itself (PDF, data, etc.) has been downloaded. These download reports can show faculty and students that their work is actively being used. The reports include the countries and cities from which users accessed their articles, which can demonstrate the global reach of Duke research. To see these statistics for any given item or category in DukeSpace, look for the “View Statistics” link in the sidebar.

Impactstory (individual subscription)

Impactstory is a non-profit, web-based, open-source tool that is designed to help scholars discover and examine how their research outputs are having an impact both within their fields and outside of them. Impactstory collects data on “traditional” research metrics such as citation counts and also altmetrics such as mentions on blog posts and social media. It can be integrated with ORCID, SlideShare, GitHub, Google Scholar, and FigShare, all of which can help make authors’ work more discoverable on the open web.

Impactstory is a subscription service that allows authors to access its data for $60 per year, though authors can try it free of charge for 30 days.

Article level metrics from publishers

Article level metrics are a way of measuring the attention and impact of particular scholarly articles, which can provide more useful information than more generalized metrics that aggregate information for a journal overall. By displaying how many times the article has been downloaded, mentioned, or cited, they allow both the author and readers of the article to see how it has been received and referenced. Article level metrics are generally updated on a daily or weekly basis, so they provide a quicker sense of the impact of an article than citations alone, which take much longer to accumulate.

These kinds of metrics are increasingly being displayed directly on the article page in journals, most notably PLOS journals, among others.

For more information about article level metrics, see this primer from SPARC.

Google Scholar

Google Scholar provides citation counts at the article level. Under ‘Metrics’ it also provides journal-level indicators such as the h5-index, h5-core, and h5-median, which are computed over articles published in the last five complete years.

NB: Google Scholar’s citation counts should be taken with a big grain of salt because quality control on the items pulled into the search is dubious (inflated citation counts, phantom citations, and poor metadata). More reliable and consistent bibliometric searching can be found in Web of Science.

An excellent comparison of various citation and metric features for Scopus, Web of Science, and Google Scholar can be found here.

Web of Science tools

Book Citation Index (Web of Science): Indexes 50,000+ selected books from 2005 to present. The Book Citation Index is included as a default search in the Web of Science Core Collection. It can also be searched separately under “More Settings.” BCI records provide links to related citations.

Data Citation Index (Web of Science): Indexes multidisciplinary datasets and data studies — with particular strengths in life sciences data (48%). Provides abstracts and contextual linking to articles citing the data.

Journal Citation Reports (JCR, Web of Science): Provides metrics for individual journals based on citation data, including total citations, impact factor, Eigenfactor scores, and various other indicators.


Scopus

Similar to Web of Science, Elsevier’s Scopus is a multidisciplinary citation index with particular strengths in current life and medical science research literature published after 1996. Scopus indexes 57 million records and provides powerful citation-tracking features. An author search provides h-index information and an “Analyze Author Info” page with information about all publications associated with a particular author.


WorldCat

WorldCat is a good place to learn about library holdings for books. Though it won’t tell you anything about whether books have been used, it will at least give an idea of just how many libraries have copies of a book, which should give some idea of the reach of a particular title.

WorldCat Identities is a tool that draws data from WorldCat. You can do an author search and find out information like the most widely held works, number of editions, translations, associated subjects, and more. There’s also a publication timeline that features both books by and about the author.


Proposed Changes to Understanding Research Impact

The San Francisco Declaration on Research Assessment (DORA), published in 2013, “calls for placing less emphasis on publication metrics and becoming more inclusive of non-article outputs.” DORA places specific emphasis on how flawed the journal impact factor is, despite its widespread use.

DORA calls for academic institutions to pledge to:

  • Establish new criteria for hiring, tenure, and promotion that emphasize the quality of the research content “rather than the venue of publication.”
  • Consider other research outputs beyond journal publications so that promotion and funding decisions are no longer made based on citations and quantitative metrics alone.
  • Be aware of the different types of metrics and their strengths and weaknesses.

Publishers and metrics providers are encouraged to:

  • De-emphasize the importance of impact factor in their marketing and explain that it is only one method of assessing research impact.
  • Be transparent about their data collection methods.



The Leiden Manifesto is a document published in 2015 that provides a list of ten major principles for changing the way research is assessed:

  • Quantitative evaluation should support qualitative, expert assessment.
  • Measure performance against the research missions of the institution, group, or researcher.
  • Protect excellence in locally relevant research.
  • Keep data collection and analytical processes [about research] open, transparent, and simple.
  • Allow those evaluated to verify data and analysis [about their work].
  • Account for variation by field in publication and citation practices.
  • Base assessment of individual researchers [at an institutional level] on a qualitative judgement of their portfolio.
  • Avoid misplaced concreteness and false precision [of quantitative methods of data collection].
  • Recognize the systemic effects of assessment and indicators and prefer a “suite of indicators” over a single metric such as journal impact factor.
  • Scrutinize indicators regularly and update them to reflect changing research ecosystems.

For the full manifesto, see:


If you have questions about research metrics and how to assess the impact of your scholarly work, contact: