Beatty, John, Citation Databases for Legal Scholarship: Ranking the Top 28 Law Faculties (June 16, 2021). University at Buffalo School of Law Legal Studies Research Paper No. 2020-018, 2021 Yale Conference on Citation and the Law, Available at SSRN: https://ssrn.com/abstract=3868516
“The 2019 announcement by U.S. News that it would introduce a new law faculty ranking based on legal scholarship citation data from HeinOnline was met with concern by the legal academy. One criticism of the proposed ranking is the choice of HeinOnline as the citation data source. HeinOnline was criticized both for inaccuracies and because its citation data is limited to the legal journals it carries. This study examines the effect of the data source on metrics and rankings by using three sources of legal scholarship citation data: Google Scholar, Westlaw, and HeinOnline. It compares six years of citations to works by all tenured and tenure-track members of the top twenty-eight faculties as determined by recent citation studies by Sisk and by Heald and Sichelman. Rankings generated from the three sources using Sisk’s method, originally developed by Leiter, showed moderate to high correlation (0.77 to 0.96) with each other. Total citations and total publications for each faculty were moderately to highly correlated with rankings under this method; faculty size showed low to moderate correlation. Citations per faculty member showed very high correlation (0.98 to 0.99) across all three sources. The main differences between HeinOnline and Westlaw stemmed from the possibility of capturing citations to works outside Westlaw’s database, especially books; including citations to such outside works would improve the HeinOnline results and bring them closer to the Westlaw results. Differences between both legal databases and Google Scholar were largely the result of the inability to separate citations to multiple editions of a book and the inclusion of several highly cited articles in medical, economics, or political science journals. Normalized or scaled indicators would allow citations to these interdisciplinary works to be included without letting their much larger citation counts skew the rankings.
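The cross-source comparison described above hinges on rank correlation between faculty rankings derived from different databases. A minimal sketch of that idea, using a simplified Spearman formula (no ties) and invented per-faculty citation totals that are not from the study:

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the simplified formula (assumes no ties)."""
    def ranks(values):
        # Rank 1 = smallest value; ranks returned in original index order.
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Hypothetical citation totals for five faculties from two sources
# (illustrative numbers only, not data from the paper).
source_a = [120, 95, 80, 60, 40]
source_b = [130, 85, 90, 55, 45]  # two faculties swap ranks

rho = spearman_rho(source_a, source_b)
print(rho)  # 0.9 — a single adjacent swap still yields high correlation
```

A single pair of faculties trading places leaves the correlation at 0.9, which is why the paper can report high cross-source correlations even though individual schools move between rankings.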
Because citations per faculty member is such a strong driver of the Leiter method, a school could game the rankings by buying out or otherwise moving out low-cited or unproductive faculty, thereby reducing faculty size and increasing citations per faculty member. Metrics like the h-index, which takes only highly cited papers into account, or other composite metrics would reduce the opportunity for gaming in this manner.”
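The gaming mechanism and the h-index's resistance to it can be sketched with invented numbers (none of these figures come from the study): dropping one low-cited member inflates citations per faculty member, while a faculty-wide h-index barely notices low-cited papers.

```python
def h_index(citations):
    """Largest h such that at least h papers each have >= h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
    return h

# Hypothetical faculty: three well-cited members' papers plus one
# low-cited member's papers (illustrative only).
productive_papers = [200, 150, 120, 90, 80, 60, 50, 40]
low_cited_papers = [3, 2, 1]

full_faculty = productive_papers + low_cited_papers
per_faculty_before = sum(full_faculty) / 4      # 4 members on the roster
per_faculty_after = sum(productive_papers) / 3  # low-cited member bought out

# Citations per faculty member jumps; the pooled h-index is unchanged.
print(per_faculty_before, per_faculty_after)              # 199.0 263.33...
print(h_index(full_faculty), h_index(productive_papers))  # 8 8
```

The per-member average rises from 199 to about 263 simply by shrinking the denominator, while the h-index stays at 8 because the removed papers never counted toward it, which is the property the abstract appeals to.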