It's The End Of Citation As We Know It & I Feel Fine
from the the-ai-can-free-us dept
Legal scholarship sucks. It’s interminably long. It’s relentlessly boring. And it’s confusingly esoteric. But the worst thing about legal scholarship is the footnotes. Every sentence gets one.[1] Banal statement of historical fact? Footnote. Recitation of hornbook law? Footnote. General observation about scholarly consensus? Footnote. Original observation? Footnote as well, I guess.
It’s a mess. In theory, legal scholarship should be free as a bird. After all, it’s one of the only academic disciplines to have avoided peer review. But in practice, it’s every bit as formalistic as any other academic discipline, just in a slightly different way. You can check out of Hotel Academia, but you can’t leave.
Most academic disciplines use peer review to evaluate the quality of articles submitted for publication. In a nutshell, anonymous scholars working in the same area read the article and decide whether it’s good enough to publish. Sounds great, except for the fact that the people reviewing an article have a slew of perverse incentives. After all, what if the article makes arguments you dislike? Even worse, what if it criticizes you? And if you are going to recommend publication, why not insist on citations to your own work? After all, it’s obviously relevant and important.
But the problems with peer review run even deeper. For better or worse, it does a pretty good job of ensuring that articles don’t jump the shark and that they conform to the conventional wisdom of the discipline. Of course, conformity can be a virtue. But it can also help camouflage flaws. Peer review is good at catching outliers, but not so good at catching liars. As documented by websites like Retraction Watch, plenty of scholars have sailed through the peer review process by just fabricating data to support appealing conclusions. Diederik Stapel, eat your heart out!
Anyway, legal scholarship is an outlier, because there’s no peer review. Of course, it still has gatekeepers. But unusually, the people deciding which articles to publish are students, not professors. Why? Historical accident. Law was a profession long before it became an academic discipline, and law schools are a relatively recent invention. Law students invented the law review in the late 19th century, and legal scholars just ran with it.
Asking law students to evaluate the quality of legal scholarship and decide what to publish isn’t ideal. They don’t know anything about legal scholarship. They don’t even know all that much about the law yet. But they aren’t stupid! After all, they’re in law school. So they rely on heuristics to help them decide what to publish. One important heuristic is prestige. The more impressive the author’s credentials, the more promising the article. Or at least, chasing prestige is always a safe choice, a lesson many practicing lawyers have learned as well.
Another key heuristic is footnotes. Indeed, footnotes are almost the raison d’être of legal scholarship. An article with no footnotes is a non-starter. An article with only a few footnotes is suspect. But an article with a whole slew of footnotes is enticing, especially if they’re already properly Bluebooked. After all, much of the labor of the law review editor is checking footnotes, correcting footnotes, adding footnotes, and adding to footnotes. So many footnotes!
Most law review articles have hundreds of footnotes. Indeed, the footnotes often overwhelm the text. It’s not uncommon for law review articles to have entire pages that consist of nothing but a footnote.
It’s a struggle. Footnotes can be immensely helpful. They bolster the author’s credibility by signaling expertise and point readers to useful sources of additional information. What’s more, they implicitly endorse the scholarship they cite and elevate the profile of its author. Every citation matters, every citation is good. But how to know what to cite? And even more vexing, how to know when a citation is missing? So much scholarship gets published, it’s impossible to read it all, let alone remember what you’ve read. It’s easy to miss or forget something relevant and important. Legal scholars tend to cite anything that comes to mind and hope for the best.
There’s gotta be a better way. Thankfully, in 2020, Rob Anderson and Trent Wenzel created ScholarSift, a computer program that uses machine learning to analyze legal scholarship and identify the most relevant articles. Anderson is a law professor at Pepperdine University Caruso School of Law and Wenzel is a software developer. They teamed up to produce a platform intended to make legal scholarship more efficient. Essentially, ScholarSift tells authors which articles they should be citing, and tells editors whether an article is novel.
It works really well. As far as I can tell, ScholarSift is kind of like Turnitin in reverse. It compares the text of a law review article to a huge database of law review articles and tells you which ones are similar. Unsurprisingly, it turns out that machine learning is really good at identifying relevant scholarship. And ScholarSift seems to do a better job at identifying relevant scholarship than pricey legacy platforms like Westlaw and Lexis.
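ScholarSift’s internals aren’t public, so purely as an illustrative sketch: the basic move described above, comparing a draft against a corpus of articles and ranking the corpus by similarity, might look something like this using off-the-shelf TF-IDF and cosine similarity. The corpus, titles, and snippets here are made-up placeholders, and this is my assumption about the general technique, not ScholarSift’s actual method:

```python
# Illustrative sketch only: NOT ScholarSift's actual method, which isn't public.
# TF-IDF plus cosine similarity is one simple way to implement "Turnitin in
# reverse": rank indexed articles by textual similarity to a draft.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-in corpus; a real index would hold thousands of articles.
corpus = {
    "Kerr, A Theory of Law": "law review editors demand a citation for every claim ...",
    "Article on Fair Use": "transformative use weighs in favor of fair use ...",
    "Article on Deodands": "the deodand forfeited the offending object to the crown ...",
}

draft = "editors demand that authors support every claim with a citation ..."

titles = list(corpus)
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([*corpus.values(), draft])

# The last row is the draft; score it against every indexed article.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for title, score in sorted(zip(titles, scores), key=lambda pair: -pair[1]):
    print(f"{score:.3f}  {title}")
```

In practice, anything along these lines would need a full-text corpus and probably document embeddings rather than raw term weights, but the ranking-by-similarity core would be the same.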
One of the many cool things about ScholarSift is its potential to make legal scholarship more equitable. In legal scholarship, as everywhere, fame begets fame. All too often, fame means the usual suspects get all the attention, and it’s a struggle for marginalized scholars to get the attention they deserve. Unlike other kinds of machine learning programs, which seem almost designed to reinforce unfortunate prejudices, ScholarSift seems to do the opposite, highlighting authors who might otherwise be overlooked. That’s important and valuable. I think Anderson and Wenzel are on to something, and I agree that ScholarSift could improve citation practices in legal scholarship.
But I also wonder whether the implications of ScholarSift are even more radical than they imagine. The primary point of footnotes is to identify relevant sources that readers will find helpful. That’s great, in theory. In practice, people often would rather just read the article and ignore the sources, which can become distracting, even overwhelming. Anderson and Wenzel argue that ScholarSift can tell authors which articles to cite. I wonder whether it couldn’t also make citations pointless. After all, readers can use ScholarSift just as well as authors can.
Maybe ScholarSift could free legal scholarship from the burden of oppressive footnotes? Why bother including a litany of relevant sources when a computer program can generate one automatically? Maybe legal scholarship could adopt a new norm in which authors only cite works a computer wouldn’t flag as relevant? Apparently, that’s still possible. I recently published an essay titled “Deodand,” and I’m told that ScholarSift generated no suggestions about what it should cite. But I still thought of some. The citation is dead; long live the citation.
Brian L. Frye is Spears-Gilbert Professor of Law at the University of Kentucky College of Law
[1] Orin S. Kerr, A Theory of Law, 16 Green Bag 2d 111 (2012) (“It is a common practice among law review editors to demand that authors support every claim with a citation. These demands can cause major headaches for legal scholars. Some claims are so obvious or obscure that they have not been made before. Other claims are made up or false, making them more difficult to support using references to the existing literature.”).
Filed Under: ai, citations, legal scholarship