One of the most exciting technical and potentially socially transformative developments of recent years is the development of decentralized versions of formerly centralized institutions. The most obvious example is Bitcoin, which offers something like digital cash without a centralized trust-conferring party such as a central bank, enabling one to transact and store value without being subject to the whims of a third party whose incentives may not align with yours, or whose competence you may not trust.
Vast swathes of academia depend on such centralized parties: to be a successful academic, one must publish peer-reviewed papers, and responsibility for maintaining the peer review system falls to journal editors, who are centralized trust-conferring agents. And there's very good reason to think that editors aren't, in the 21st century, up to the task.
To see this, let me explain the workflow, and the problems we face, at the academic journal where I work in an editorial capacity. The workflow is simply put: authors submit papers to journals, whose editors seek reviewers to assess the papers' quality. If the reviewers like a paper, it gets published in the journal. Authors submit papers because they need them for career advancement; reviewers review because they in turn rely, or once relied, on reviews for their own advancement; editors manage the whole thing for a CV line, prestige, or money (authors and reviewers don't get paid).
The core value of the system comes from reviews. An issue of a journal is only reliable to the extent that the reviewers accurately assessed the papers it contains. And maximizing this value is the responsibility of editors.
But there are very good reasons to think that having editors assign reviewers to papers is not the best way to match papers with reviewers. For one, there’s a local knowledge problem: editors’ knowledge of authors is sharply limited. They don’t know everyone who works on a topic; they don’t know who has just reviewed five papers for other journals; they don’t know who has just gone on childcare leave or got sick. This lack of knowledge means that an editor often has to invite many people before getting someone to agree to review.
In my view, this is a big problem. Editors' ignorance increases the time it takes to assess a paper, often to an extreme extent. Authors suffer from these long waits, and so they deprioritize reviewing in turn; the system then settles into a bad convention whereby everybody takes a long time to review because everybody else does, since the system has already revealed itself to them as slow. Editors' ignorance is the bottleneck that starts this cycle.
Here’s a second problem: the current system doesn’t scale. Because more PhDs are granted each year than jobs become available, there’s a race to the bottom whereby everybody submits more to try to edge others out, and the number of submissions becomes completely unmanageable. At my journal, which is around 50 years old, the number of submissions increased by about 35% just in the last year.
So that’s two reasons to want to try to do without editors. But it seems we need editors. The reason for this is anonymity. An author and a reviewer must be mutually unknown to prevent collusion or other bad behaviours (rejecting someone you dislike; forming grudges against someone who rejects you). The editor functions as a sort of intermediary that brings authors and reviewers in communication without revealing the identity of each to either. The question then becomes: can we have anonymous matching without editorship?
I think we can. To begin to see this, note that it's often sufficient, in order to review a paper, that one have published at least one article in a good journal on the paper's topic. Let's go further and assume that this is necessary and sufficient. In fact, let's go even further and assume that it's necessary and sufficient, in order to review a paper, that a paper one has written appear in that paper's bibliography.
These might seem like somewhat extreme idealizations, and that's partly true. But they also hold in many cases, and, more importantly, relaxing them doesn't seem insuperable: we could develop some way of assessing similarity between papers, such that it's necessary and sufficient, in order to review a paper, that one have published a paper similar to one occurring in its bibliography. In the age of resources like Google Scholar and Microsoft Academic, this doesn't seem too infeasible.
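Under the strong idealization above, the eligibility rule is just a set intersection: a person may review a paper if and only if at least one of their published papers (identified, say, by DOI) appears in the paper's bibliography. A minimal sketch, with hypothetical names:

```python
def may_review(reviewer_dois: set[str], bibliography_dois: set[str]) -> bool:
    """True iff the reviewer has published at least one paper the submission cites."""
    return bool(reviewer_dois & bibliography_dois)


print(may_review({"10.1000/a1", "10.1000/b2"}, {"10.1000/b2", "10.1000/c3"}))  # True
print(may_review({"10.1000/a1"}, {"10.1000/c3"}))  # False
```

The relaxed version would replace the exact-DOI intersection with a similarity threshold between the reviewer's papers and the cited papers, but the shape of the check stays the same.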
Now here's the basic idea: each person in the system gets assigned a pseudonymous identity that encodes their publication history. Some function determines it from the DOIs of their published papers, in conjunction with a private code sent by email to the address listed on those papers, which serves to verify that only the papers' author can claim the identity.
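One naive way to realize this, sketched below with hypothetical function names, is a hash commitment: the identity is a hash of the sorted DOIs together with the private code, so the DOIs can't be read off the identity, but the holder of the code can later prove ownership. (Note that proving ownership this way reveals the DOIs; a real system would need the zero-knowledge machinery discussed below.)

```python
import hashlib


def pseudonymous_id(dois: list[str], private_code: str) -> str:
    """Commit to a publication history without revealing it."""
    material = "|".join(sorted(dois)) + "|" + private_code
    return hashlib.sha256(material.encode()).hexdigest()


def verify_claim(identity: str, dois: list[str], private_code: str) -> bool:
    # Only someone holding the private code can reproduce the identity.
    return pseudonymous_id(dois, private_code) == identity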
A message in the system consists of a paper, conceived of as a bibliography and a body text (the latter perhaps encoded so that only some people can see it), the author's pseudonymous identity, and a destination, of which there are three: For Consideration, Accept, and Reject. Anyone can compose a message and send it to For Consideration; that is how one submits a paper in this system. But only some people can move a message (i.e. a paper) from For Consideration to Accept or Reject, and to do so there must be a match between the message and the person's identity: roughly, the person's identifier (i.e. the representation of their publication history) must encode a paper that is found in the representation of the paper's bibliography in the message. This is roughly like signing a Bitcoin transaction with a digital signature determined by one's private key; that's the core analogy.

In a similar vein, there must be no way of determining, from a pseudonymous identity, the person's publication record or real-world identity, and there must be a way for all participants to verify, of a message sent to Accept or Reject, that it was sent by someone with the right credentials, again without revealing those credentials. My (tenuous!) understanding of cryptography makes me think this might be possible, perhaps using ring signatures where the relevant group is all of the potential reviewers of a paper (see this post of mine, which goes into the idea in more detail), but these are ultimately questions for someone with a different skill set than mine.
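The message flow above can be modeled as a tiny state machine. In this sketch (all names hypothetical), the credential check is done naively by exposing the reviewer's DOIs to the system; the whole point of the ring-signature idea is to make the same check pass or fail without that exposure.

```python
from dataclasses import dataclass

FOR_CONSIDERATION, ACCEPT, REJECT = "for_consideration", "accept", "reject"


@dataclass
class Message:
    bibliography: set[str]            # DOIs cited by the paper
    body: str                         # body text, perhaps encrypted
    author_id: str                    # author's pseudonymous identity
    destination: str = FOR_CONSIDERATION


def move(msg: Message, new_destination: str, reviewer_dois: set[str]) -> bool:
    """Try to move a message to Accept/Reject; return whether it succeeded."""
    if new_destination not in (ACCEPT, REJECT):
        return False
    if not (reviewer_dois & msg.bibliography):   # credential match required
        return False
    msg.destination = new_destination
    return True
```

So anyone can construct a `Message` (that's submission), but only someone whose publication history intersects the bibliography can move it onward.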
Details need to be worked out (we need a way for people to publicly claim both accepted papers and completed reviews without revealing the real people behind the pseudonyms, but this seems doable). If something like this could work, though, it would have a range of neat and desirable properties. Basically, we're extracting the valuable thing from the journal system, papers assessed by experts, and discarding the rest, most notably the knowledge-poor and time-poor editors, and enabling reviewers and papers to find each other by having their suitability for each other built into how they're represented in the system. In the current model, the bona fides of a paper are determined by the journal it appears in. But that's arguably an inaccurate representation of how they should be determined, which is by the quality of the reviewer who decided the paper ought to be published. If we could represent this information directly, yet without revealing the reviewer's real identity, we could represent the quality of a paper more accurately.
It also reduces editorial malfeasance: an editor can’t simply accept a paper without reviewing it properly, for example. For a paper to be accredited in this system, it has to be signed off on by a qualified reviewer — that is literally the only way it can happen.
It could also help remove free riders from the reviewing pool (in my experience, this is not a big problem: I think most academics are sufficiently conscientious that, if anything, they review more than they ought; but it's at least formally a place where the system can be exploited). In the current system, because reviewing is anonymous, and because there are many journals, one can make a career by publishing many papers while refusing to perform many reviews; journal editors don't share that information, so your bad behaviour won't become common knowledge. But imagine we had a way to mark how many papers (and perhaps how quickly) a given pseudonymously-identified reviewer has reviewed. Then logic would suggest, when we're looking for papers to review to discharge our refereeing duty, that we pick such an author, because we can rely on that author, based on their track record, to continue to perform the reviews needed to keep the system going. Free riders, who submit without reviewing, would struggle to find rational people willing to review their papers, and would fall out of the system.
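The track-record signal could be as simple as a score attached to each pseudonymous identity. A hypothetical sketch: score an author by the ratio of reviews completed to papers submitted, discounted by how slowly they review, so free riders (many submissions, few reviews) sink to the bottom of everyone's queue.

```python
def review_score(reviews_completed: int, papers_submitted: int,
                 median_days_to_review: float) -> float:
    """Higher is better: reviews given per paper submitted, discounted by slowness."""
    if papers_submitted == 0:
        return 0.0
    reciprocity = reviews_completed / papers_submitted
    slowness_discount = 1 + median_days_to_review / 30  # a month of delay halves the score
    return reciprocity / slowness_discount
```

The exact formula doesn't matter; what matters is that the score is computed from publicly observable pseudonymous behaviour, so rational reviewers can prefer reciprocators without anyone's real identity being revealed.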
Ask anyone working in academia and they will tell you their life would be improved by a better system of peer review. And plausibly the very progress of science itself would be improved (a reader might object that, with the rise of preprint servers, many sciences have already moved beyond peer review; fair enough, but there are still many disciplines in which this model of anonymous peer review holds sway). So it's an important problem to solve, and I think something like the decentralized model presented here, once worked out, deserves our attention as a possible solution.