In some previous work (here, here) I’ve tried to think of ways of making peer review less reliant on editors, roughly because they are centralizing sources of slowness and ignorance. The following is another possible way of doing so, that in addition has the benefit of making peer review more transparent.
Introduction: The Slowness And Opacity of Academia
It takes at most a day’s work (~7 hours) to review an academic paper, and often much less. Yet it takes upwards of 2000 elapsed hours (about three months), on average, for the system to assess one: for the paper to be submitted, initially vetted, for reviewers to be found, for them to find time to review it, for the reviews to be weighed, for the editorial team to meet, and for the verdict to be communicated. This disparity between the hours it takes an academic to review a paper and the hours it takes the system to produce a decision on it is bad for everybody, and we should try to improve it.
The livelihood of many academics and in some cases (such as medicine) the lives of normal people depend on peer review being a trustworthy system. But peer review systems are opaque: you submit the paper, hear nothing while the above process (you hope) whirs away, and eventually get an email. Nobody knows what quality the inputs to a given journal are: no one knows the quality of the reviewers, no one knows the quantity of false negatives (the rejected papers that should be accepted), and we have a tenuous sense of the false positives. This too is suboptimal.
Two problems, then: slowness, and lack of transparency. If we can fix those problems, we should. It would benefit academics proximally and humanity distally. I think we can fix them. I present a way to do so, which I call Open Inquiry, mainly to have a name to refer back to.
The Basic Idea
The basic idea is to have a given academic community as a whole, rather than a small coterie of editors, be responsible for finding reviewers and vouching for their credibility, and thus for the running of the community’s journals. The motivation is, basically, that many hands make light work, and that they decrease the likelihood of certain malign behaviours that can arise when power is concentrated in too few hands.
Hopefully, this should be intuitively obvious: if you present a paper to the whole academic community, it will be better able to suggest available and qualified referees for it than if, as under current circumstances, you present it to an editor or even a team of editors that seldom number more than a couple of dozen.
But it should be equally obvious why we don’t do this: peer review places a number of anonymity constraints on authors, reviewers, and the academic community. The reason we can’t bring a paper to the attention of the whole community is that authors have the right to privacy concerning when and to whom they submit their papers.
In the same vein, an easy way to make reviews transparent would be to have reviewers simply sign them. But again that butts up against the constraints that neither author nor the community as a whole be able to know who reviewed which paper.
That is to say, the following two claims seem prima facie plausible:
- Opening peer review would lead to quicker and more transparent outcomes
- Opening peer review is made difficult by conventions to do with anonymity.
Faced with this, there are some obvious options. The traditional system in effect goes all in on the second bullet, foregoing the benefits offered by the first. Alternative systems like Open Peer Review go for the first and forego the second.
Open Inquiry seeks to maintain both, in only very slightly etiolated forms. We won’t have full transparency, and not quite full anonymity, but we will have plenty of transparency and plenty of anonymity.
Some Pretty Pictures
Take a look at the above diagram. It is meant to represent a centralized peer review system. The author submits to the editor, whose knowledge of possible reviewers is indicated with the hatched line. It perhaps underestimates the extent of that knowledge (personal connections, a paper’s bibliography, and Google Scholar can go a long way), but it is undeniable that, for almost any paper, almost any editor won’t know the whole available field.
Available is important here. People can be (un)available in hard-to-tell ways. Maybe they’re a just-finished PhD without any publications on the topic yet, or someone who formerly specialized in the topic before switching fields. Or maybe they’re unavailable because they’ve just reviewed the paper, or because they’re sick, or minding the children, or up for tenure very shortly, or …
None of this is the sort of knowledge editors are typically privy to. So editors spend a lot of time sending invitations to people who are, unfortunately and unbeknownst to them, unavailable, while overlooking people who are available.
By contrast, compare:
Here, each dashed circle represents one person’s localized, particular knowledge. (Ignore questions about whether the depicted knowledge relation is symmetric, and assume each circle pertains to one and only one person’s knowledge; so read the leftmost dashed circle as saying, of the leftmost smaller circle, that it knows the three other circles in its dash are available, but not vice versa.) Most people don’t know any possible reviewers, some know a couple, some, notably, know themselves to be available, and a few very well-connected people know many.
If something like this is even roughly an accurate depiction of the knowledge of a community, then if we could pool that knowledge, even assuming some unavailability, the greater number of people to ask could considerably shorten the time it takes to find a reviewer.
Moreover, and this is also important, on this model the accumulation of a list of potential reviewers goes on in parallel rather than serially. By this I mean that, as should be familiar if you’re an editor, the marginal cost of finding reviewers increases. Typically a couple of names obviously suggest themselves but, if they say no, finding a third, fourth, or fifth possible reviewer is often the product of many hours recursing through Google Scholar and PhilPapers (or whatever your discipline uses) citation lists. By contrast, if a large number of people each quickly mentioned people they knew of, a large pool of names would be gathered without a substantial draw on any one person’s time; and since finding a reviewer is a big bottleneck in the system, the time to decision could be much quicker.
Some New Primitives
All that’s aspirational: hopefully you’re now on board with the idea that possibly, pooling the community’s knowledge of reviewers would lead to quicker decisions. The question is: how do we do that? We will build up to the idea by considering a variety of systems that don’t work.
So, first, consider:
Scenario 1. A paper, suitably anonymised in the standard way, comes into a journal. The editor uploads it to a public site. People, using their real names, publicly post the names of possible reviewers. The editor selects one reviewer (I will assume throughout, for simplicity, that only one reviewer is used per submission; I will also, less substantially, assume that all papers are single author. Nothing turns on the latter and not much on the former). They then invite them and, if the person agrees, update the site and credit the suggestor. Once the reviewer has completed the review, they post on social media (or their website, or some other place on the web that is publicly verifiable as theirs) that they have done so, and the maintainer of the site links to the post as verification.
This could work well as a system. But note its failings:
- In posting a paper publicly, the community will come to know that the author of the paper submitted a paper to the journal, because at least some people will know that the paper is written by the author (from conferences, department working seminars, friendship, etc.) This violates the anonymity constraint according to which one can submit a paper to a journal without this becoming public knowledge.
- Because of this, it is possible that the selected reviewer will come to know the author’s identity.
- The author will know the reviewer’s identity
- The community will know the reviewer’s identity.
Take the author-related problems first. The problem, basically, is that there is a scrutable one-one relation between papers and authors: a paper uniquely determines an author, and the fact that a given paper determines a given author will often be known by people in the community who have access to the paper.
But now think: why did we post the paper publicly? Well, to find reviewers. But then we can ask: do we need to post the whole paper in order to do so?
No, surely not. That’s overkill. So, you might think, we could post the abstracts and keywords. An abstract and keywords won’t scrutably determine an author.
Except they probably will: googling it will turn up a preprint, or a conference booklet, or a PhD abstract, that uses similar enough language to let one determine the author’s identity. Anything written by the author will bear enough traces of them to be found by a dedicated searcher, at least often enough that this won’t be a reliable solution.
So, and I’m going to italicise this so you know I think it’s important, we want something that is informative enough to let the community suggest reviewers, but uninformative enough as to not lead people back to the author.
But that’s achievable. For example, we could have members of the editorial team write summaries in their own words. Let me use an example from a paper of my own.
“I subject the semantic claims of stage theory to scrutiny and show that it’s unclear how to make them come out true, for a simple and deep reason: the stage theorist needs tensed elements to semantically modify the denotations of referring expressions, to enable us to talk about past and future stages. But in the syntax of natural language, expressions carrying tense modify verbs and adjectives, not referring expressions. This mismatch between what the stage theorist needs and what language provides makes it hard to see how the stage theorist’s semantic claims could be true.”
I gave this paper at a couple of conferences and although reality conspired to mean that googling the above doesn’t lead one to those conferences, it easily could have happened. But if this is reworded:
“This paper is at the intersection of metaphysics, specifically philosophy of persistence, and formal semantics. It looks at the stage theory defended by Katherine Hawley and Ted Sider and presents objections by arguing that it can’t account for some linguistic data. It is suitable for anyone with a background in metaphysics and natural language or logical semantics.”
Now, it’s possible that this is too informative, in that it will reliably lead back to me; or too uninformative, in that it will not give people enough information to go on. But prima facie, it seems like it could walk the fine line between giving enough information to enable people to suggest reviewers and not divulging the author’s identity. And even if this is wrong, there’s a lesson: there might be a fine line here that would enable us to get both community participation and anonymity (note, for example, that a paper’s bibliography, perhaps with an even more terse summary, might be able to play this role if editor-written summaries couldn’t; or again a word-cloud generated from the paper but not written by the author might also work).
So that’s suggestion 1:
- Paper summaries (/bibliographies/word-clouds) instead of papers (/abstracts).
Scenario 2. A paper, suitably anonymised in the standard way, comes into a journal. The editor summarises it and uploads the summary to a public site. People, using their real names, publicly post the names of possible reviewers. The editor selects one, invites them, and if the person agrees, updates the site with the reviewer’s name and credits the suggestor. Once the reviewer has completed the review, they post on social media that they have done so, and the maintainer of the site links to the post as verification.
Again, the problem here is obvious: both the author and the community will know the referee’s identity. But, again, we can walk the fine line — try to think of a way to release enough information about the reviewer to enable the community to assess their bona fides, without giving so much that anyone can definitively identify them.
But, again, that’s achievable. I propose that when a community member suggests a reviewer, they do two things: they privately communicate the name to the editor, and publicly post an obfuscated CV of the suggested reviewer.
Obfuscated CVs are the second primitive idea, so let me take a minute to explain them. Whether one is qualified to review a paper typically depends on one’s track record of publishing papers. For example, for a paper on a mainstream topic in the journal for which I work (a generalist philosophy journal), a CV consisting of at least three papers from (other generalist philosophy) journals like Synthese, Philosophical Studies, etc. is typically necessary.
Abstractly, we can think of a CV as just a list of papers, represented, say, as <venue, year> pairs. Moreover, in many disciplines there is a hierarchy in which some venues are perceived as higher status than others. This suggests the following strategy: given the CV of a potential reviewer, obfuscate it by (obviously) removing the titles of papers, changing the years of publication, and arbitrarily swapping each venue for a similarly ranked one. For example, given a CV (which, in the case of philosophy, could be taken from philpapers.org and parsed as a JSON file, as in the below example):

```json
[
  { "venue": "Philosophical Studies" },
  { "venue": "Mind" }
]
```

We could, assuming that Australasian Journal of Philosophy and Philosophical Studies are the same status, and likewise Mind and Philosophical Review, obfuscate the CV like so:

```json
[
  { "venue": "Australasian Journal of Philosophy" },
  { "venue": "Philosophical Review" }
]
```
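To make the strategy concrete, here is a minimal Python sketch of such an obfuscation function. The tier groupings, field names, and jitter amount are illustrative assumptions of mine, not a real ranking or the PhilPapers schema:

```python
import random

# Illustrative venue tiers: each inner list groups venues assumed to be of
# roughly equal status. These groupings are examples, not a real ranking.
VENUE_TIERS = [
    ["Philosophical Studies", "Australasian Journal of Philosophy"],
    ["Mind", "Philosophical Review"],
]

def obfuscate_cv(cv, year_jitter=2, rng=random):
    """Obfuscate a CV given as a list of {"venue", "year"} entries:
    titles are simply never included, each venue is swapped for another
    in the same tier, and each year is perturbed by up to year_jitter."""
    obfuscated = []
    for entry in cv:
        # Find the tier containing this venue; fall back to the venue itself.
        tier = next((t for t in VENUE_TIERS if entry["venue"] in t),
                    [entry["venue"]])
        alternatives = [v for v in tier if v != entry["venue"]]
        obfuscated.append({
            "venue": rng.choice(alternatives) if alternatives else entry["venue"],
            "year": entry["year"] + rng.randint(-year_jitter, year_jitter),
        })
    return obfuscated
```

Run on the example CV above, this maps Philosophical Studies to Australasian Journal of Philosophy and Mind to Philosophical Review, with each year shifted by a couple in either direction.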
The key point is that we could attach this obfuscated CV to the public post if the owner of the CV agrees to review the paper. This would allow an onlooker to verify, roughly, the quality of the reviewer for the paper in question without giving away their identity.
At this point, it might help to picture the idea:
This is a screenshot from the envisaged publicly available site. The summary and keywords are written by the editor, the date automatically added by the software.
Anyone can — pseudonymously, in a way to be explained elsewhere — recommend reviewers. In so doing, they publicly post the obfuscated CV of their suggestion. The above screenshot shows two recommendations, and thus two obfuscated CVs. One is in red, which indicates that the editor invited the owner of that CV to review the paper and they agreed. This really represents the bundling of three different stages in the process: first, the editor writes and posts the summary; then people make recommendations; and then, having invited people and secured a reviewer, the editor updates the site.
That said, hopefully you can see the weakness of the idea as it stands. Anyone can suggest reviewers. So, most notably, the editor themselves can suggest reviewers. What that means is that they can pick an arbitrary person with a decent CV, suggest them, mark the suggestion as accepted, review the paper themselves or otherwise mistreat it, and it will appear as if it was properly reviewed. One of the things we want to move away from is malfeasance owing to an excess of centralized power; so far, what we’ve presented doesn’t achieve this.
Clearly such a system would be a failure; no one should trust its claims. We need an independent way to verify that the reviewer reviewed the paper, and one, moreover, that the editor — the most powerful person in this set-up — can’t game. Well, of course, we already have one:
Scenario 3. A paper, suitably anonymised in the standard way, comes into a journal. The editor summarises it and uploads the summary to a public site. People, using their real names, publicly post the obfuscated CVs of possible reviewers. The editor selects one, invites them, and if the person agrees, updates the site with the selected OCV and credits the suggestor. Once the reviewer has completed the review, they post on social media that they have done so, and the maintainer of the site links to the post as verification.
But also of course, this is no good. In the above scenario, it becomes public knowledge that a referee reviewed a given paper, letting the author have a (reasonably — more on that below) good idea of who might have reviewed their paper and an onlooker (in a more constrained set of cases) also a way of knowing this. We need a way for the reviewer to prove that they’ve reviewed the paper without giving away their identity.
Here’s a suggestion. In reviewing a paper, one undertakes to have one’s identity verified by a certain member of the community, and to verify the identity of a certain other member of the community. You don’t know who identifies you, and the person who identifies you doesn’t know you.
In particular, the person who verifies you is the reviewer of the previous paper in the list, and the person you verify is the reviewer of the next paper. When you submit a review, the editor sends you a code. That code has already been sent to the reviewer of the previous paper. You then post the code on your social media or website, take the URL at which you posted it, encrypt that URL using the code shared only by you and the other (using the code as, say, a one-time pad), and post the result in the submission. The Verifier (and only them), seeing this, has the option to mark the identity as verified, and the submission as thereby finished.
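The encryption step can be sketched with a simple XOR cipher, using the shared code as the pad. This is an illustration of the idea, not a production scheme (a true one-time pad needs a code at least as long as the message and never reused, which cycling the code here does not guarantee), and all names and values are made up:

```python
from itertools import cycle

def otp_xor(data: bytes, code: bytes) -> bytes:
    # XOR each byte of data with the (cycled) code. XOR is its own
    # inverse, so applying the function twice recovers the original.
    return bytes(b ^ k for b, k in zip(data, cycle(code)))

# Reviewer side: post the code at a URL you control, then submit the
# encrypted URL along with the review.
code = b"code-from-the-editor"  # known to editor, reviewer, and Verifier
url = b"https://example.org/my-verification-post"
ciphertext = otp_xor(url, code)

# Verifier side: decrypt with the same code, visit the URL, and check
# the code really is posted there before marking the identity verified.
assert otp_xor(ciphertext, code) == url
```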
Note that this doesn’t strictly prove that the person reviewed the paper, but it does something very close. It shows that the owner of the code (known only to the Verifier, the reviewer, and the Editor) has access to the web presence of the reviewer, giving a pretty good assurance that that person posted the request for verification. A completed part of the submission chain, truncated for space and relevance, would look as follows:
The ⇙ means that the two submissions are connected: the reviewer of the top one has verified the identity of the reviewer of the lower one. This is meant to reflect that we can chain together individual identity verifications to form a system that can, overall, be trusted to have involved reviewers who possess the CVs the system says they do (it might help, either for SEO purposes or to help some people conceptualize what’s suggested here, that ‘chain’ is meant to recall ‘blockchain’; but if you’re an anti-blockchain person, there’s nothing here that should concern you — the analogy is in most places loose). Moreover, the code would be completely open source, so anyone could see how it works; indeed, anyone could have access to any piece of (unencrypted) data in the system (including backend databases, for example, if the architecture eventually required them), with the sole exception of the name of the reviewer, which goes straight to the editor.
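The chaining itself can be pictured as a simple data structure. This is a hypothetical sketch of the bookkeeping, not the prototype described elsewhere in the post:

```python
from dataclasses import dataclass

@dataclass
class Submission:
    summary: str           # the editor-written summary
    obfuscated_cv: list    # the accepted reviewer's obfuscated CV
    verified_by_previous: bool = False

class ReviewChain:
    """Each submission's reviewer verifies the identity of the next
    submission's reviewer, so trust in the whole list reduces to trust
    in each local link."""

    def __init__(self):
        self.submissions = []

    def add(self, submission):
        self.submissions.append(submission)

    def verify_link(self, index):
        # In the full system only the reviewer of submission index - 1,
        # who alone shares the code, could trigger this.
        self.submissions[index].verified_by_previous = True

    def fully_verified(self):
        # The first submission has no predecessor, so start from 1.
        return all(s.verified_by_previous for s in self.submissions[1:])
```

The design choice worth noting is that `fully_verified` depends only on local links: no single party, editor included, can vouch for the whole chain at once.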
A Step Back
That sounds complicated! It’s probably an expositional flaw on my part that I haven’t explained it well enough, but hopefully if you devote some time to it you should see why each part of the system is there, and what work it does. The first key idea is that we replace a strict notion of identity and identification (of authors via the uniqueness of papers, of reviewers via the uniqueness of CVs) with less strict notions that permit anonymity while still enabling much of the running of the journal to be publicly verifiable. The second key idea is that by chaining peer-to-peer identity verifications we can have faith that the claims made by the journal about its reviewers are accurate.
Before going on to round up some objections in a sort of (very incomplete — help me complete it by objecting!) FAQ, I want to briefly consider one aspect I haven’t drawn much attention to so far: the role of the people who recommend reviewers, because they are an absolutely crucial part of the whole thing. I suggest that they be accredited in proportion to the number of accepted suggestions they make, so that if, for example, someone makes ten accepted suggestions (they suggest ten people who accept invitations to review), they receive a community editor title in the journal, putting their name on the website and in the print edition. One desirable property is that the community editors be unknown to the editor: this will prevent biases, positive or negative, from creeping in. I won’t spell out how this will be done, although I have the ideas worked out, because there are already too many novel ideas in this short piece.
You might object: this is a valueless trinket. If people are doing work, they should be rewarded with something actually desirable (money, teaching buyouts, an attaboy from our bureaucratic overlords, etc.) rather than made-up titles. Well, yes. But in part that (how we value community editing) is for the community itself to decide. Some people might desire to collect those trinkets; some departments, recognizing the need for changing how we produce academic knowledge, might come to value them. And moreover, and this is worth emphasising, there are no barriers to entry. Anybody can enter; anybody can help knowledge production, and anybody who does so is helping shift the centre of power away from editors into the community. This is something of inherent value (of very great inherent value, in my opinion), even if the qualifications are only of dubious instrumental value.
Bad Idea Because…
…It will lead to collusion
You might think that opening up the system, especially in this anonymity-preserving way, will open up the possibility of collusion. Anyone, after all, can suggest reviewers for any paper. So, an author can suggest a reviewer for their own paper. That’s surely undesirable! Someone can suggest their colleague/supervisor/husband, said suggestee, on pain of awkward faculty meetings/supervisions/dinner tables, will accept the paper.
It’s true that the system is somewhat vulnerable to attacks of this sort, but only to quite a small extent. To see this, note the obvious fact that journals only invite suitably qualified people. So unless your would-be colluder has a CV that would cause them to be invited anyway, their being suggested in this system won’t lead to their being invited. Moreover, most journals don’t let colleagues/supervis[or|ee]s/[husbands|wives] review each c/s/[h|w]’s papers. So even if the colluder has the CV, if there’s any evident connection, they won’t get invited. It remains true that this attack is possible: A writes a paper, telling their friend B to suggest either B himself (if B is qualified) or big-softie C who accepts everything (or A bypasses B and just suggests C themselves); B (or A) does so, and A’s paper is accepted. But note that this requires the existence of big-softies or of shameless pairs A & B, and also that the journal select the paper in question for community review. Moreover, granting the existence of shameless pairs A & B, this attack surface already exists: A cites B heavily in the paper; an editor will be motivated to invite B; B accepts. If this could happen in this system, that’s probably evidence it already happens in the current one. So it isn’t a new reason to worry about this system.
…It will lead to more knowledge about who reviewed which paper
Consider the following: paper p gets submitted. A reviews it, B verifies A’s identity, the paper gets accepted and appears on the site. B compares it with the list of recent summaries, works out that A reviewed it, and so knows A reviewed it. This is a breach of the spirit of anonymity, but only a small one. First, note that most papers in most journals are rejected, so the cases in which the paper is accepted are few. Moreover, papers often cluster around hot topics, so it’s very plausible that there will often be several papers with similar summaries, and even should one be accepted, the others — again, just because most are rejected — will probably be rejected. So the inference from paper p was accepted to the paper corresponding to posted summary s was accepted is a very defeasible one. Moreover, the current system presents these breaches of anonymity: if you review a paper and it gets accepted, you learn the author’s identity.
What I’m talking about is of course different, but it’s hard to see that this particular bit of knowledge will be especially bad for a Verifier to have — they can’t make much mischief with it (it would be different if the Verifier could learn the identity of papers and authors that were rejected on the basis of a review from a reviewer whose identity they verified, but that won’t happen.)
The above proposal has the virtue of being in some respects conservative, and relatively easily implementable. Most importantly, the basic structure of academic publishing, whereby only editors and reviewers know of a paper that it has been submitted and who exactly has reviewed it, is preserved. And preliminary work has already been done in designing the new system that would be required. We can consider the following workflow, along with the status of the technology needed to implement it:
1. Paper is submitted: Standard editorial manager software
2. Paper is summarised: Nothing required
3. Paper is posted to a public site: A host capable of serving dynamic content from a backend database, and of being updated. Freely available services such as glitch.me allow this, and a prototype for this function exists
4. People pseudonymously suggest reviewers, privately sending the reviewer’s name to the editor and publicly posting their obfuscated CV: A function that takes a reviewer’s CV and turns it into an obfuscated CV has been written, and can be easily incorporated into the glitch prototype. Sending the name to the editor is as yet unsolved. Keeping track of the pseudonymous identity of the suggestor is more or less in place
5. The editor invites reviewers, and updates the site when someone says yes, thereby crediting the suggestor: Standard editorial manager software does the first bit; the second, the capacity to update the site, already exists in prototype; and a system for pseudonymously attributing credit exists
6. The referee report is submitted: Standard editorial manager software
7. The identity verification sketched above is carried out: This is not yet done, even in prototype
8. Community editors claim credit: Can be done by email; not currently implemented
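The implementation status can be restated compactly. This sketch just re-encodes the list above; the step names are my own shorthand, and step 4’s partially unsolved name-transmission is simplified to “prototype”:

```python
from enum import Enum

class Status(Enum):
    EXISTING = "covered by existing software or practice"
    PROTOTYPE = "exists in prototype"
    TODO = "not yet implemented"

# The eight steps above, with the implementation status of each.
WORKFLOW = [
    ("submit paper", Status.EXISTING),
    ("summarise paper", Status.EXISTING),           # nothing required
    ("post summary to public site", Status.PROTOTYPE),
    ("suggest reviewers pseudonymously", Status.PROTOTYPE),
    ("invite reviewer and credit suggestor", Status.PROTOTYPE),
    ("submit referee report", Status.EXISTING),
    ("verify reviewer identity", Status.TODO),
    ("credit community editors", Status.EXISTING),  # can be done by email
]

remaining = [step for step, status in WORKFLOW if status is Status.TODO]
# Only the identity-verification step still needs to be built.
```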
That is to say, the above idea is well within the realm of technical feasibility. The only outstanding work is step 7, but I don’t anticipate great difficulties.