Options for improving peer review
(This is pitched at, and uses examples from, academic philosophy, but should generalize to at least some other disciplines. Comments are very welcome.)
I present a simple, easy to implement, and relatively conservative modification of the current system of peer review that promises to enable researchers to hedge risk in the very unpredictable environment of academic publishing, to let reviewers have more control over when they conduct reviews, and to reduce the average time to decision. The proposal thus offers benefits to authors, reviewers, and editors.
Everybody hates peer review, but many rely on it. Heedful of the latter fact, I want to present a mechanism that mitigates the former.
To do so, let’s think about some of the problems of peer review. I think that a lot of them can be summarized in one word: ignorance. The current system aggregates the knowledge dispersed among a given academic community terribly. For example:
Authors considering a journal don’t know how many papers on their topic the journal has reviewed, and thus don’t know the size of its pool of potential referees for their paper, and thus don’t know the expected time it will take to find a referee.
Reviewers are obliged to review papers because they are obliged to have papers reviewed, because they are obliged to publish papers. But reviewers don’t know when they’ll be required to perform this obligation — when they’ll be asked to review a paper. And while they can always say no, if they knew in advance when they’d be called upon to review, they could manage their schedule better.
Editors don’t know much. For a given paper, they don’t know how many times it’s been reviewed before; they don’t know everybody who works in the field, so they don’t know the pool of potential reviewers; of the limited pool they know of through either general knowledge or research they don’t know who has reviewed it before, who has just taken on two other reviewing gigs and is at capacity, who has just had a kid or got sick. Of those available in the limited pool they don’t know how good or how quick the reviewers are.
I think a useful analogy for thinking about peer review — although I won’t go into much detail in this short piece — is as an economy, and in particular as a centrally-planned economy. And I think considered as such it falls victim to the famous Hayekian critique of such economies. The central planners are the editors, and the commodity they distribute is reviews.
(The review is the fundamental commodity in the peer-review economy, around which any theorizing in this vein ought to turn, because reviews are the raw material of published papers that makers (and indeed editors) of papers fundamentally lack and need. The paper, which we might think of as the more obvious bearer of value, derives its value from reviews of it, since it’s only once we add positive reviews that a manuscript worthless (to search committees, REF panellists, etc.) becomes a career-advancingly valuable peer-reviewed publication¹.)
And the problem is that the central planners lack the particular knowledge, of reviewers and their capacities, that could enable a smooth planned review economy. Hayek’s solution, of course, was that we should have market economies in which people are incentivised to reveal their local knowledge by the price mechanism, which is a decentralized means of distributing goods that reflects how valuable they are to a person at a time. Using this idea, roughly, I will present a first attempt at moving from a planned to a market economy of peer review, a decentralized system that incentivizes reviewers, the key makers of value, to reveal their knowledge.
My idea is simple to state. A paper comes into a journal. The editor posts its keywords to a publicly available website; this doesn’t compromise anonymity. Potential (suitably qualified — more on this below) reviewers who have the capacity to review it announce themselves to the editorial system by emailing their name and the paper ID to the editors’ address. If the reviewer is suitable, they are assigned the paper.
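The workflow just described can be sketched in a few lines of code. This is a minimal illustration only, with hypothetical names and data structures; an actual editorial system would differ:

```python
from dataclasses import dataclass

@dataclass
class Submission:
    paper_id: str
    keywords: list  # posted publicly; never the title or author

@dataclass
class Option:
    holder: str   # the reviewer who earned the option
    journal: str

class OptionsBoard:
    """Hypothetical sketch of the public keyword board and assignment step."""

    def __init__(self):
        self.open = {}      # paper_id -> Submission awaiting a reviewer
        self.options = []   # options issued in exchange for reviews

    def post(self, sub):
        # Editor posts only the keywords of an incoming paper.
        self.open[sub.paper_id] = sub

    def announce(self, reviewer, paper_id, is_qualified):
        # A volunteer emails their name and the paper ID; the editor
        # still vets them against the usual qualification criteria.
        if paper_id in self.open and is_qualified(reviewer):
            del self.open[paper_id]
            opt = Option(holder=reviewer, journal="Y")
            self.options.append(opt)
            return opt  # review assigned; option issued in exchange
        return None
```

The point of the sketch is how little the volunteer reveals (a name and a paper ID) and how little the editor must do (a single qualification check) compared with hours of reviewer research.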
It takes, I would say, on average 1.5 hours to find a reviewer for a paper — sometimes it’s much quicker, but often it requires hours in front of PhilPapers and hours recursing through citation lists on Google Scholar. Then it requires visits to personal websites, assessments of how likely the reviewer is to have already reviewed the paper, and in general to accept it, and the sending of invitations and often reminder emails. Multiply this by several hundred papers per year, and the time adds up.
By contrast, it takes a qualified reviewer on the order of minutes to determine whether the paper might be suitable for them; whether they have the free time to review it promptly; (possibly) whether they have reviewed it before; and to send the email announcing themselves.
Not only that. On this way of doing things, the reviewer gets to decide when they review. Academics who receive reviews are duty bound, I assume, to provide them: we review others’ papers because others review ours. But in the current system, you aren’t allowed to decide when to perform this duty. Instead, you just get invitations whenever, perhaps at less than ideal times. By contrast, in this envisaged alternative, you can choose to review when you wish.
So maybe this sounds good — the reviewer announces themselves, and the paper goes under review that day, rather than after possibly weeks of searching. But still, you might think, with your realist hat on: is anyone ever actually going to volunteer to review?
That’s where the second main part of my proposal kicks in. If you announce yourself and are suitable, you have saved the editor 1.5 hours research time. But here is something they can do with that time: find potential reviewers for a paper of yours should you choose to submit it to the journal in question. You conditionally submit it, giving the editor a chance to find a reviewer while it perhaps gets assessed at other journals. That means that if you do choose to submit it, from literally the day it arrives it gets sent out to potential reviewers, cutting weeks off the time to decision.
Here’s a way to think about this. In announcing yourself and being assigned a review, the journal offers you a review option.² This is a token giving you the option to submit your paper at some time in the immediate future with the assurance that potential reviewers have already been researched for it, and thus with the assurance that the expected time to decision, should you choose to submit to the journal in question, will be much less than the average across journals in general and even this one in particular.
This is potentially very valuable. Think about this all too realistic scenario. It’s February, and you’ve just finished your first paper of the year. You gave it at conferences in the holidays, worked on it in snatches during the busy start of term, and it’s now in good enough shape to be sent out. You’re interested mainly in journal A, secondarily in B. The journal in question (let’s call it Y) is maybe fourth on your list.
But you’re worried. You’ve heard some bad things about A. It has its good reputation for a reason, and it has been deluged with submissions in the last couple of years, and you’ve heard some horror stories — nine month and even longer waits.
This matters. Your annual review is 15 December. It would be great if you could present this paper as part of your year’s research accomplishments. But what if you submit to A, draw the short straw, and wait nine months? Then it’s November when you submit to B, and there’s no way B will have (good) news before your meeting. But for want of anything better to do, you send it off to A and hope and pray. This is the sort of decision many make all the time, and it causes a lot of stress. Anything that could alleviate it should be of great interest to us.
(Throughout, I assume each paper gets one reviewer at each journal, just for simplicity.)
Not only that, but because you’ve just finished a project, it’s a great time to do some reviewing. If you set yourself the target for reviewing as many papers as you submit, it would be great to immediately offset the paper you’re about to submit by reviewing somebody else’s before you jump into your next project.
So you surf to the site, and see some keywords that, by chance, are the area in which you’ve published. You announce yourself and are assigned both the review and the option.
Fast forward to October. A rejection comes from A. Although B is pretty good, you’re still concerned that there isn’t time: even assuming B returns a review in around three months, you might just miss the 15th. There might be time at Y, though. You exercise your option, submit the paper, invitations to review it get sent out that very day, and the decision comes back well before the 15th. It might not be the decision you want, of course, but if you think peer review is just a numbers game, you’ve played the game as best you could.
In sum, then, offering review options in exchange for reviews promises much. If interest in them is sufficiently large, people will frequently check the site where keywords are posted, people will frequently come to review a paper by announcing themselves as suitable in exchange for an option, and papers will get assigned reviewers more quickly. And if the option is exercised, those submissions, because they’ve been pre-researched, should also be dealt with more quickly. And finally, the option should offer researchers some peace of mind, the ability to hedge against the always unpredictable future, and the ability to review on their own schedule. All this suffices to make review options a highly attractive proposal that deserves our serious attention.
Comments, Objections, Replies
Who can announce themselves as potential reviewers?
Any reviewer found through the options system has to have the same qualifications as one found via the normal process of reviewer research. There aren’t hard and fast rules, but here are some rough heuristics:
Very important note: if you don’t fit these guidelines, this doesn’t mean we think you are unable to provide good reviews. We definitely don’t think that (indeed, these criteria exclude some early career researchers, and some late career ones who don’t publish much, in whom we place the utmost trust and on whom we rely for reviews); but, in the absence of particular knowledge of you, we need to rely on publication record.
(*) For a topic that is often published in generalist journals: at least two publications, closely related to the topic, in good journals, including, but not limited to, Philosophical Studies, Australasian Journal of Philosophy, Synthese, Erkenntnis, Ergo, Philosophical Review, Mind, Nous, Philosophy and Phenomenological Research, and so on.
(*) For history papers, at least two publications on the figure/era in question, preferably at least one in a generalist journal, and one either in a good generalist journal (British Journal for History of Philosophy, Journal for the History of Philosophy, etc.) or in a good one devoted to the historical figure/era.
(*) For papers in subfields that appear frequently in specialist journals, and often require specialist extra-philosophical knowledge such as philosophy of physics, philosophy of cognitive science, philosophy of logic, and feminist philosophy, two papers, preferably one in a mainstream journal and one in a specialized journal (the philosophy of physics ones, Journal of Philosophical Logic, Journal of Consciousness Studies, Hypatia).
(*) For early career researchers (grad student or recent graduate), one publication in a good journal might be sufficient, along with a PhD on the topic; one publication in an upper-tier journal (Mind, PPR, Nous, Phil Review) is sufficient. For those later in their career, more than two might be viewed as necessary.
Doesn’t this risk collusion?
Consider this dialogue:
Supervisor: Submit your paper to Y. I’ll announce myself as a reviewer for it, say accept, and in one week you’ll have a shiny new publication.
Student: Okay I guess
Supervisor: (in a dastardly fashion) Bwhahahahahaha
Isn’t there a risk that things like this will happen, that people will collude by telling others they are submitting to Y, and have those others announce themselves and get to review the paper?
First, note that collusion is always a problem: no system can have zero collusion, because bad actors will always find the chance. Indeed, good actors in very tiny subfields will sometimes be forced to review for their friends. The thing to do is minimize collusion. Here’s how we’ll do it.
First, we use the same sorts of obstacles we use anyway: co-authors can’t review each other; people in the same department can’t review each other; people standing in the supervision relationship can’t review each other. Second, we require that options-secured reviews be asymmetric: if you perform an options review of a person’s paper, that person can never perform an options review of one of yours. That should help make peer collusion less attractive, because at most one would-be colluder can benefit (it doesn’t do anything to capture supervisor–student collusion). Third, we make only a limited number of papers available on the options market, and which ones is determined by chance. This excludes people whose preference ranking is: definite free Y publication > other journal publication > … > possible Y publication. Fourth, for some papers we’ll back the options review with an independently sourced one. Fifth, the field of possible partners in collusion is limited by the fact that the partner must be qualified to review, per the above.
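The first two filters can be gathered into a single eligibility check. The sketch below is illustrative only; the names, data shapes, and the way conflicts are recorded are all assumptions, not a description of any real editorial software:

```python
def eligible(reviewer, author, past_options_reviews, coauthors, dept, supervises):
    """Hypothetical sketch of the collusion filters.

    past_options_reviews: set of (reviewer, author) pairs for completed
        options reviews; coauthors and supervises: sets of unordered-in-
        spirit pairs; dept: mapping from person to department.
    """
    # Co-authors can't review each other (in either direction).
    if (reviewer, author) in coauthors or (author, reviewer) in coauthors:
        return False
    # People in the same department can't review each other.
    if dept.get(reviewer) is not None and dept.get(reviewer) == dept.get(author):
        return False
    # People in the supervision relationship can't review each other.
    if (reviewer, author) in supervises or (author, reviewer) in supervises:
        return False
    # Asymmetry: if the author has ever done an options review of the
    # reviewer's work, the reverse direction is permanently blocked.
    if (author, reviewer) in past_options_reviews:
        return False
    return True
```

Note that the asymmetry constraint only needs a log of completed options reviews; it requires no judgment call by the editor.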
Are keywords enough information to enable potential reviewers to announce themselves? Are keywords not enough information to enable potential reviewers to identify authors?
The list will be something like this:
Physicalism, formulating physicalism, the problem of abstracta, fundamentality, grounding
Mental content, Frege’s puzzle, indexicality, de se attitudes, propositional attitudes
Endurance, object-dependence, Lowe, presentism
Modality, tense, modal logic, temporal logic
To answer the second question: I assume we can agree this isn’t enough information to reveal the identity of the author (if you work on these topics, can you guess the authors?). But might it not run the opposite risk of being too sparse to enable reviewers to decide whether they are suitable?
Ultimately, that’s an empirical question, to which I can only say that I think the answer is no. If the answer is yes, this proposal will fail, but we will at least have learned this important fact.
Won’t this proposal increase inequality by favouring already comparatively advantaged people and not favouring others?
One big problem with this proposal is that those to whom review options would be most useful — junior people for whom the marginal value of publications is very, very large — are exactly those who won’t be able to have them, because they will lack the reviewing qualifications. Review options, you might think, will by their very nature tend to get misallocated in the hands of those who need them less.
The solution to this is the same as the solution to problems of misallocation in the real economy: redistribution. And this in two ways: first, we allow individuals who have a review option to transfer it directly to someone else. A supervisor can transfer their option to their grad student, for example. And second, we enable people to give up their options which get put in a pot that is then randomly allocated to junior scholars who choose to enter frequently held lotteries.
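The donation-pot lottery can also be sketched briefly. Again, everything here (the function name, the one-win-per-person rule, how entrants are represented) is a hypothetical design choice, not a settled detail of the proposal:

```python
import random

def redistribute(donated_options, junior_entrants, seed=None):
    """Sketch of the donation pot and lottery for junior scholars.

    Each donated option goes to one entrant, and an entrant can win at
    most once per lottery; leftover options or entrants simply carry over.
    """
    rng = random.Random(seed)  # seedable for auditability
    entrants = list(junior_entrants)
    rng.shuffle(entrants)
    # Pair options with shuffled entrants, up to the shorter list.
    return list(zip(donated_options, entrants))
```

Direct transfers (supervisor to grad student) need no machinery beyond reassigning the option’s holder.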
If I own a review option, and exercise it, how can you guarantee that my paper will be reviewed quickly and well?
I can’t. I can only raise the probability that it will. I can raise it by guaranteeing to find at least three possible (qualified) reviewers before you submit, and by guaranteeing that I will ask those reviewers on the day your paper is submitted. It’s important to note that these referees have nothing to do with the options system: inviting them is just like inviting anybody. They might ignore the invitation, they might be too busy, they might say yes but then defer, they might return a poor quality review. I can’t guarantee they won’t. You’re getting a boost in the chance that you’ll get a quick, good quality decision, no more.
What happens if I don’t want to participate, but do want to submit to the journal?
Not much. Your paper will get treated (almost) exactly like it would have gotten treated had you submitted it before the system was in place. The editor will research reviewers in the normal way, send invitations, and so on. The only difference is this: if you submit a paper close in theme to one which has an attached option, then the editor can’t ask those people listed as potential referees for the optioned-paper to review yours.
An example will make this clearer. If your paper is on grounding, but an option-holding philosopher has an option for a grounding paper, and Bennett, Bliss, and Bøhn are the potential reviewers, then none of the three Bs will get invited for your paper. This is not really any different from the case in which, just before you submitted, three papers on grounding had been submitted and each of the Bs had reviewed one. But it is nevertheless possible that actually submitted papers will suffer because reviewers are held back for papers that only might be submitted, and this is a drawback of the approach.
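The exclusion rule amounts to a simple filter over the editor’s candidate list. A sketch, with hypothetical names and a hypothetical mapping from outstanding options to their reserved referees:

```python
def invitable(candidates, reserved_for_options):
    """Filter out referees held in reserve for optioned papers.

    reserved_for_options: mapping from option ID to the set of
    referees pre-researched for that (as yet unsubmitted) paper.
    """
    held = set().union(*reserved_for_options.values())
    return [c for c in candidates if c not in held]
```

The referees reappear in the candidate pool as soon as the option expires or is exercised and its reviews are assigned.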
What happens if no reviewer announces themselves?
Nothing. The editor finds a reviewer in the normal way; once they have found one, the keywords get removed from the site.
These last points emphasize something I think is important: this is a conservative modification of the system. Although it offers, it seems to me, a lot of benefits, it doesn’t fundamentally change the structure or workings of the journal. Neither its success nor its failure will stop the journal from running as it has done before, receiving, reviewing, and publishing papers like any other journal.
Finally, shouldn’t we just abolish peer review/disentangle publication and academic progress/prevent grad students from submitting/impose actual monetary costs on submitting/etc?
The final objection is that this proposal is too incremental and technocratic. The peer review system needs deep and fundamental changes at most every level, and this moderate increase in efficiency is keeping something on life support that would better be euthanized.
While I might agree with this sentiment, I think this line of thinking is a bad one. People today suffer because of the peer review system today; solving the collective action problem of moving beyond peer review is something we have little hope of (and seemingly little real appetite for) doing in the short term. So I think it’s much preferable to patch a broken system today, and thereby improve people’s lives a bit, than to propose massive overhauls that will be hard to implement.
Conclusion
The progress of human knowledge and the well-being of its uncoverers are impeded by an institution not fit for purpose. Trying to fix that institution is an important goal (an extremely important goal, I argue here), and review options offer a chance to do so, so we should implement them. I have omitted some details of such an implementation, but as far as I can tell they don’t affect the big-picture idea I’ve tried to get across here.
Added 23 June 2020: From summer 2020 I’m going to move my occasional writing from medium to tinyletter. If you want to read more from me in your inbox, please consider signing up: https://tinyletter.com/mittmattmutt. I’ll post relatively infrequently, and hopefully interestingly, on the same sort of themes as the blog, so: popular philosophy/explainers, culture, literature, politics/economics, etc. I might also do things like brief reviews of books I read and so on.
Footnotes
1 By ‘worthless’ I of course mean worthless considered as a token in the accreditation system of academia, not as a piece of academic work. It’s obviously completely wrong to think that an unreviewed manuscript is ipso facto epistemically worthless; it’s probably even wrong to think that most manuscripts reviewed and rejected multiple times are ipso facto epistemically worthless (we’re all familiar with Nobel prize winners talking about how their ideas were rejected).
2 The name is meant to evoke financial options. A soybean farmer, fearful that tariffs which have impeded soybean imports will be lifted, and thus that soybean prices will fall next year, can buy an option to sell their soybeans in the future at a price fixed now, thus shielding themselves from that potential source of volatility. So our reviewer can shield themselves against possible volatility in peer review. The analogy is, and is only meant to be, rough.