“Many forms of Government have been tried and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of government except all those other forms that have been tried from time to time.” Winston Churchill, 1947

I just received an e-mail inviting me to participate in the Second Annual International Symposium on Peer Review, which seeks to examine the application -- and perceived failure -- of peer review in a scientific context. The conference's aim is certainly worthy, and I applaud the effort; however, it raises the conundrum of how one goes about peer reviewing materials for a conference on peer review that takes as its premise that the peer-review system is flawed. So flawed, in fact, that the letter opens with the quote "only 8% agreed that 'peer review works well as it is'" (Chubin and Hackett, 1990; p. 192).

The solution, as it turns out, is to use three different peer-review systems to review abstracts and papers submitted for the conference.

All submitted papers/abstracts will go through three reviewing processes: (1) double-blind (at least three reviewers), (2) non-blind, and (3) participative peer reviews.

There is no mention, however, of how adjudication among these three different systems would occur. This strikes me as a suboptimal review process. What, for instance, would happen if a paper is judged worthy under one of these methods but not the other two? Would that paper be accepted despite the potential faults identified by the two systems under which it was not deemed worthy, or is the system a "best-of-three" trial in which two systems must deem a paper worthy for it to be accepted? This also seems to ask an awful lot of reviewers and, if adopted widely, would make the already difficult problem of finding reviewers for manuscripts even harder.

In addition, this calls into question what the purpose of the peer-review system is. At another point in the solicitation, the organizers state:

Empirical studies have shown that assessments made by independent reviewers of papers submitted to journals and abstracts submitted to conferences are not reproducible, i.e. agreement between reviewers is about what is expected by chance alone.

Based on this, it seems that the organizers take it as the responsibility of the reviewers to decide what is acceptable; I would argue that responsibility lies, instead, with the editors. This also assumes that the peer-review system is put in place to determine the "correct" answer to problems rather than to determine whether a manuscript reflects the proper state of the science and makes a contribution that furthers discussion among scholars (indeed, this is the position taken by Paul Courant in a thoughtful series of posts on peer review). If we take the view that journals and conferences are designed to present articles that are (a) competently executed and (b) useful to researchers in a field so that they can establish and continue important scientific conversations within a discipline, rather than to provide the definitive concluding argument on any given topic, then I'm not sure that peer review is such a terrible system or that disagreement between reviewers is a good metric by which to measure its failure.1

Submitting the same papers to the three different review procedures for this conference should yield interesting data on the reliability of the three methods, and this may well be the goal of the system. But given that this is the system the organizers devised to address the shortcomings of peer review, it seems there is no way to get around some sort of peer-review process. Thus, just as Churchill conceded that democracy is an imperfect form of government without an obvious replacement, the peer-review process seems to be in the same boat: we know that it isn't great, but no one has yet found anything better.

  1. Furthermore, in a discipline like sociology that combines scholars across a broad swath of methodological and substantive areas, it is highly likely -- and probably beneficial -- that two reviewers would provide very different thoughts and evaluations of the same manuscript. Such a broad definition of the field, I believe, helps keep the discipline from getting stale and is one of the reasons that I enjoy being a sociologist.  

