January 29, 2013

This increasingly frustrating peer review process

Academics rarely share their bad personal experiences with peer review. Yet everybody gets papers rejected at conferences… and these decisions sometimes generate legitimate frustration, since they seem to be due to some "random bad outcome from this plain old flawed reviewing process". For my part, I have the feeling that the reviewing process is getting worse and worse. I am not alone. Following this example, I describe below some recent reject notifications that illustrate some of these flaws, and I propose some ways to fix them.

The un-rebutted rebuttal
In 2012, both the ICME and Sigcomm conferences introduced a rebuttal phase in their reviewing process. I know many scientists who call for such a rebuttal process. Unfortunately, my experience with rebuttals was absolutely disastrous in both cases. It is interesting to note that these two conferences are definitely not in the same league.

For ICME, I suspect one of the reviewers was a weak graduate student: he gave us a strong reject based on his claim that one of the four proofs in the paper was wrong at a specific equation. Unfortunately, his mathematical statement was false. This bad review was the perfect case where a rebuttal can help fix a clear misunderstanding and a wrong analysis. We spent a significant part of our rebuttal politely trying to correct the reviewer's mathematical error. Alas, we received our negative notification. The reviewers did not change a word of their reviews. And the meta-reviewer gave us this unforgivable remark: "The authors thinks that the reviewer 2 misunderstand the work in this paper. From the comment, the reviewer should be an expert in this field". This meta-reviewer does not understand rebuttals, does he?

For Sigcomm, one of the reviewers claimed that our 14-page-long proposal could be achieved by tweaking another existing system. More precisely, the reviewer "believes that with simple changes to your problem, one can use the [other] system to tackle it, probably by just changing the utility function." We knew this other system well… and we double-checked anyway. No, there is no way: the two papers share some vocabulary, but they are apples and oranges. Since this was the main drawback raised by this reviewer, we were full of hope that we could make our case by carefully explaining the differences from this previous work. Alas, thrice alas, one month later the reviews arrived, unchanged.

In both cases, the rebuttals came back ignored: the reviews were unchanged, even where we had pointed out major errors of analysis.

Proposal: I don't believe much in rebuttals, but at the very least the process deserves a better implementation. In particular, reviewers must address the remarks that authors make about their reviews.

The anonymous reviewer
We submitted a reasonable paper to a special issue of IEEE Transactions on Multimedia. One reviewer was vaguely positive, one was vaguely negative, and then came the third reviewer… This guy did not find a single positive thing to say. Apparently, not one of these 14 pages was worth anything. Moreover, all his negative comments were excessively aggressive and mostly based on wrong, self-proclaimed facts. The review was just a string of harsh, assertive remarks. This paper was no Nobel Prize material, for sure, but it was an honest, valid paper, with a motivation based on a series of observations from well-established measurement systems, some theoretical developments, and a non-trivial simulation. Maybe not worth publication in this journal, but why so much hate?

One well-known issue with peer reviewing in computer science is the excessive harshness of reviewers, often young scientists comfortably protected by anonymity. The excellent "Guide for Peer Reviewing" suggests that, as far as possible, the first paragraph of a review should summarize the goals, approaches, and conclusions of the paper (including positive assessments), while the second paragraph should provide a conceptual overview of the contribution.

Proposal: Some reviewers would be less assertive, and less aggressive, if there were any chance that their identity could be revealed. Why not have a rule like "out of the k reviews you write for a conference, one will be randomly chosen to be de-anonymized", or simply "one out of ten reviews is de-anonymized"?

The no-room-for-cold-topics program chair
We sent a P2P paper to Globecom, although it is well known that P2P is now a very cold topic. We received two clearly positive reviews, and one review with slightly lower grades but with comments like "The addressed problem is relevant, the paper is well-written and technically solid". Globecom has a 37% acceptance ratio, yet despite these grades, our paper was rejected. My first reject at Globecom.

I asked the TPC chair for additional explanations, and he kindly answered that "in the confidential comments, there was a voiced concern about novelty". In other words, it seems that anonymity is not enough for reviewers: they still need an even more anonymous place for the judgements they are least proud of. According to the guide for peer reviewing, the "confidential comments" are just a bad habit, one that undermines the overall transparency of the reviewing process. For my part, I never use them, and I see no convincing reason to.

Proposal: ban confidential comments.


  1. I recently received some negative reviews that provided no guidance on how to improve the paper. Hence, our decision was to resubmit elsewhere without many changes, hoping for better reviewers.

  2. We had one too: a perfectly nice paper, which got published without changes (!) in an IEEE Transactions, and one reject based on "could have done more measurements". The final reject was due to something else again...

    We actually ended up publishing a short and slightly pointless version in a workshop and started a company to continue our work. I don't have much faith in the academic system any more!