The un-rebutted rebuttal
In 2012, both the ICME and Sigcomm conferences introduced a rebuttal phase in their reviewing process. I know many scientists who call for such a rebuttal process. Unfortunately, my experience with rebuttals was disastrous in both cases. It is interesting to note that these conferences are definitely not in the same league.
For Sigcomm, one of the reviewers claimed that our 14-page proposal could be achieved by tweaking another existing system. More precisely, the reviewer "believes that with simple changes to your problem, one can use the [other] system to tackle it, probably by just changing the utility function." We knew this other system well… and we double-checked again. No, there is no way: both papers share some words, but they are apples and oranges. However, since this was the main drawback raised by this reviewer, we were full of hope that we could make our case by carefully explaining the differences with this previous work. Alas, thrice alas, one month later, the reviews arrived, unchanged.
In both cases, the reviews came back without any changes, even when we had pointed out major errors in the analysis.
Proposal: I don't believe much in rebuttals, but at the very least the process deserves a better implementation. In particular, reviewers must address the remarks that authors make about their reviews.
The anonymous reviewer
We submitted a reasonable paper to a special issue of IEEE Transactions on Multimedia. One reviewer was vaguely positive, one reviewer was vaguely negative, and then came the third reviewer… This one did not find a single positive thing to say. Apparently, none of our 14 pages was worth anything. Moreover, all his negative comments were excessively aggressive and mostly based on incorrect, self-proclaimed facts. The review was nothing but a string of harsh, assertive remarks. This paper was no Nobel Prize material, for sure, but it was an honest, valid paper, with a motivation based on a series of observations from well-established measurement systems, some theoretical developments, and a non-trivial simulation. Maybe not worth publication in this journal, but why so much hate?
One well-known issue of peer reviewing in computer science is the excessive harshness of reviewers, often young scientists, comfortably protected by anonymity. The excellent "Guide for Peer Reviewing" recommends that, as far as possible, the first paragraph of a review summarize the goals, approaches, and conclusions of the paper (including positive assessments), while the second paragraph provide a conceptual overview of the contribution.
Proposal: Some reviewers would be less assertive and less aggressive if there were any chance that their identity would be revealed. Why not have a rule such as "out of the k reviews you write for a conference, one will be randomly chosen to be de-anonymized", or "one out of ten reviews is de-anonymized"?
We sent a P2P paper to Globecom, although it is well known that P2P is now a very cold topic. We received two clearly positive reviews, and one review with slightly lower grades but with comments like "The addressed problem is relevant, the paper is well-written and technically solid". Globecom has a 37% acceptance ratio, yet despite these grades our paper was rejected. My first reject at Globecom.
I asked the TPC chair for additional explanations, and he kindly answered that "in the confidential comments, there was a voiced concern about novelty". In other words, it seems that anonymity is not enough for reviewers; they still need an even more anonymous place for the judgements they are the least proud of. According to the guide for peer reviewing, "confidential comments" are just a bad habit, one that hurts the overall transparency of the reviewing process. For my part, I never use them, and I see no convincing reason to do so.
Proposal: ban confidential comments.
I recently received some negative reviews that provided no guidance on how to improve the paper. Hence, our decision was to resubmit elsewhere without many changes, hoping for better reviewers.
We had one too: a perfectly nice paper, which eventually got published without changes (!) in an IEEE Transactions, and one reject based on "could have done more measurements". The final reject was due to something else again...
We actually ended up publishing a short and slightly pointless version in a workshop and started a company to continue our work. I don't have much belief in the academic system any more!