- reviewing the proposals. As I explained in my previous post, each proposal is a 100-page document written by "artists". Three criteria are evaluated: scientific soundness, consortium quality, and exploitation. Each criterion is scored on a scale from 0 to 5, and half-marks are allowed.
- reaching a consensus. For each proposal, five people (the three reviewers, one recorder and a moderator) meet for one hour. The goal is to reach a consensus, which should result in a unified text and a final score for each criterion. The role of the recorder is crucial. She has not read the proposal, but she has looked at the reviews, so she knows the main trends. From the discussion, she tries to extract some statements, and her text is then revised "live" by the reviewers (and sometimes by the moderator). Wording is considered important, so some sentences take up to 15 minutes to be accepted by every reviewer. In general, the meetings are lively because reviewers disagree, and it is common for them to actually argue. A consensus is reached in most meetings, but frequently in an unpleasant way: an enthusiastic reviewer has little chance of convincing the other two reviewers, and a positive-but-not-that-much consensus does not produce a winning project. If no consensus can be reached, additional reviewers are invited to read the proposal. Eventually, a score is voted on.
- deciding. The panel committee meeting is like a program committee meeting, except that a ranking is produced (even rejected proposals are ranked). The overall score ranges from 0 to 15, but of course most proposals fall between 8.5 and 13.5. There is a critical tie at a high score (around 13), because only a fraction of the proposals with that score can be funded. A specific algorithm is used to break ties. In our case, tied proposals are ranked by (see the sketch after this list):
- the highest score in the Exploitation criterion, then, if still tied,
- the highest score in the Scientific criterion, then
- the largest ratio of industrial partners, then
- the largest ratio of SMEs, then
- the largest ratio of partners from new EU member states.
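To make the tie-breaking concrete, here is a minimal Python sketch of such a lexicographic ranking. The `Proposal` structure, the field names, and the assumption that the overall score is simply the sum of the three criteria (consistent with the 0-15 range above) are mine for illustration; they are not the actual evaluation software.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    name: str
    scientific: float        # 0 to 5, half-marks allowed
    consortium: float        # 0 to 5
    exploitation: float      # 0 to 5
    industry_ratio: float    # fraction of industrial partners (assumed field)
    sme_ratio: float         # fraction of SMEs (assumed field)
    new_member_ratio: float  # fraction of partners from new EU member states (assumed field)

def overall(p: Proposal) -> float:
    """Overall score, assumed to be the sum of the three criteria (hence 0 to 15)."""
    return p.scientific + p.consortium + p.exploitation

def rank_key(p: Proposal):
    """Lexicographic key: overall score first, then the successive tie-breakers."""
    return (overall(p), p.exploitation, p.scientific,
            p.industry_ratio, p.sme_ratio, p.new_member_ratio)

def rank(proposals: list[Proposal]) -> list[Proposal]:
    # Higher values win at every level, so sort in decreasing order of the key.
    return sorted(proposals, key=rank_key, reverse=True)

# Hypothetical example: both proposals total 13.0 with the same Exploitation score,
# so the Scientific criterion decides, and A comes first.
a = Proposal("A", 4.5, 4.0, 4.5, 0.30, 0.20, 0.10)
b = Proposal("B", 4.0, 4.5, 4.5, 0.50, 0.20, 0.10)
print([p.name for p in rank([a, b])])  # ['A', 'B']
```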
The role of the reviewers during this meeting is actually marginal: checking the consistency between the final text and the final scores of every proposal. Downgrading (or upgrading) a proposal after a quick cross-reading is very rare, and requires a long discussion before the panel agrees.
The overall process suffers from a drawback: reviewers spend a lot of time on bad proposals. Every proposal, even the worst one, requires a one-hour consensus meeting. Saving this time would let reviewers read a subset of the best proposals and increase the quality of the final choice. In the panel meeting, a lot of time is also wasted revising the text of every proposal, even those that will not be funded, while the panelists do not have enough time to discuss the borderline proposals.
The consensus part is funny. The outcome of a consensus meeting is rarely a "blind union" of the three independent reviews (as is done in most conferences, e.g. by averaging the three scores). For example, three reviewers may each have given a 4.0 for the scientific part (which is a very good score) yet reach a consensus score of 2.5 (which is below the threshold), because the flaws they identified were complementary, or because they discovered that they shared an overall lack of excitement about the proposal and took the time to find actual flaws justifying the rejection.
The overall process is fair, and there is no way to express any subjective opinion, like "this topic is more fun than that one", or "these guys should be helped because their country is going bankrupt", or "I don't like this crappy acronym".