Thursday, August 16, 2012

Reviews / TPCs


I'll sneak in a semi-related post about TPCs (Technical Program Committees) that ties back to McKeown's comment about making the SIGCOMM tent bigger: not SIGGRAPH big, but roughly four times its current attendance.

Having recently gotten back a few sub-par sets of reviews, I am feeling a storm of grouchiness about the overall review process and the ongoing conference-versus-journal debate.  Note that this is in no way related to SIGCOMM, as it has been a few years since I last submitted a paper there.  We are much better at self-filtering, and as of late most of our work has not been terribly conducive to SIGCOMM anyway.  Though my top negative review of all time from SIGCOMM still holds a place near and dear in my heart :)

On to my thoughts / recommendations for TPCs / reviewers:
  • I posted on this a few years back, but I think it is still the case: conference papers are for interesting, thought-provoking work, not necessarily perfect or complete work.  Two years ago we got a workshop review where the reviewers were asking for essentially a journal-quality evaluation rather than viewing the paper properly through the lens of a workshop.  For that particular review, I joked with my students that if we had already had those answers when we submitted (*cough* workshop at INFOCOM 2011 *cough*), we sure as heck would not have been submitting to some podunk workshop.
  • Having chaired a few workshops, it makes me shake my head when a peer-reviewed paper comes back with fewer than three reviews.  That is a failure of quality control by the TPC co-chairs; in the worst case, a co-chair can always backfill reviews.  That is kind of your job.  Some might object that the ultimate job is to ensure good papers, but I would argue conferences exist to encourage feedback and discussion.  Bad reviews outside the top-tier conferences (you sort of have no choice on those) make me unlikely to submit again and unlikely to encourage others to do so.
  • More recently, a few reviews have come back containing only reviewer comments and no actual ratings, requiring one to click a link to see the ratings.  Really?  Are we not CS profs, perfectly capable of automating such things?
  • A weird review recently came in where a journal reviewer for our E2E QoS work weighed the paper's citation count in evaluating the work.  Seriously?  I must have missed the memo where the solidity of the science depends on how many cites your conference work gets.  Then again, the reviewer clearly had a bias against all things QoS, mumbling about all that QoS work that treats bandwidth as a scalar quantity.  On the plus side, the reviewer's comments were so wrong that the review might inspire me to finally write the dynamics paper I had in mind to prove them wrong, as I firmly believe the Roberts work from the mid-2000s is still as valid as ever when looking at large-scale aggregation characteristics of flows (i.e. VBR really does become CBR in the aggregate).  The big trick is getting the data when you are not an employee of a large ISP.
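The VBR-becomes-CBR point is really just the law of large numbers at work: sum enough independent bursty sources and the relative variability of the aggregate shrinks like 1/sqrt(N). A quick toy simulation makes this concrete (the on/off source model and all parameters here are my own illustrative choices, not Roberts' actual setup):

```python
import random
import statistics

random.seed(42)


def vbr_flow(n_slots, on_prob=0.3):
    """Toy VBR source: in each time slot the source is either silent
    or sends an exponentially distributed amount of traffic."""
    return [random.expovariate(1.0) if random.random() < on_prob else 0.0
            for _ in range(n_slots)]


def coeff_of_variation(series):
    """Std dev / mean: a unitless measure of burstiness (0 = perfect CBR)."""
    return statistics.pstdev(series) / statistics.mean(series)


# Aggregate N independent VBR flows and watch the burstiness fall off.
for n_flows in (1, 10, 100, 1000):
    per_slot = [sum(slot) for slot in zip(*(vbr_flow(500) for _ in range(n_flows)))]
    print(f"{n_flows:5d} flows -> CoV = {coeff_of_variation(per_slot):.3f}")
```

The coefficient of variation drops steadily as flows are added, which is the sense in which an aggregate of many VBR flows starts to look like CBR at a big ISP's scale.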
I'll wrap up with a note about a nice paper written by our chair, advising junior faculty on the role of conferences versus journals.  I won't steal his thunder, but the basic gist is that the notion that systems researchers are special flowers who can only do conferences is not really supported by the data.  From the conclusion of the article:

"Look for research questions that other people will care about and that you can see a novel way to approach. Do the best research that you are able to do. Publish your work in the best conferences that you can get your work into. Also publish in the best journals that you can get your work into. Don’t think either-or, think both-and.  ...  Don’t buy into the “only-conferences-matter, conferences-are-better-than-journals” viewpoint. ... the empirical evidence is that this view is wrong." 

Definitely food for thought.
