I’ve done external reviews for program committees my adviser sits on before, but when an invitation to join a shadow PC came across my email on a day I was feeling adventurous, I decided to give it a shot. I had no idea what to expect, and no idea what I would learn from it. We haven’t even hit the PC meeting yet, but a lot has surprised me already.
My prior experience as an external reviewer was relatively opaque to me. My adviser would give me a paper and a review format, and I would fill in my responses. He would give some feedback, and I would reflect on that advice the next time I read a paper. Even though the process was opaque, I learned a lot: in retrospect, primarily what a paper that should be rejected (one that I did not help write) looks like.
I figured I would share a few things I’ve learned so far from being on a shadow PC:
Either I have the least expertise on the shadow PC, or I let impostor syndrome get to me and underestimated my knowledge of the research areas. When the reviews came in, I had the lowest or second-lowest self-reported expertise among the 3–5 reviewers in every case! Sure, that’s possible, but either way it seems silly to have people self-report expertise alongside their review. Shouldn’t the reviews themselves demonstrate knowledge of prior art and careful analysis of the reviewed work? I’ll probably just disregard these numbers in the future, and maybe check whether the reviewer appears in the citations instead.
I thought I understood what gets a paper rejected from the few papers I had read before and a few rejections of my own, but reading 11 submitted papers showed me many new ways to make mistakes. I know how easy it is to make the mistakes these authors made, so I want to give good advice in my reviews, and I’m sympathetic. There is so much to be learned just from seeing the rough cuts of papers that will probably be refined into accepted ones. I’m not sure whether the magic intuition my adviser has for finding my mistakes comes from his own writing experience or from the shared experience of participating in peer review.
There is often huge variability in review ratings. I knew this going in from being on the other side of HotCRP, but it was striking how consistently the ratings diverged. Perhaps this is because the shadow PC’s lack of experience introduces noise. There were papers that, by the criteria we tend to discuss at UofM, I was confident would be consensus accepts, yet they got 2/5 from two reviewers. Regardless of whether this reflects a real PC, I’ll be sure to take rejections less to heart in the future.
Related to the last point: even for papers that were rejected consistently, the reasons varied wildly. It is strange to find what you think is the absolute fatal flaw in a work, and then have a peer you know personally to be a good researcher miss it, because they were distracted by enough other issues that they were going to reject the paper anyway. In retrospect it makes sense, but when those first reviews came in, I was confused.
So much is up in the air even after all the reviews are turned in. I had been trying to build my heuristics from my review emails: the contents, the ratings, the expertise scores. It turns out that many papers still have a lot to be sorted out. Who is going to go to bat for or against each paper, and how hard are they willing to fight? Going in, I have no clue about the other reviewers. I knew the process of a PC on paper before, but I didn’t really get how much it comes down to human factors. It is a really weird feeling to hold all of the information you were trying to extract truth from, and suddenly grok the uncertainty and humanity involved.
This is what I think I’ve learned from just the review portion of being on a shadow PC. I hope sharing it helps other people learn, and inspires them to share feedback in the Twitter thread. If you are a seasoned professor and spot some mistakes, tell me your thoughts there too! Or in my DMs, or on Signal. I’m not one to force a privacy model on anybody.