🤖 AI Summary
Researchers analyzed a toy queuing model of the CS/ML conference pipeline and showed a striking analytic result: if authors keep resubmitting indefinitely, the pool of unaccepted papers evolves as x_{t+1} = x_t(1−p) + N and converges to the fixed point x* = N/p. Since the number accepted per round is p·x* = N, each conference ends up accepting roughly N papers regardless of the acceptance rate p, an instance of Little's Law: lowering p simply inflates the backlog to ~N/p and multiplies reviewing work. In short, a lower nominal acceptance rate doesn't reduce the absolute number of accepted papers; it just increases reviewer load and the queue of resubmissions.
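The backlog recurrence from the summary can be iterated directly; a minimal sketch (the round count is arbitrary, chosen only so the geometric term (1−p)^t has died out):

```python
# Iterate the summarized recurrence x_{t+1} = x_t * (1 - p) + N.
# The pool converges to x* = N / p, so the number accepted per round,
# p * x_t, approaches N no matter how small p is.

def steady_state_pool(N, p, rounds=200):
    """Iterate the backlog recurrence and return the final pool size."""
    x = 0.0
    for _ in range(rounds):
        x = x * (1 - p) + N
    return x

for p in (0.35, 0.20, 0.10):
    pool = steady_state_pool(N=5000, p=p)
    print(f"p={p:.2f}  backlog≈{pool:,.0f}  accepted/round≈{p * pool:,.0f}")
```

Halving p roughly doubles the backlog (and hence the reviewing load) while leaving the accepted count pinned at N.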
Adding realism (authors abandon after T rejection rounds; three quality classes, great/avg/bad, with population shares 15/70/15% and relative acceptance weights 15/5/1) produces actionable implications. Simulations with N=5000 and T=6 show that cutting p from 35% to 20% raises reviewer load by ≈46% and sharply increases abandonment of average papers (from ~4% to ~24%, a ~478% jump), while bad-paper abandonment rises from ~60% to ~77%. Thus a lower p weeds out some low-quality work but disproportionately hurts borderline/average papers and wastes reviewing effort. The authors argue for rethinking conference practices (a higher effective p, federated venues, lighter review loads, quick review experiments) and highlight that "effective" acceptance rates and authors' resubmission strategies meaningfully change system behavior.
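The richer setup can be sketched as a round-based simulation. This is an assumption-laden reconstruction, not the authors' code: in particular, per-class acceptance probabilities are taken to be proportional to the 15/5/1 weights, normalized so a fresh cohort is accepted at rate p, which will reproduce the qualitative trends (more reviews, more average-paper abandonment at lower p) but not necessarily the exact figures quoted above.

```python
import random

def simulate(N=5000, p=0.20, T=6, rounds=100, seed=0):
    """Round-based sketch: N new papers/round in three quality classes;
    accepted papers leave, rejected papers resubmit up to T times."""
    rng = random.Random(seed)
    shares = {"great": 0.15, "avg": 0.70, "bad": 0.15}
    weights = {"great": 15, "avg": 5, "bad": 1}
    # Assumption: scale weights so a fresh cohort's mean acceptance rate is p.
    mean_w = sum(shares[c] * weights[c] for c in shares)
    accept_p = {c: min(1.0, p * weights[c] / mean_w) for c in shares}

    pool = []                               # (quality_class, rejections_so_far)
    reviews = 0
    abandoned = {c: 0 for c in shares}
    submitted = {c: 0 for c in shares}
    for _ in range(rounds):
        for c, s in shares.items():         # new submissions this round
            n = int(N * s)
            submitted[c] += n
            pool.extend((c, 0) for _ in range(n))
        reviews += len(pool)                # every pooled paper gets reviewed
        survivors = []
        for c, rej in pool:
            if rng.random() < accept_p[c]:
                continue                    # accepted: leaves the system
            if rej + 1 >= T:
                abandoned[c] += 1           # gives up after T rejections
            else:
                survivors.append((c, rej + 1))
        pool = survivors
    return reviews, abandoned, submitted
```

Comparing `simulate(p=0.35)` against `simulate(p=0.20)` shows the claimed direction of effect: the lower rate inflates total reviews and raises the abandonment count for average papers far more than for great ones.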