🤖 AI Summary
OpenReview’s public data page appears empty or inaccessible: key fields for submissions, reviews, reviewers, ratings, confidence, and coverage, along with a range of analytics (most lenient/strict, volatile/stable papers, ethics flags, frequent authors), show no entries or “No data” placeholders, and search requires login. In short, there is no downloadable or directly viewable aggregate review metadata on that interface, which prevents quick inspection of review counts, score distributions, or paper-level review histories.
This matters because OpenReview is a primary source of peer-review artifacts for major ML conferences; accessible review corpora enable reproducibility studies, bias and fairness analyses, reviewer-model training (quality scoring, recommendation systems), and meta-science research. The apparent data gap or access restriction blocks researchers and tool-builders from computing summary statistics, training ML models on review text/labels, or running longitudinal analyses. Technically, it raises questions about access controls, API availability, privacy policy (anonymization/GDPR), and whether institutional dumps or mirrors (e.g., PeerRead, archived OpenReview snapshots) are the intended route for data consumers. Researchers needing this data should check OpenReview’s API, terms of service, or known public datasets/mirrors rather than relying on the blank public dashboard.
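For readers who want to probe programmatically rather than trust the dashboard, below is a minimal sketch using the openreview-py client against the public API. The venue ID and invitation suffixes (e.g., `ICLR.cc/2024/Conference`, `Official_Review`) are illustrative assumptions rather than confirmed endpoints, and what an unauthenticated client actually sees depends on each venue’s visibility settings.

```python
# Minimal sketch: querying OpenReview's public API for submissions and reviews.
# Assumes the openreview-py package (pip install openreview-py); the venue id
# and invitation strings below are illustrative and may differ per conference.
import openreview

# API v2 client; no credentials are passed, so only publicly visible notes are returned.
client = openreview.api.OpenReviewClient(baseurl="https://api2.openreview.net")

VENUE_ID = "ICLR.cc/2024/Conference"  # hypothetical example venue

# Fetch a small batch of submissions for the venue.
submissions = client.get_notes(
    invitation=f"{VENUE_ID}/-/Submission",
    limit=25,
)
print(f"Fetched {len(submissions)} publicly visible submissions")

# For each submission, pull its direct replies (reviews, comments, decisions)
# and count those whose invitation looks like an official review.
for note in submissions:
    replies = client.get_notes(forum=note.id)
    reviews = [
        r for r in replies
        if r.invitations and any(inv.endswith("/Official_Review") for inv in r.invitations)
    ]
    title = note.content.get("title", {}).get("value", "<untitled>")
    print(f"{title[:60]:60s}  reviews visible: {len(reviews)}")
```

If the API likewise returns nothing for a venue, that suggests the reviews are access-restricted rather than merely absent from the dashboard, and the mirrors mentioned above (e.g., PeerRead or archived OpenReview snapshots) become the practical fallback.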