OpenReview API Flaw Exposes Reviewer Identities Across Major AI Conferences, as Analysis Suggests 21% of ICLR Reviews Were AI-Generated


A critical vulnerability in the OpenReview platform exposed the identities of reviewers, authors, and Area Chairs for ICLR 2026 and other prominent AI conferences on November 27, 2025. The security lapse, which allowed unauthorized access to sensitive metadata through a specific API endpoint, has sent shockwaves through the academic community and raised serious questions about the integrity of double-blind peer review. The incident also intensified concerns about review quality, after a data analysis suggested that approximately 21% of ICLR peer reviews were entirely generated by artificial intelligence.

The vulnerability stemmed from a broken access control issue in OpenReview's API: the profiles/search endpoints failed to enforce proper authorization checks. As a result, anyone could query and retrieve the names, affiliations, and emails associated with specific conference roles on individual submissions, effectively unmasking participants in what is traditionally a confidential process. The bug was identified and patched within an hour, but not before screenshots and compiled lists of the exposed data began circulating.
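In broad terms, broken access control of this kind means a server returns privileged data based on what is requested rather than on who is requesting it. The sketch below is purely illustrative and is not OpenReview's actual code; the endpoint path, data, and authorization helper are hypothetical stand-ins, shown only to make the missing check described above concrete.

```python
# Illustrative sketch only -- not OpenReview's code. It shows the generic
# "broken access control" pattern: an identity-revealing search endpoint
# that must verify the caller's role before returning results.
from flask import Flask, request, jsonify, abort

app = Flask(__name__)

# Hypothetical in-memory data standing in for reviewer assignments.
ASSIGNMENTS = {
    "submission/123": [
        {"name": "A. Reviewer", "email": "a@example.org", "role": "Reviewer"},
    ],
}

def caller_is_authorized(token: str, venue_role: str) -> bool:
    """Placeholder check: a real system would validate the session token and
    confirm the caller holds a role (e.g. Program Chair) permitted to see
    identities for this venue."""
    return token == "valid-chair-token" and venue_role == "ProgramChair"

@app.route("/profiles/search")
def profiles_search():
    group = request.args.get("group", "")
    token = request.headers.get("Authorization", "")
    role = request.headers.get("X-Venue-Role", "")

    # The essential guard: refuse to return identity metadata unless the
    # caller's credentials actually grant access to the requested group.
    # The reported flaw amounted to a check like this being skipped.
    if not caller_is_authorized(token, role):
        abort(403)

    return jsonify(ASSIGNMENTS.get(group, []))
```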

The academic community reacted with widespread concern, with many lamenting the erosion of anonymity crucial for unbiased review. ICLR organizers issued a stern warning, stating that "Any use, exploitation, or sharing of the leaked information... is a violation of the ICLR code of conduct, and will immediately result in desk rejection of all submissions and multi-year bans." OpenReview also indicated they would be contacting multi-national law enforcement agencies regarding the incident.

Amidst the fallout, the exposed data has prompted discussions on advanced analytical methods to scrutinize academic processes. As François Fleuret stated in a recent tweet, "There could be fantastic llm-fueled graph-theory analysis to do with the ICLR leaked data to detect collusion rings." This suggests leveraging large language models and graph theory to identify patterns indicative of unfair practices, such as reviewers intentionally down-scoring competing papers or coordinated efforts to influence outcomes.
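To make the idea concrete, the fragment below is a minimal, hypothetical sketch of such an analysis run on synthetic data, not on any leaked records. It builds a directed graph in which an edge from A to B means "A reviewed a paper authored by B" and flags short directed cycles as candidate collusion rings; in practice these would only be starting points for human investigation, possibly combined with LLM-based comparison of review texts.

```python
# Hypothetical sketch on synthetic data (no real reviewer data is used).
# Reciprocal or cyclic review assignments are flagged for further scrutiny;
# a cycle alone proves nothing and would still require human review.
import networkx as nx

# Synthetic reviewer -> author edges for illustration only.
review_edges = [
    ("r1", "a1"), ("r2", "a2"), ("r3", "a3"),   # ordinary assignments
    ("x", "y"), ("y", "z"), ("z", "x"),          # a suspicious 3-cycle
]

G = nx.DiGraph(review_edges)

# Enumerate short directed cycles. A fuller pipeline might weight edges by
# review scores and use an LLM to compare review texts for signs of
# coordination before escalating anything.
suspicious = [c for c in nx.simple_cycles(G) if 2 <= len(c) <= 4]
print("candidate rings:", suspicious)
```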

The incident has ignited a broader debate on the future of peer review, the necessity of anonymity, and the increasing role of AI in academic submissions and evaluations. Experts are reflecting on whether such systemic shocks will lead to a shift towards open review systems or a hardening of existing anonymous processes. The OpenReview team has apologized for the vulnerability and committed to reviewing its security protocols to prevent similar issues.