Authors retract brain-mapping paper after reviewers flag fabricated citations
When a paper claims to decode neural signals, the stakes are high. Researchers announced a new method that would let scientists “interpretably map” brain activity, promising a shortcut to understanding cognition. Yet the manuscript never made it past peer review.
Instead, the authors pulled the work after reviewers—suspected to be automated language models—identified a litany of bogus references. The bibliography was littered with invented titles and placeholder names like “Jane Doe,” a red flag that the citation list had been generated rather than curated. This discovery sparked a broader conversation about the reliability of review pipelines that rely on AI tools without human oversight.
It also raised questions about how many submissions might contain similar fabrications before they’re caught. The fallout underscores a growing tension: the convenience of large language models versus the need for rigorous, accountable scholarship.
The reference list contained entirely fabricated titles and placeholder names such as "Jane Doe" listed as co-authors. One reviewer flagged the apparent use of a language model and issued a "Strong Reject" recommendation.
The authors revised the manuscript and its references, but additional errors surfaced, and they ultimately withdrew the paper. In a separate case, the authors of "Efficient Fine‑Tuning of Quantized Models via Adaptive Rank and Bitwidth" withdrew their submission in protest after receiving four rejections, accusing the reviewers of using AI tools to generate feedback without reading the paper.
Did the peer‑review process break down? On one side, researchers from elite universities had apparently generated their citations with a language model, and a reviewer called the practice out. On the other, authors frustrated by what they describe as lazy, AI‑driven critiques chose to pull their work rather than endure a review that seemed unread.
Preparations for ICLR 2026 have already exposed “AI‑shaped cracks” in the system, according to Reddit posts and community discussions. Yet it remains unclear whether the reviewers themselves were LLMs or merely relied on AI‑assisted tools. The incident underscores how easily fabricated sources can slip into a submission when scrutiny falters.
Whether this episode will prompt tighter verification of citations is uncertain, but it exposes a tangible vulnerability in current academic gatekeeping. The scientific community now faces the task of reinforcing its standards without resorting to unchecked automation, and the fact that it has to is a troubling sign.
Further Reading
- Neuroscience journal retracts 13 papers at once - The Transmitter
- Daily briefing: Landmark Alzheimer's paper will be retracted - Nature
- Spurious reconstruction from brain activity - arXiv
- Alzheimer's scientist forced to retract paper during his own replication effort - The Transmitter
- Journal to retract Alzheimer's study after investigation finds misconduct - Retraction Watch
Common Questions Answered
Why was the brain‑mapping paper retracted after peer review?
The manuscript was withdrawn because reviewers identified a bibliography filled with fabricated citations and placeholder authors such as "Jane Doe." These fake references indicated the use of an automated language model, prompting a strong reject recommendation and leading the authors to pull the paper.
What specific evidence did reviewers find that indicated the use of a language model in the reference list?
Reviewers spotted numerous invented titles and generic author names like "Jane Doe," which are typical hallmarks of AI‑generated text. The pattern of nonsensical citations raised suspicion that a language model had been used to fabricate the bibliography.
How did the authors respond after the initial reviewer flagged the fabricated citations?
The authors attempted to revise the manuscript and replace the bogus references, but additional errors emerged in the updated bibliography. Facing continued criticism, they ultimately chose to withdraw the paper rather than continue the review process.
What does this incident suggest about the reliability of AI‑driven critiques in the peer‑review process?
The incident highlights that while AI tools can help detect anomalies like fabricated citations, reliance on automated reviewers alone may miss deeper methodological issues. It underscores the need for human oversight to ensure thorough and accurate evaluation of scientific claims.