
Authors retract brain-mapping paper after reviewers flag fabricated citations


I was scrolling through a preprint server and saw a headline claiming a team had cracked a way to “interpretably map” brain activity, basically a shortcut to reading thoughts. The claim sounded huge, but the paper never cleared peer review. In fact, the authors withdrew it after reviewers, possibly aided by automated language models, flagged a string of bogus references.

The bibliography was peppered with made-up titles and placeholder names like “Jane Doe,” which pretty much screams that the citation list was generated, not hand-picked. This oddity sparked a wider conversation about how much we trust review pipelines that lean on AI without a human check. It also makes me wonder how many other submissions might hide similar fabrications before we notice.

The whole episode points to a growing tension: the ease of using large language models versus the need for solid, accountable scholarship.

The study promised an interpretable mapping of brain activity but fell apart after reviewers discovered numerous fake citations. The reference list contained completely fabricated titles and placeholder names like "Jane Doe" as co-authors. A reviewer flagged the obvious use of a language model and issued a "Strong Reject" recommendation.

The authors revised the manuscript and references, but additional errors surfaced, leading them to withdraw the paper altogether. In another case, "Efficient Fine‑Tuning of Quantized Models via Adaptive Rank and Bitwidth", the authors withdrew their submission in protest after receiving four rejections. They accused reviewers of using AI tools to generate feedback without reading the paper.

Related Topics: #brain-mapping #neural-signals #large-language-models #quantized-models #adaptive-rank #AI-tools #peer-review #fabricated-citations

The peer-review process looks like it may have slipped. A brain-mapping paper was pulled after reviewers spotted a reference list full of made-up titles and placeholder names like “Jane Doe.” Apparently, researchers from top-tier universities used a language model to churn out those citations, and one reviewer called them out. In the separate quantized-models case, the authors said the critiques felt lazy and AI-driven, so they chose to withdraw rather than sit through a review that seemed unread.

Preparations for ICLR 2026 have already revealed “AI-shaped cracks” in the system, according to Reddit threads and community chatter. It’s still unclear whether the reviewers were LLMs themselves or just leaning on AI-assisted tools. The episode shows how easily fabricated sources can sneak into a submission when scrutiny weakens.

I’m not sure if this will lead to stricter citation checks, but it does highlight a real vulnerability in today’s academic gatekeeping. Now the community has to tighten standards without leaning on unchecked automation. A worrying sign.

Common Questions Answered

Why was the brain‑mapping paper retracted after peer review?

The manuscript was withdrawn because reviewers identified a bibliography filled with fabricated citations and placeholder authors such as "Jane Doe." These fake references indicated the use of an automated language model, prompting a strong reject recommendation and leading the authors to pull the paper.

What specific evidence did reviewers find that indicated the use of a language model in the reference list?

Reviewers spotted numerous invented titles and generic author names like "Jane Doe," which are typical hallmarks of AI‑generated text. The pattern of nonsensical citations raised suspicion that a language model had been used to fabricate the bibliography.
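This kind of screening does not require anything sophisticated. As a rough illustration only (not the reviewers' actual tooling, which the article does not describe), the Python sketch below flags reference strings containing placeholder author names and includes an optional check of whether a DOI resolves against the public Crossref API; the function names and sample references are invented for the example.

```python
import re
import urllib.error
import urllib.parse
import urllib.request

# Heuristic list of placeholder author names that often show up in
# machine-generated bibliographies ("Jane Doe", "John Doe", "A. Author").
PLACEHOLDER_AUTHORS = re.compile(
    r"\b(?:Jane|John)\s+Doe\b|\bA\.\s*Author\b", re.IGNORECASE
)


def looks_fabricated(reference: str) -> bool:
    """Return True if a reference string contains a placeholder author name."""
    return bool(PLACEHOLDER_AUTHORS.search(reference))


def doi_resolves(doi: str, timeout: float = 5.0) -> bool:
    """Optional network check: Crossref returns 404 for DOIs it has never registered."""
    url = f"https://api.crossref.org/works/{urllib.parse.quote(doi)}"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False


if __name__ == "__main__":
    # Hypothetical reference list: one obviously fabricated, one real.
    references = [
        "Jane Doe and John Doe. Interpretable Mapping of Neural Signals. 2024.",
        "Vaswani et al. Attention Is All You Need. NeurIPS 2017.",
    ]
    for ref in references:
        flag = "SUSPICIOUS" if looks_fabricated(ref) else "ok"
        print(f"{flag}: {ref}")
```

A regex like this only catches the crudest placeholders; checking titles or DOIs against a registry such as Crossref is what would surface entirely invented entries.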

How did the authors respond after the initial reviewer flagged the fabricated citations?

The authors attempted to revise the manuscript and replace the bogus references, but additional errors emerged in the updated bibliography. Facing continued criticism, they ultimately chose to withdraw the paper rather than continue the review process.

What does this incident suggest about the reliability of AI‑driven critiques in the peer‑review process?

The incident highlights that while AI tools can help detect anomalies like fabricated citations, reliance on automated reviewers alone may miss deeper methodological issues. It underscores the need for human oversight to ensure thorough and accurate evaluation of scientific claims.