LLMs Cite Grokipedia: AI's New Content Sourcing Trend
Anthropic's Claude also citing Elon Musk's Grokipedia, reports say
Why does it matter when a large language model leans on a single, AI-generated encyclopedia for its answers? ChatGPT isn't the only chatbot pulling information from Elon Musk's Grokipedia; the pattern is emerging across the field. The Verge's recent probe into citation practices revealed a patchwork of transparency, with some firms openly logging their sources while others remain silent.
In the meantime, social-media chatter hints that Anthropic's Claude may be joining the trend, answering niche or highly specific queries with references that trace back to the same AI-generated site. Without systematic tracking, though, it's hard to gauge how often these models rely on Grokipedia versus more established sources, and the lack of hard data leaves researchers and users guessing about the reliability of the answers they receive.
That uncertainty frames the reporting that follows.
None of the firms The Verge spoke to track citations for Anthropic's Claude, though several anecdotal reports on social media suggest the chatbot is also citing Grokipedia as a source. In many cases, AI tools appear to be citing Grokipedia to answer niche, obscure, or highly specific factual questions, as The Guardian reported late last week. Jim Yu, CEO of analytics firm BrightEdge, told The Verge that ChatGPT and AI Overviews use Grokipedia for largely "non-sensitive queries" like encyclopedic lookups and definitions, though differences are emerging in how much authority they afford it.
For AI Overviews, Grokipedia tends not to stand alone, Yu said, and "typically appears alongside several other sources" as "a supplementary reference rather than a primary source." When ChatGPT uses Grokipedia as a source, however, it gives it much more authority, Yu said, "often featuring it as one of the first sources cited for a query." Even for relatively mundane uses, experts warn using Grokipedia as a source risks spreading disinformation and promoting partisan talking points. Unlike Wikipedia, which is edited by humans in a transparent process, Grokipedia is produced by xAI's chatbot Grok.
The pattern is clear: multiple chatbots are pulling from Grokipedia. ChatGPT, Google’s Gemini, AI Mode, AI Overviews, Perplexity and even Microsoft‑branded tools have begun to surface citations to Elon Musk’s AI‑generated encyclopedia. Short, niche queries often trigger those references, suggesting the source is being used for obscure facts.
For Claude, however, the evidence remains anecdotal: none of the firms The Verge spoke with track its citations, leaving a gap in understanding how frequently the chatbot relies on Grokipedia. Early reports nonetheless suggest the practice is on the rise, heightening worries about accuracy and the potential spread of misinformation, especially as Musk's project aims to reshape reality in his image.
Whether the citations improve answer quality or merely amplify a single perspective remains uncertain. As the ecosystem of AI assistants expands, the need for transparent sourcing and independent verification becomes increasingly apparent, even if the current picture is still incomplete.
Further Reading
- ChatGPT is pulling answers from Elon Musk's Grokipedia - TechCrunch
- Elon Musk's Grokipedia is getting cited by OpenAI's ChatGPT - Teslarati
- Elon Musk's 'Grokipedia' cites Wikipedia as a source, even as he criticizes it - Fortune
- Could Elon Musk's Grokipedia Mean Trouble for Wikipedia? - Northeastern University News
Common Questions Answered
How is ChatGPT using Elon Musk's Grokipedia as a source?
[The Guardian](https://www.theguardian.com/technology/2026/jan/24/latest-chatgpt-model-uses-elon-musks-grokipedia-as-source-tests-reveal) reported that GPT-5.2 cited Grokipedia nine times in response to more than a dozen different questions. These citations often appeared when querying obscure topics, such as political structures in Iran or biographical details about specific individuals.
What makes Grokipedia different from Wikipedia?
[The Verge](https://www.theverge.com/news/807686/elon-musk-grokipedia-launch-wikipedia-xai-copied) revealed that, unlike Wikipedia, Grokipedia does not allow direct human editing. Instead, an AI model writes content and responds to requested changes, and some of its pages are even directly adapted from Wikipedia content.
Are AI models working to reduce citation hallucinations?
[Nature](https://www.nature.com/articles/d41586-025-02853-8) reported that OpenAI claims to have reduced the frequency of fake citations and 'hallucinations' in GPT-5. Companies like Anthropic are also developing new APIs and techniques to improve citation accuracy and reduce misinformation in AI-generated responses.