Nvidia CEO Jensen Huang says AI stops hallucinating, then hallucinates himself
In a recent CNBC interview, Nvidia founder and CEO Jensen Huang asserted that generative AI is "no longer hallucinating." Why does that matter? Because "hallucination" has become shorthand for a persistent flaw in large language models: outputs that sound plausible but are factually off. The claim sparked immediate pushback from researchers who say the problem is baked into the way these models predict text.
The technology is impressive, but the underlying architecture still produces erroneous statements when its outputs are not checked against external sources. Huang's confidence contrasts sharply with the consensus that hallucinations are a structural issue, not a bug that can be patched away, and that tension between corporate optimism and technical reality is why his remark draws scrutiny.
The episode signals more than market hype; it highlights a gap between headline promises and the day-to-day reliability of AI outputs. Below, Huang's own words illustrate that gap.
Key Points

- Nvidia CEO Jensen Huang claimed in a CNBC interview that generative AI is "no longer hallucinating" -- a statement that is factually incorrect.
- Hallucinations remain a fundamental, structural problem rooted in the probability-based architecture of language models, with no technical breakthrough to support Huang's assertion.
- Solving the hallucination problem would represent a transformative shift for the entire AI industry.
Anyone who thinks AI is in a bubble might feel vindicated by the interview, which was published just as Nvidia's major customers Meta, Amazon, and Google faced stock market pressure over plans to invest additional billions in AI infrastructure.
Did Jensen Huang really believe the problem has vanished? In the CNBC interview he declared generative AI "no longer hallucinating," a claim that is factually incorrect: hallucinations persist as a structural issue tied to the probability-based design of language models.
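To make that structural point concrete, here is a deliberately toy sketch of probability-based next-token prediction. The prompt, vocabulary, and probabilities are invented for illustration and are not drawn from any real model; the point is that the sampler only ranks continuations by likelihood, and nothing in the loop checks whether a sampled claim is true.

```python
import random

# Toy next-token distribution for the prompt "The Eiffel Tower is in" --
# the tokens and probabilities below are invented for illustration only.
next_token_probs = {
    "Paris": 0.62,   # likely and correct
    "France": 0.25,  # likely and correct
    "Rome": 0.09,    # fluent but false
    "Berlin": 0.04,  # fluent but false
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Sample one token in proportion to its probability.

    No step here consults a fact database, so a wrong-but-fluent token
    such as "Rome" is still sampled roughly 9% of the time.
    """
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    completions = [sample_next_token(next_token_probs) for _ in range(10)]
    print("The Eiffel Tower is in ...", completions)
```

Real models operate over vast vocabularies and long contexts, but the same principle applies: likelihood, not verified truth, drives each output token.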
There is no evidence of a technical breakthrough to back the statement, and researchers note that the challenge remains entrenched. Solving the hallucination problem would be transformative for the entire industry, yet nothing suggests such a solution is imminent, which puts the CEO's remark at odds with the current understanding of model behavior.
While Nvidia continues to push forward with ever more powerful hardware, the underlying software limitation is unchanged. Huang offered no data to verify his assertion, and it remains unclear whether any near-term fix is on the horizon; until concrete progress is demonstrated, skepticism about the claim is warranted. The interview itself included no technical details, so the claim lacks corroboration.
Further Reading
- Papers with Code: Latest NLP Research
- Hugging Face Daily Papers
- arXiv cs.CL (Computation and Language)
Common Questions Answered
What did OpenAI claim about citation hallucinations in GPT-5?
[nature.com](https://www.nature.com/articles/d41586-025-02853-8) reports that OpenAI claimed to have reduced the frequency of fake citations in GPT-5. The company specifically noted improvements in reducing 'hallucinations' and 'deceptions' where AI previously claimed to have performed tasks it hadn't actually completed.
Why are AI hallucinations difficult to completely eliminate?
[nature.com](https://www.nature.com/articles/d41586-025-00068-5) explains that large language models are trained to predict tokens from text corpora, which means factual knowledge is implicitly stored in model parameters rather than an explicit fact database. This inherent design makes it challenging to completely prevent hallucinations, especially for less common or more nuanced information.
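As a rough, hypothetical contrast between the two designs (the fact table, keys, and fallback answer below are invented and not tied to any real system): an explicit fact store can refuse to answer when a fact is missing, whereas a purely parametric generator always emits its most probable guess.

```python
# Contrast between an explicit fact store and purely parametric generation.
# The fact table and the fallback guess are invented for illustration only.

FACT_DB = {
    "capital_of_france": "Paris",
    "speed_of_light_m_s": "299792458",
}

def answer_with_fact_db(key: str) -> str:
    """Explicit lookup: either return a stored fact or admit ignorance."""
    return FACT_DB.get(key, "unknown -- no stored fact")

def answer_parametrically(key: str) -> str:
    """Stand-in for a language model: it always produces a fluent guess,
    even when the requested knowledge was never stored anywhere it can check."""
    return "Paris" if "france" in key else "a confident-sounding guess"

if __name__ == "__main__":
    for key in ("capital_of_france", "capital_of_wakanda"):
        print(key, "| db:", answer_with_fact_db(key),
              "| model:", answer_parametrically(key))
```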
What types of hallucinations do large language models typically produce?
[arxiv.org](https://arxiv.org/abs/2510.06265) describes hallucinations as AI-generated content that is fluent and syntactically correct but factually inaccurate or unsupported by external evidence. These hallucinations can range from slightly misremembered facts to completely fabricated references, undermining the reliability of AI systems in domains requiring high factual accuracy.
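One common mitigation for fabricated references is a post-hoc check of generated citations against a trusted source list. The sketch below assumes a hypothetical allow-list of known DOIs and uses placeholder citation strings; it is not tied to any particular tool or to the papers cited above.

```python
# Minimal post-hoc check for fabricated references: compare DOIs that appear
# in generated text against a trusted allow-list. The DOIs below are
# placeholders for illustration, not real citations.
import re

KNOWN_DOIS = {
    "10.1000/real.paper.1",
    "10.1000/real.paper.2",
}

DOI_PATTERN = re.compile(r'10\.\d{4,9}/[^\s"<>]+')

def flag_unverified_dois(generated_text: str) -> list[str]:
    """Return DOIs cited in the text that are not in the trusted set."""
    cited = set(DOI_PATTERN.findall(generated_text))
    return sorted(cited - KNOWN_DOIS)

if __name__ == "__main__":
    sample = ("See 10.1000/real.paper.1 and the entirely made-up "
              "10.9999/fabricated.ref for details.")
    print(flag_unverified_dois(sample))  # -> ['10.9999/fabricated.ref']
```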