NVIDIA AI News - Page 6 of 10
200 articles • Page 6 of 10
Meta secures millions of Nvidia AI chips as Nvidia begins selling own AI CPUs
Why does this matter now? Nvidia has just begun offering its own AI‑focused CPUs to external customers—a move it’s never made before.
OpenClaw and NVIDIA NemoClaw Enable Secure Local AI Agent via Ollama
OpenClaw teams with NVIDIA’s NemoClaw to give developers a way to run an AI assistant entirely on‑premises, without exposing model weights or prompts...
Meta plans facial recognition for AI smart glasses, amid privacy concerns
Meta’s push to turn its upcoming AI‑powered glasses into a social‑aware device has resurfaced at a time when privacy watchdogs are busy elsewhere.
NVIDIA KVPress Enables Long‑Context LLM Inference with KV Cache Compression
Why does a tiny JSON object matter in a world where LLMs swallow gigabytes of context?
Deepagents v0.5.0 Alpha adds async subagents, multi‑modal support, OS skill set
The March 2026 edition of the LangChain Newsletter flagged a notable shift in the open‑source AI arena.
OpenAI CEO Sam Altman announces Pentagon deal with ambiguous safety principles
Why should corporate leaders pause when a major AI firm signs a defense contract? The answer lies in the fine print.
OpenAI raises round larger than most tech firms, steps into Anthropic Pentagon void
OpenAI’s latest financing round has stunned observers: the headline figure eclipses the market caps of many established tech players.
NVIDIA Co-Design Boosts Sarvam AI Inference, Cuts TTFT Below One Second
NVIDIA’s extreme hardware‑software co‑design has turned Sarvam AI’s sovereign models into a practical inference engine, shaving the...
Nvidia invests USD 4 B in photonics, taps Lumentum and Coherent optics for AI GPUs
Nvidia is pouring a hefty $4 billion into photonics, a move that signals more than just another line‑item on its budget.
AI models score far above clinical thresholds on 20+ psychiatric tests
The boundaries between artificial intelligence and human psychological assessment are blurring in surprising ways.
Nvidia's Nemotron 3 ranks among top downloadable models, benchmarks show
In the fast-moving world of artificial intelligence, Nvidia is making another bold move with its latest open-source language models.
Nvidia unveils Vera Rubin platform for OpenAI, Anthropic, Meta; adds NemoClaw stack
Why does Nvidia’s latest rollout matter to anyone building AI today? The company just announced Vera Rubin, a seven‑chip platform that brings...
xAI sued for AI CSAM of three girls; Grok made ~3 M sexual images, 23 K flagged
Elon Musk’s xAI is under fire after a lawsuit alleged that its chatbot, Grok, turned authentic photos of three young girls into AI‑generated child...
Batch Mode VC-6 and NVIDIA Nsight Speed Up Vision AI Pipelines
Batch Mode VC‑6 promises to squeeze more throughput out of vision‑AI workloads, but raw speed isn’t enough without a clear view of where time is...
Nvidia shows DLSS 5 upgrades in Resident Evil Requiem, Starfield, Hogwarts Legacy
Nvidia’s latest showcase has put DLSS 5 front and center, running the upscaling tech through three high‑profile releases—Resident Evil Requiem,...
Musk overhauls xAI as Nvidia unveils Nemotron 3 Super, a 120B reasoning model
Elon Musk’s recent reorganization of xAI lands amid a wave of high‑profile AI releases.
Nvidia unveils DGX Station supercomputer for trillion‑parameter AI at GTC 2026
At GTC 2026 Nvidia rolled out a suite of announcements that stretched from satellite‑grade processors to office‑friendly workstations.
Nvidia BlueField‑4 STX adds context memory, offers platform for storage partners
Nvidia’s latest BlueField‑4 STX chip adds a “context memory” layer aimed at narrowing the throughput gap that agentic AI workloads create in storage...
Tutorial shows CUDA run of PrismML Bonsai 1‑Bit LLM, Mini‑RAG demo and benchmarks
Running a 1‑bit language model on a consumer‑grade GPU used to feel like a niche experiment.
LangChain launches enterprise AI agent platform with NVIDIA support
For months, LangChain has been quietly building the tools that let developers stitch together large‑language‑model workflows.