Gemini app watermarks 20 billion AI images with SynthID, tests Detector
Google’s Gemini app now embeds a hidden signature in every image it creates, a move aimed at curbing the spread of unmarked AI‑generated visuals. The feature, called SynthID, works like a digital fingerprint that survives compression, resizing and even reposting on social platforms. The watermark is invisible to the naked eye, and because it is embedded directly in the image’s pixels rather than in its metadata, a dedicated detection tool can still read it after the file has been re‑saved or its metadata stripped.
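To make the robustness claim concrete, the sketch below applies the kinds of edits the article mentions: lossy recompression, downscaling, and the metadata loss that happens incidentally when a file is re‑saved. This is a minimal illustration using Pillow, not Google’s tooling; it shows why a metadata‑based tag would not survive these steps, while a pixel‑level watermark is at least designed to. The file names are placeholders.

```python
# pip install Pillow
from io import BytesIO
from PIL import Image

def apply_common_transforms(path: str, out_path: str = "transformed.png") -> None:
    """Apply edits of the kind SynthID is said to survive: JPEG
    recompression, downscaling, and incidental metadata loss."""
    img = Image.open(path)

    # 1. Lossy recompression, as happens when a platform re-encodes uploads.
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=70)
    img = Image.open(BytesIO(buf.getvalue()))

    # 2. Downscaling, e.g. to a thumbnail or feed-sized image.
    img = img.resize((max(1, img.width // 2), max(1, img.height // 2)))

    # 3. Re-saving with Pillow drops the original EXIF block by default,
    #    so any *metadata-based* provenance tag is already gone here.
    #    A pixel-level watermark, by contrast, travels with the pixels.
    img.save(out_path)

apply_common_transforms("gemini_image.png")  # placeholder file name
```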
Since the system’s launch, the company reports that more than 20 billion pieces of content have been stamped with this identifier, and a separate verification portal is already in limited use. Early testers include journalists and media professionals who are experimenting with the detector to see whether a picture truly originated from Google’s models. The idea is to give them a straightforward way to confirm authenticity before publishing.
If you come across a picture and want to know whether it was produced by Google’s AI, you can now check it directly in the Gemini app.
Since SynthID launched, over 20 billion AI-generated pieces of content have been watermarked with it, and Google has been testing the SynthID Detector, a verification portal, with journalists and media professionals.

How it works

If you see an image and want to confirm it was made by Google AI, upload it to the Gemini app and ask a question such as: "Was this created with Google AI?" or "Is this AI-generated?" Gemini will check for the SynthID watermark and use its own reasoning to return a response that gives you more context about the content you encounter online.
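The flow above describes the consumer Gemini app. As a rough sketch, a similar question can be posed programmatically with Google’s `google-genai` Python SDK; whether the API‑served model performs the same SynthID check as the app is an assumption here, not something the announcement states.

```python
# pip install google-genai
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # assumes a Gemini API key

def ask_gemini(image_path: str) -> str:
    """Send an image plus the suggested verification question to Gemini."""
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=[
            types.Part.from_bytes(data=image_bytes, mime_type="image/png"),
            "Was this created with Google AI?",  # one of the questions the article suggests
        ],
    )
    # Whether the API route runs the same watermark check as the consumer
    # app is an assumption; treat the answer as context, not proof.
    return response.text

print(ask_gemini("downloaded_image.png"))  # placeholder file name
```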
Can users trust the new verification feature? Google says the Gemini app now tags AI‑generated images with a SynthID watermark, and the SynthID Detector, currently in testing, lets users upload a picture to check its origin. Since the rollout, more than 20 billion pieces of content have been watermarked, and the company has been trialing the Detector with journalists and media professionals.
The process is straightforward: you see an image, you upload it, and the system reports whether Google’s AI created or edited it. Yet the article does not explain how accurate the detector is, nor how it handles images that have been altered after the watermark was applied. It is also unclear whether the tool can identify content generated by other AI systems that lack the SynthID tag.
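One informal way to probe the altered‑image question is to chain the two sketches above: take an image known to come from Gemini, apply the edits, and ask again. This says nothing about the detector’s real accuracy; it only shows what such a spot check would look like.

```python
# Illustrative robustness spot check; assumes apply_common_transforms()
# and ask_gemini() from the earlier sketches are in scope.
before = ask_gemini("gemini_original.png")      # verdict on the untouched image
apply_common_transforms("gemini_original.png")  # writes transformed.png
after = ask_gemini("transformed.png")           # verdict after recompress/resize

print("before edits:", before)
print("after edits: ", after)
```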
The initiative reflects Google’s broader push to embed provenance signals into online media, but its real‑world impact will depend on adoption and on how reliably the detector identifies watermarked content after real‑world edits. Until independent evaluations are published, the effectiveness of the verification remains uncertain.
Further Reading
- SynthID - Google DeepMind
- Gemini 2.5 Flash Image (Nano Banana) - Google AI Studio
Common Questions Answered
What is SynthID and how does it embed a watermark in images created by the Gemini app?
SynthID is a hidden digital fingerprint that Google’s Gemini app adds to every AI‑generated image. The watermark is invisible to the eye, survives compression, resizing, and reposting, and can be read by the SynthID Detector tool.
How many pieces of AI‑generated content have been watermarked with SynthID since the feature launched?
Google reports that more than 20 billion AI‑generated images and other pieces of content have been stamped with the SynthID watermark. This figure covers content watermarked since SynthID launched.
What steps must a user take to verify whether an image was created by Google AI using the SynthID Detector?
The user uploads the image to the Gemini app and asks a question such as “Was this created with Google AI?” or “Is this AI‑generated?”. The app then scans the image for the hidden SynthID watermark and returns a verification result.
Who has Google been testing the SynthID Detector with, and why is this important for media professionals?
Google has been trialing the Detector with journalists and other media professionals to ensure reliable attribution of AI‑generated visuals. This testing helps build trust in the verification process and supports responsible reporting on synthetic media.