Image: An analyst looks at a phone showing Gemini AI images with a SynthID watermark, while a laptop runs the detection tool.

Gemini app watermarks 20 billion AI images with SynthID, tests Detector


When you open Google’s Gemini app, you’ll notice something new: every picture it spits out now carries a hidden signature. Google calls it SynthID, a kind of digital fingerprint that is designed to survive even after the image gets compressed, resized or reposted on sites like Twitter. You can’t see it with your eyes, but a detection tool can read the signal, which is embedded in the image’s pixels rather than in the file’s metadata.
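To make that robustness claim concrete, here is a minimal sketch, assuming a hypothetical Gemini output saved as gemini_output.png, that produces the kinds of transformed copies the watermark is said to survive. There is no public SynthID detection library, so each variant would have to be checked by uploading it to the Gemini app or the Detector portal.

```python
# Minimal sketch: produce transformed copies of an AI-generated image to
# check (manually, via the Gemini app or Detector portal) whether the
# SynthID watermark survives. The input filename is hypothetical.
from PIL import Image

original = Image.open("gemini_output.png")

# Heavy JPEG recompression, the kind a social site might apply.
original.convert("RGB").save("variant_jpeg_q40.jpg", quality=40)

# Downscaling to half size, as a reposting pipeline might.
w, h = original.size
original.resize((w // 2, h // 2)).save("variant_half_size.png")

# Rebuild the image from raw pixels, discarding all file metadata.
# A metadata-based tag would vanish here; SynthID's signal lives in
# the pixels themselves, so it should persist.
Image.frombytes(original.mode, original.size, original.tobytes()).save(
    "variant_no_metadata.png"
)
```

If all three variants still come back positive when checked, that is the behavior Google is describing.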

Since the rollout, Google says more than 20 billion pieces of AI-generated content have been marked this way, and the company has opened a verification portal, the SynthID Detector, though it’s still in a limited beta. A few newsrooms and freelance reporters have started playing with the detector, trying to figure out whether a given picture really came from Gemini’s models. The goal appears to be giving journalists a quick way to check authenticity before they hit publish.

So, if you ever stumble on a picture and wonder if Gemini made it, you can upload the file to the Gemini app and ask; once the Detector portal opens up beyond its beta, it should give you the same answer in seconds.

Google’s announcement puts it this way: "Since then, over 20 billion AI-generated pieces of content have been watermarked using SynthID, and we have been testing our SynthID Detector, a verification portal, with journalists and media professionals."

How it works

If you see an image and want to confirm it has been made by Google AI, upload it to the Gemini app and ask a question such as: "Was this created with Google AI?" or "Is this AI-generated?" Gemini will check for the SynthID watermark and use its own reasoning to return a response that gives you more context about the content you encounter online.
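For readers who would rather script that check than type it into the app, here is a rough sketch using Google’s google-generativeai Python SDK. Note the assumptions: the article only describes the consumer Gemini app, so whether the API route performs the same SynthID lookup is not confirmed, and the model name and filename below are placeholders.

```python
# Rough sketch: pose the article's verification question programmatically.
# Whether the API performs the same SynthID check as the Gemini app is an
# assumption; the model name and filename are placeholders.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential

model = genai.GenerativeModel("gemini-1.5-flash")  # example model name
img = Image.open("downloaded_image.png")           # image you want to verify

response = model.generate_content(
    ["Was this created with Google AI?", img]
)
print(response.text)  # Gemini's answer, plus whatever context it offers
```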


Google says the Gemini app now slaps a SynthID watermark on any AI-generated picture, and a companion Detector portal lets you drop an image in to see where it came from. Since the feature launched, the company reports over 20 billion pieces of content have been marked, and it has been testing the Detector with a handful of journalists and other media folks. The idea is simple enough: you spot a photo, you upload it, and the tool tells you if Google’s AI made or edited it.

What the announcement skips, though, is any hard data on how often the Detector gets it right, or what happens if someone tweaks the image after the watermark is added. It’s also unclear whether it can spot work from other generators that don’t use the SynthID tag. This move fits Google’s broader effort to add more context to what we see online, but its real impact will probably hinge on how widely people actually use it and how reliably it separates genuine from doctored visuals.

Until someone publishes an independent audit, we can’t say for sure how effective the verification really is.

Common Questions Answered

What is SynthID and how does it embed a watermark in images created by the Gemini app?

SynthID is a hidden digital fingerprint that Google’s Gemini app adds to every AI‑generated image. The watermark is invisible to the eye, survives compression, resizing, and reposting, and can be read by the SynthID Detector tool.

How many pieces of AI‑generated content have been watermarked with SynthID since the feature launched?

Google reports that more than 20 billion AI‑generated images and other content have been stamped with the SynthID watermark. This figure reflects the volume of AI‑generated media Google has watermarked since the rollout.

What steps must a user take to verify whether an image was created by Google AI using the SynthID Detector?

The user uploads the image to the Gemini app and asks a question such as “Was this created with Google AI?” or “Is this AI‑generated?”. The app then scans the image for the hidden SynthID watermark and returns a verification result.

Who has Google been testing the SynthID Detector with, and why is this important for media professionals?

Google has been trialing the Detector with journalists and other media professionals to gauge how reliably it attributes AI‑generated visuals. This testing helps build trust in the verification process and supports responsible reporting on synthetic media.