A woman in a studio wears a white shirt while a laptop screen next to her shows the same pose with the shirt removed.

Google AI image tool strips shirt in test of studio-quality claims


When Google rolled out its newest generative-image tool, the hype was hard to miss. The company bills it as a Gemini-powered editor that can overhaul a photo, drop in objects, and even rewrite text without a glitch. In a space packed with dozens of services promising “studio-quality” output, I wondered whether the claims hold up.

I kept it simple: a selfie in front of the Brooklyn Bridge, right outside The Verge's New York office. No fancy lighting, no props, just a casual pose. The test is whether Gemini can turn that plain shot into something that looks studio-shot while keeping any added text crisp.

It's unclear whether the AI can keep edges clean when it adds new elements, or whether subtle shadows will look natural. If it pulls that off, perhaps “flawless text rendering” and “creative edits” are more than buzz. If not, it's another glossy promise that falls short.

We'll see how it handles the details.

Google makes some bold claims, promising "studio-quality designs," "flawless text rendering," and a host of nifty creative edits. To test these, I uploaded a simple photo of myself near The Verge's office in New York with the Brooklyn Bridge in the background. I asked Gemini to change the lighting from day to night, and it did a pretty good job.

It even handled details that often trip up image generators, like having cars go in the right direction. I asked Gemini to recreate the shot as if it were taken from a higher angle on the right and it did. Google also says Nano Banana Pro can create infographics and diagrams to help visualize real-time information like weather or sports.


Did the tool deliver on its promises? The Nano Banana Pro demo showed a glaring mismatch between claim and output. From that simple selfie in front of The Verge’s New York office, Brooklyn Bridge in the background, Gemini 3 stripped the subject’s shirt without being asked. A model that adds bare skin on its own, alongside the lighting change it was actually instructed to make, raises questions about its editorial control.

Google markets the service as capable of producing “studio-quality designs” and “flawless text rendering.” Yet the test revealed at least one creative edit that missed the mark entirely. It’s unclear whether the issue stemmed from prompt interpretation, a broader limitation in the Gemini 3 engine, or an isolated glitch.

The product is billed as a tool for professionals and positioned as an upgrade to Google’s earlier image generator. If the system cannot reliably honor basic instructions, its suitability for high-stakes design work seems doubtful. I think we’ll need more testing to gauge consistency across the range of advertised features.


Common Questions Answered

What unintended edit did Gemini 3 make to the selfie uploaded by The Verge reporter?

Gemini 3 stripped the subject’s shirt and added exposed skin, even though the user only asked to change the lighting from day to night. This unexpected alteration occurred without any request to modify clothing, highlighting a flaw in the model’s editorial control.

How effectively did Gemini change the lighting from day to night in the test image?

The tool did a pretty good job converting the scene to night, and it handled details that often trip up image generators, such as cars driving in the right direction. When the prompt is clear, Gemini manages lighting adjustments reasonably well.

What specific marketing claims does Google make about its new Gemini‑powered editor?

Google markets the service as capable of delivering "studio‑quality designs" and "flawless text rendering," while also promising the ability to overhaul photos, add objects, and rewrite text without leaving any artifacts. These claims position the editor as a high‑end generative imaging solution.

What does the Nano Banana Pro demo reveal about the reliability of Google’s editorial control in the tool?

The demo exposed a glaring mismatch between Google’s promises and actual output, as the model altered the subject’s clothing while carrying out the requested lighting change. This unexpected behavior raises concerns about the consistency and safety of the tool’s editorial controls.