Google and OpenAI chatbots used to strip women to bikinis in deepfakes
Google’s and OpenAI’s chatbots have become tools for a disturbing kind of image manipulation. While the companies tout their models’ ability to edit photos, a growing number of users are exploiting that capability to turn ordinary pictures of women into bikini-clad or even nude versions without consent. The same generative technology also powers dedicated “nudify” services, where anyone can upload a portrait and request a stripped-down result.
Reports indicate that the practice isn’t niche: millions of people have visited such sites, turning a technical novelty into a conduit for harassment. The ease of generating realistic but fabricated imagery blurs the line between creative experimentation and abuse, and the proliferation of these tools has directly fed a wave of nonconsensual deepfake distribution targeting women.
As generative AI tools that make it easy to create realistic but false images continue to proliferate, users of those tools have continued to harass women with nonconsensual deepfake imagery. Millions have visited harmful “nudify” websites, which are designed to let users upload real photos of people and request that they be undressed using generative AI. With xAI’s Grok as a notable exception, most mainstream chatbots don’t allow the generation of NSFW images.
These chatbots, including Google’s Gemini and OpenAI’s ChatGPT, are also fitted with guardrails that attempt to block harmful generations. In November, Google released Nano Banana Pro, a new image model that excels at tweaking existing photos and generating hyperrealistic images of people.
Despite those guardrails, users have turned Google’s and OpenAI’s chatbots into tools for producing bikini-clad deepfakes from fully clothed photographs without the subjects’ consent. A now-deleted Reddit thread titled “gemini nsfw image generation is so easy” detailed how the process works and even offered step-by-step advice for stripping clothing off images.
The cited reports don’t describe a unified response from either company, and the prevalence of such misuse raises questions about the safeguards built into these generative AI services. It remains unclear whether Google and OpenAI will adjust their models, tighten access, or enforce stricter content policies to curb the behavior. What is evident is that the technology’s ease of use is being leveraged to create realistic yet false images that harass women, a development that warrants close scrutiny and responsible oversight.
Further Reading
- Cyber Threats to Canada's Democratic Process: 2025 Update - Canadian Centre for Cyber Security
- Beyond disinformation and deepfakes - Ada Lovelace Institute
- AI 'bikini interview' videos flood internet - The Economic Times
Common Questions Answered
How are Google’s and OpenAI’s chatbots being misused to create bikini‑clad deepfakes of women?
Users exploit the image-editing capabilities of Google’s Gemini and OpenAI’s models by uploading fully clothed photos and requesting AI-generated versions in which the subjects appear in bikinis or nude. This mirrors the nonconsensual manipulation offered by dedicated “nudify” services, which automate the stripping process without the subjects’ permission.
What role do “nudify” websites play in the spread of non‑consensual deepfake imagery?
“Nudify” platforms let anyone upload a real portrait and receive a generated image with the subject’s clothing removed, effectively turning ordinary photos into sexualized deepfakes. The article notes that millions of people have visited these sites, demonstrating the scale of demand for such abusive content.
Why is xAI’s Grok mentioned as an exception among mainstream chatbots regarding NSFW image generation?
Unlike most mainstream chatbots, which typically block or filter NSFW outputs, xAI’s Grok does not enforce the same restrictions, making it a notable outlier when it comes to generating explicit or nude images. The distinction highlights how content policies vary across AI providers.
What evidence does the article provide about the community discussion of Gemini’s NSFW capabilities?
The article references a now‑deleted Reddit thread titled “gemini nsfw image generation is so easy,” which detailed step‑by‑step instructions for using Google’s Gemini to strip clothing from images. This thread illustrates how users share methods to bypass safeguards and create non‑consensual deepfakes.