
AI Chatbots Misused for Nonconsensual Deepfake Bikini Images



The dark side of generative AI is emerging faster than tech companies can contain it. Researchers have documented a disturbing trend in which popular chatbots from Google and OpenAI are manipulated to generate nonconsensual intimate images of women.

These AI tools, originally designed for creative and productive tasks, are now being weaponized by bad actors to create deeply invasive deepfake content. The ease of use makes these platforms particularly dangerous, allowing users to quickly transform personal photos into sexualized imagery without consent.

What began as a technological breakthrough in image generation has quickly devolved into a serious privacy and harassment mechanism. Predatory websites are exploiting these AI capabilities, providing platforms where individuals can upload real photographs and request graphic, manipulated versions.

The implications are profound. As generative AI becomes more accessible, the potential for digital abuse grows, raising urgent questions about technological ethics and personal protection in an increasingly algorithmic world.

As generative AI tools that make it easy to create realistic but false images continue to proliferate, users of those tools have continued to harass women with nonconsensual deepfake imagery. Millions have visited harmful "nudify" websites, designed to let users upload real photos of people and request that they be undressed using generative AI. With xAI's Grok as a notable exception, mainstream chatbots generally do not allow NSFW images in their outputs.

These bots, including Google's Gemini and OpenAI's ChatGPT, are also fitted with guardrails that attempt to block harmful generations. In November, Google released Nano Banana Pro, a new imaging model that excels at tweaking existing photos and generating hyperrealistic images of people.

The rise of generative AI tools has exposed a disturbing pattern of digital harassment targeting women through nonconsensual deepfake imagery. Millions of users are accessing websites built specifically to strip clothing from women's photos without consent, while bad actors weaponize chatbots from major tech companies like Google and OpenAI toward the same end.

While most mainstream AI platforms attempt to block explicit image generation, the ease of manipulating these tools reveals significant ethical vulnerabilities. The proliferation of "nudify" websites demonstrates how quickly technology can be misused to violate personal boundaries.

These platforms represent more than a technical challenge: they are a direct assault on individual privacy and dignity. Women remain disproportionately targeted, with AI becoming a new vector for image-based sexual abuse.

The situation highlights an urgent need for strong safeguards and accountability in AI development. As generative tools become more sophisticated, preventing their misuse will require proactive, comprehensive strategies from tech companies and policymakers alike.

Still, the current landscape suggests these harmful practices will likely continue, exploiting technological gaps and human vulnerability.


Common Questions Answered

How are chatbots from Google and OpenAI being weaponized to create nonconsensual deepfake images?

Bad actors are manipulating AI chatbots to generate intimate and invasive deepfake images of women without their consent. These tools, originally designed for creative and productive tasks, are being exploited to create realistic but false imagery that can cause significant harm and harassment.

What are 'nudify' websites and how prevalent are they?

Nudify websites are online platforms that allow users to upload real photos of people and use generative AI to digitally remove their clothing without consent. Millions of users have visited these harmful sites, demonstrating a widespread and deeply troubling trend of digital harassment targeting women.

Why are mainstream AI chatbots vulnerable to creating nonconsensual deepfake content?

Although mainstream AI chatbots are fitted with guardrails, those safeguards cannot completely prevent the generation of explicit or harmful imagery, leaving significant vulnerabilities in their systems. The ease with which these tools can be manipulated around their restrictions makes them particularly dangerous for misuse against individuals, especially women.