
Apple Warns Grok, X Over Dangerous AI Deepfake Risks



When users discovered that AI chat services were generating non‑consensual sexual imagery, the fallout spilled onto Apple’s tightly controlled marketplace. Both Grok and X faced a surge of complaints after news outlets highlighted the deepfake scandal, prompting the tech giant to intervene. Apple’s compliance team reached out to the developers, warning that the content violated the App Store’s standards and that continued violations could trigger a takedown.

The move underscores the company’s growing willingness to police AI‑driven products that cross ethical lines, especially as legislators scramble to understand the technology’s societal impact. In corresponding with US senators, Apple signaled that the issue has risen beyond a simple policy breach to a matter of public concern. The correspondence, obtained by NBC News, lays out the expectations placed on the two platforms and hints at the next steps should they fail to act.

Apple quietly asked developers to fix the problem or face removal from the App Store. In a letter obtained by NBC News, Apple told US senators it "contacted the teams behind both X and Grok after it received complaints and saw news coverage of the scandal" and demanded that the developers "create a plan to improve content moderation." At the time, xAI's chatbot Grok was freely accessible on X and as a standalone app, with flimsy safeguards that allowed users to easily generate and share sexualized deepfakes and "undress" images of real people, disproportionately women and some of them apparently minors. As we reported at the time, these were flagrant and unambiguous violations of App Store guidelines that Apple often applies with an iron fist.

Apple, which profits from having apps like X and Grok on its digital store, has not spoken publicly about the issue or its behind-the-scenes intervention. Google, through its Google Play app store, profits similarly and has also not commented publicly on the matter. Apple said it reviewed proposed changes to the X and Grok apps.

While the company concluded X had "substantially resolved its violations," Grok "remained out of compliance." Apple said it warned the developer that "additional changes to remedy the violation would be required, or the app could be removed from the App Store." Only after further back and forth did Apple determine Grok had "substantially improved" and approve its submission. Throughout this covert exchange, Grok and X appear to have remained live on the App Store, a drawn-out process that may help explain the confusing, haphazard rollout of moderation changes announced in real time. This included limiting Grok on X to paying subscribers and attempting to stop Grok from undressing women.

Apple’s letter to the Grok team arrived in January, warning that continued distribution of non‑consensual sexual deepfakes could trigger removal from the App Store. Whether the company intended to make an example of the AI‑driven app or simply to enforce its existing policies, the message mirrored the one sent to X, which cited recent complaints and media coverage of the “undressing” scandal.

Critics have already labeled the intervention timid, arguing that a behind‑the‑scenes warning does little to protect users, and the exact remediation steps were never disclosed. Yet the threat of removal remains a concrete lever. Though Apple ultimately approved Grok’s resubmission, the episode underscores how the company’s gatekeeping role can intersect with emerging AI misuse, and how much of that enforcement happens out of public view.

Common Questions Answered

What action did Apple take against Grok and X regarding sexual deepfakes?

Apple contacted the developers of Grok and X, warning them about non-consensual sexual deepfakes generated on their platforms. The tech giant demanded that both companies create a plan to improve content moderation or face potential removal from the App Store.

How did Apple become aware of the sexual deepfake issues on Grok and X?

Apple learned about the sexual deepfake problems through user complaints and media coverage highlighting the scandal. The company's compliance team then reached out to the developers, and the intervention came to light through a letter Apple sent to US senators, later obtained by NBC News.

What were the potential consequences for Grok and X if they did not address the deepfake content?

If Grok and X failed to improve their content moderation, Apple threatened to remove their apps from the App Store. This action would effectively block the apps from distribution to iOS users, representing a significant potential penalty for the platforms.