OpenAI reports sharp rise in child exploitation flags
OpenAI’s latest transparency report shows a steep rise in child‑exploitation flags compared with previous years, a trend that has drawn attention from regulators and advocacy groups alike. While the company has long tracked the volume of reports it receives, the new data also breaks down how many individual pieces of content each report covers, offering a clearer view of the scale of the problem. Critics argue that without that level of detail, it’s hard to gauge whether moderation efforts are keeping pace with the influx of harmful material.
Meanwhile, industry observers note that many AI providers still publish only headline numbers, leaving gaps in public understanding. The open question is whether OpenAI is scaling its review infrastructure quickly enough to handle the surge.
Some platforms, including OpenAI, disclose both the number of reports and the total pieces of content those reports cover, offering a more complete picture. OpenAI spokesperson Gaby Raila said in a statement that the company made investments toward the end of 2024 "to increase [its] capacity to review and action reports in order to keep pace with current and future user growth." Raila also said the time frame corresponds to "the introduction of more product surfaces that allowed image uploads and the growing popularity of our products, which contributed to the increase in reports." In August, Nick Turley, vice president and head of ChatGPT, announced that the app had four times as many weekly active users as it did the year before. During the first half of 2025, the number of CyberTipline reports OpenAI sent was roughly the same as the number of pieces of content those reports covered: 75,027 reports about 74,559 pieces of content.
OpenAI disclosed an 80‑fold increase in child‑exploitation incident reports to the National Center for Missing & Exploited Children (NCMEC) during the first half of 2025 compared with the same period in 2024. The company said it had invested at the end of 2024 to boost its capacity to review and act on such content. Because US law requires companies to forward apparent child sexual abuse material (CSAM) to NCMEC's CyberTipline, the spike may simply reflect more thorough internal screening.
Yet the data do not reveal whether the rise stems from a larger volume of abusive material appearing on the platform or from improved detection mechanisms. Raila noted the company’s effort to publish both the number of reports and the total pieces of content involved, aiming for transparency.
Still, without context on baseline traffic or false‑positive rates, the significance of the increase is uncertain. Observers will likely watch future disclosures to gauge whether the trend persists or stabilizes.
Common Questions Answered
By how much did OpenAI's child‑exploitation incident reports increase in the first half of 2025 compared with the same period in 2024?
OpenAI disclosed an 80‑fold increase in child‑exploitation incident reports to the National Center for Missing & Exploited Children during the first half of 2025 versus the same period in 2024. The figure marks a dramatic escalation in flagged content within a short timeframe.
What investment did OpenAI make at the end of 2024 to handle the rise in child‑exploitation flags?
According to spokesperson Gaby Raila, OpenAI invested in late 2024 to increase its capacity to review and act on reports, aiming to keep pace with growing user numbers and new product surfaces. The investment focused on expanding moderation resources and improving internal screening processes.
How does OpenAI’s transparency report differ from previous reporting on child‑exploitation content?
The latest transparency report breaks down not only the number of reports but also the total pieces of content each report covers, providing a clearer picture of the problem’s scale. Earlier reports only tracked the volume of reports without detailing the content count per report.
Why might the spike in child‑exploitation reports not necessarily indicate more abusive content on OpenAI’s platforms?
US law requires companies to forward apparent child sexual abuse material (CSAM) to the CyberTipline, so the increase could reflect more thorough internal screening rather than a true rise in abusive material. The data do not reveal whether the higher numbers stem from increased abuse or improved detection mechanisms.