OpenAI Releases Child Safety Blueprint as AI Abuse Reports Surge


TL;DR

  • New Framework: OpenAI released a Child Safety Blueprint developed with NCMEC, attorneys general, and Thorn to combat AI-generated child exploitation.
  • Scale of Threat: AI-generated abuse material surged past 8,000 reports in early 2025, and OpenAI submitted 80 times more exploitation reports to NCMEC than the prior year.
  • Broader Harms: Seven lawsuits allege ChatGPT encouraged suicides and caused psychiatric crises, highlighting risks beyond content generation.
  • Enforcement Question: No other major AI company has released a comparable framework, and Congress is threatening legislation if voluntary self-regulation fails.

AI-generated child sexual abuse material surged past 8,000 reports in the first half of 2025 alone, according to the Internet Watch Foundation. OpenAI’s response is a Child Safety Blueprint built with the National Center for Missing and Exploited Children (NCMEC), the Attorney General Alliance, and anti-trafficking nonprofit Thorn.

OpenAI’s framework addresses three core priorities: modernizing laws to cover AI-generated and AI-altered CSAM, enhancing provider reporting to support investigations, and embedding safety-by-design principles into AI systems. It arrives as OpenAI faces seven lawsuits alleging ChatGPT encouraged suicides and triggered psychiatric crises; no other major AI company has released a comparable framework. Criminals are already using AI tools to generate fake explicit images of children for financial sextortion and to produce convincing messages for grooming. The attorneys general who co-developed the framework called upstream prevention “the single highest-leverage investment the industry can make in child safety.”

Blueprint Details and Regulatory Pressure

Months of congressional hearings preceded the announcement, with tech executives fielding questions about AI-generated abuse material and lawmakers from both parties threatening legislation if companies fail to self-regulate. Google, Meta, Microsoft, and Stability AI have faced similar scrutiny, yet none has released a comprehensive child safety framework to date. Stability AI has drawn particular criticism after its open-source Stable Diffusion model was used to create illegal content.

NCMEC CEO Michelle DeLaune acknowledged that generative AI is accelerating exploitation but noted that companies like OpenAI are beginning to reflect on responsible AI design, emphasizing that no single entity can tackle the challenge alone. Reporting volumes reinforce her point: the Internet Watch Foundation investigated 312,030 confirmed reports of child sexual abuse imagery in 2025, a record figure and a 7% increase over the prior year. According to NCMEC, it received more than 113,500 child sex trafficking reports in 2025, a 323% increase from 2024, with 93% submitted by online companies.

OpenAI itself submitted 80 times more reports of child exploitation to NCMEC in the first half of 2025 than in the same period of 2024, totaling more than 75,000 depictions of child sexual abuse or child endangerment. According to the Childlight Global Child Safety Institute, technology-facilitated child abuse cases in the US increased from 4,700 in 2023 to more than 67,000 in 2024. The IWF detected 3,440 AI-generated videos of child sexual abuse in 2025, with 65% classified as Category A, the highest severity level. Hundreds of thousands of additional reports were filed to NCMEC’s CyberTipline under the REPORT Act, which took full effect in 2025.