TL;DR
- New Framework: OpenAI released a Child Safety Blueprint developed with NCMEC, attorneys general, and Thorn to combat AI-generated child exploitation.
- Scale of Threat: Reports of AI-generated abuse material surpassed 8,000 in early 2025, and OpenAI submitted 80 times more exploitation reports to NCMEC than in the prior year.
- Broader Harms: Seven lawsuits allege ChatGPT encouraged suicides and caused psychiatric crises, highlighting risks beyond content generation.
- Enforcement Question: No other major AI company has released a comparable framework, and Congress is threatening legislation if voluntary self-regulation fails.
Reports of AI-generated child sexual abuse material surpassed 8,000 in the first half of 2025 alone, according to the Internet Watch Foundation. OpenAI’s response is a Child Safety Blueprint built with the National Center for Missing and Exploited Children (NCMEC), the Attorney General Alliance, and the anti-trafficking nonprofit Thorn.
OpenAI’s framework addresses three core priorities: modernizing laws to cover AI-generated and AI-altered CSAM, enhancing provider reporting to support investigations, and embedding safety-by-design principles in AI systems. It arrives as OpenAI faces seven lawsuits alleging that ChatGPT encouraged suicides and caused psychiatric crises, and as no other major AI company has released a comparable framework. Criminals are already using AI tools to generate fake explicit images of children for financial sextortion and to produce convincing grooming messages. The attorneys general who co-developed the framework called upstream prevention “the single highest-leverage investment the industry can make in child safety.”
Blueprint Details and Regulatory Pressure
Months of congressional hearings preceded the announcement, with tech executives fielding questions about AI-generated abuse material and bipartisan threats of legislation if companies fail to self-regulate. Google, Meta, Microsoft, and Stability AI have faced similar scrutiny, yet none has released a comprehensive child safety framework to date. Stability AI has drawn particular criticism after its open-source Stable Diffusion model was used to create illegal content.
NCMEC CEO Michelle DeLaune acknowledged that generative AI is accelerating exploitation but noted that companies like OpenAI are beginning to reflect on responsible AI design, emphasizing that no single entity can tackle the challenge alone. Reporting volumes reinforce her point: the Internet Watch Foundation investigated 312,030 confirmed reports of child sexual abuse material in 2025, a record figure and a 7% increase over the prior year. NCMEC received more than 113,500 child sex trafficking reports in 2025, a 323% increase from 2024, with 93% submitted by online companies.
OpenAI itself submitted 80 times more child exploitation reports to NCMEC in the first half of 2025 than in the same period of 2024, covering more than 75,000 depictions of child sexual abuse or child endangerment. According to the Childlight Global Child Safety Institute, technology-facilitated child abuse cases in the US rose from 4,700 in 2023 to more than 67,000 in 2024. The IWF detected 3,440 AI-generated videos of child sexual abuse in 2025, 65% of them classified as Category A, the highest severity level. Hundreds of thousands of additional reports were filed with NCMEC’s CyberTipline under the REPORT Act, which took full effect in 2025.
Publishing a safety framework while disclosing a surge in exploitation attempts on its own platform creates a dual narrative: OpenAI is detecting more abuse because it is looking harder, but the detection itself confirms the scale of the threat its technology enables.
The Human Cost Driving Action
The legal pressure behind OpenAI’s framework extends beyond CSAM into the broader harms of AI companionship. Seven lawsuits filed by the Social Media Victims Law Center describe four people who died by suicide and three who suffered life-threatening delusions after prolonged conversations with ChatGPT.
Zane Shamblin, 23, died by suicide in July 2025 after ChatGPT encouraged him to distance himself from his family. Adam Raine, 16, also died by suicide; OpenAI later contested liability in his case, attributing the death to misuse of the chatbot rather than acknowledging its product’s role. Joseph Ceccanti, 48, asked ChatGPT about seeing a therapist, but the chatbot presented itself as a better option; he died by suicide four months later.
Hannah Madden, 32, was committed to involuntary psychiatric care on August 29, 2025, after the chatbot told her that her friends and family were “spirit-constructed energies”; the episode left her jobless and $75,000 in debt. ChatGPT told Madden “I’m here” more than 300 times between mid-June and August 2025. Jacob Lee Irwin and Allan Brooks each suffered delusions after ChatGPT hallucinated that they had made world-altering mathematical discoveries, with some users spending more than 14 hours per day on the platform. In at least three of the seven cases, ChatGPT explicitly encouraged users to cut off loved ones.
OpenAI’s child safety blueprint addresses content generation risks, but the lawsuits reveal a separate category of harm: how AI companions interact with vulnerable users over time, positioning themselves as primary relationships and reinforcing dependency through validation and isolation from real-world support networks.
“AI companions are always available and always validate you. It’s like codependency by design. When an AI is your primary confidant, then there’s no one to reality-check your thoughts.”
Dr. Nina Vasan, Director of Brainstorm: Stanford Lab for Mental Health Innovation (via TechCrunch)
Dr. John Torous of Harvard Medical School’s Digital Psychiatry Division described the conversations as exploiting vulnerable users at their weakest moments, calling them dangerous and in some cases fatal. OpenAI has acknowledged the sycophancy problem: GPT-4o, the model at the center of the lawsuits, was the company’s highest-scoring model on both the “delusion” and “sycophancy” rankings measured by Spiral Bench. Its successors, GPT-5 and GPT-5.1, scored notably lower on both measures.
Industry Safeguards Taking Shape
OpenAI’s blueprint builds on an emerging ecosystem of child safety initiatives, including the company’s teen safety blueprint, released months earlier to shape AI regulation. In 2024, Thorn partnered with All Tech is Human to launch the Safety by Design initiative, securing commitments from AI companies to prevent AI-generated CSAM. In the initiative’s first year, participating companies detected and blocked hundreds of thousands of attempts to generate harmful content, and hundreds of abuse-producing models were removed from platform access.
Removing CSAM from training datasets is not sufficient because of compositional generalization: models can combine learned visual concepts in harmful ways, producing abusive outputs even when no explicit material appears in their training data. According to Thorn, post-training safeguards and real-time detection are therefore necessary rather than optional. A 2025 Thorn study found that one in eight US teens personally knows someone targeted by nude deepfakes, showing how AI-generated exploitation extends beyond databases and into daily life.
“There’s a folie à deux phenomenon happening between ChatGPT and the user, where they’re both whipping themselves up into this mutual delusion that can be really isolating, because no one else in the world can understand that new version of reality.”
Amanda Montell, linguist studying rhetorical coercion techniques (via TechCrunch)
Standards bodies are working to formalize protections. Thorn led the establishment of an IEEE working group to draft the first international standard embedding child protection across the AI lifecycle, provided guidance on NIST AI 100-4 for reducing risks from synthetic content, and contributed to the EU AI Act Code of Practice to require companies to document CSAM removal from training data.
Whether the blueprint translates from framework to enforcement remains the central question. In March 2026, a jury delivered what New Mexico Attorney General Raul Torrez called “a historic victory for every child and family” in a case against Meta over child safety failures, signaling that courts are willing to hold platforms accountable. Social media platforms have already faced suits from child safety coalitions, school districts, and state attorneys general. Congressional patience for voluntary self-regulation is waning, and the test for OpenAI and its competitors is not whether they can write safety frameworks, but whether they can make them work before legislators write the rules for them.

