OpenAI has published a policy blueprint outlining new safety measures the artificial intelligence (AI) industry can take to prevent the use of the technology to create child sexual abuse material.
“Child sexual exploitation is one of the most urgent challenges of the digital age,” OpenAI said in its April 8 announcement. “AI is rapidly changing both how these harms emerge across the industry and how they can be addressed at scale.”
With this in mind, the company revealed its policy blueprint, outlining “a practical path forward for strengthening U.S. child protection frameworks in the age of AI.”
The blueprint reflects and incorporates feedback from several leading organizations and experts across the child safety ecosystem, including the National Center for Missing & Exploited Children (NCMEC), the Attorney General Alliance—a nonprofit group of state attorneys general in the United States—and Thorn, a nonprofit dedicated to defending children from sexual abuse, according to the AI giant.
“No single intervention can address this challenge alone,” OpenAI said. “This framework brings together legal, operational, and technical approaches to better identify risks, accelerate responses, and support accountability, while ensuring that enforcement authorities remain strong as technology evolves.”
The move comes as concerns have increasingly mounted around the abuse of AI tools, particularly image generation functions, to produce explicit images of women and children, including celebrities and public figures.
In January, the United Kingdom’s communications watchdog, Ofcom, was compelled to make urgent contact with X and xAI “to understand what steps they have taken to comply with their legal duties to protect users in the U.K.,” after reports emerged that Elon Musk’s AI chatbot Grok was being used to create and disseminate explicit images of children and women with their clothes digitally removed.
Similar concerns were voiced in the European Union, with Italy’s data protection authority warning users and providers of AI tools over the risks to “fundamental rights and freedoms” posed by AI deepfakes.
To address such concerns, OpenAI said it had three key priorities: modernizing laws to combat AI-generated and altered child sexual abuse material, improving provider reporting and coordination to support more effective investigations, and building safety-by-design measures directly into AI systems to prevent and detect misuse.
“Together, these steps enable the industry to address child safety earlier and more effectively,” said the company in its April 8 statement. “By interrupting exploitation attempts sooner, improving the quality of signals sent to law enforcement, and strengthening accountability across the ecosystem, this framework aims to prevent harm before it happens and help ensure faster protection for children when risks emerge.”

OpenAI said it is also committed to continuing to strengthen safeguards to prevent misuse of its systems, as well as working closely with partners like the NCMEC and law enforcement to improve detection and reporting.
In a joint statement on the child safety blueprint, State Attorneys General Jeff Jackson (of North Carolina) and Derek Brown (of Utah), Co-Chairs of the AI Task Force of the Attorney General Alliance, said they “welcome this blueprint as a meaningful step toward aligning the technology sector’s child safety practices with the enforcement realities our offices confront every day.”
The pair particularly highlighted the framework’s recognition that effective generative AI (GenAI) safeguards require layered defenses—not a single technical control, but a combination of detection, refusal mechanisms, human oversight, and continuous adaptation to emerging misuse trends.
“This mirrors what we see in practice: the threat evolves constantly, and static solutions are insufficient,” the state attorneys general said.
OpenAI’s announcement was also welcomed by Michelle DeLaune, President and CEO of the NCMEC, who warned that GenAI was “accelerating the crime of online child sexual exploitation in deeply troubling ways — lowering barriers, increasing scale, and enabling new forms of harm.”
DeLaune added that she was “encouraged to see companies like OpenAI reflect on how these tools can be designed more responsibly, with safeguards built in from the start.”
In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch: Demonstrating the potential of blockchain’s fusion with AI