Child exploitation is getting worse. In 2024, the US National Center for Missing & Exploited Children (NCMEC) received 20.5mn reports of suspected child sexual exploitation, up from just 100,000 in 2010. This exponential growth is overwhelming law enforcement agencies worldwide and has turned child protection into a crisis demanding technological solutions that can operate at scale and speed.
Against this grim backdrop, the United Arab Emirates (UAE) has emerged as a leader in deploying artificial intelligence (AI) to counter online child exploitation. Through its AI for Safer Children initiative, launched in 2020 in partnership with the United Nations Interregional Crime and Justice Research Institute (UNICRI), the UAE is demonstrating how technology, coupled with robust legal frameworks and international cooperation, can meaningfully address this growing crisis, as officials outlined at an event in Baku on November 12.
Recent UNICEF data reveal that approximately one in five women and girls (650mn in total) and one in eleven men and boys have been subjected to some form of sexual violence as children, including both contact and non-contact forms of abuse. With the rise of technology, sexual violence has increasingly migrated to online platforms; the Childlight Global Child Safety Institute reports that online sexual harm now occurs at a rate of ten incidents per second.
The challenge facing law enforcement is not simply the volume of abuse but its increasingly sophisticated nature. During the COVID-19 pandemic, self-generated images and videos increased almost fourfold compared with pre-pandemic levels, multiplying pathways to harm that traditional investigative methods struggle to address.
The AI for Safer Children (AI4SC) initiative is a partnership, launched by UNICRI's Centre for AI and Robotics and the UAE Ministry of Interior, to build the capacities of law enforcement worldwide and leverage artificial intelligence in combating child sexual exploitation and abuse. The initiative takes a holistic approach, combining cutting-edge AI tools, human rights safeguards, ethical governance and global capacity-building, to ensure that technology empowers, rather than endangers, vulnerable children.
The initiative is guided by seven core principles and seeks to contribute to the 2030 Agenda for Sustainable Development, by ending abuse, exploitation, trafficking and all forms of violence against children. This ethical framework ensures that technological solutions do not merely replicate existing power imbalances or create new vulnerabilities through surveillance overreach.
The practical impact has been substantial. According to the UAE's interior ministry, this global effort has benefited over 2,600 investigators from 48 countries through specialised training programmes on AI investigations. The training sessions, available free of charge to any member state, are tailored to specific regional needs and cover the full investigative workflow from detection to prosecution. The AI for Safer Children Global Hub, which improves access to AI tools for combating exploitation, has attracted more than 1,200 law enforcement representatives from 123 countries, creating a global network of expertise.
The initiative's technological architecture addresses multiple dimensions of the investigative process. Natural language processing can automatically flag predatory user behaviour on social media and child-friendly sites; computer vision algorithms accelerate the analysis of visual material, reducing extensive manual review; and pattern recognition systems identify networks of offenders operating across jurisdictional boundaries.
During the event in Baku, Dana Al Marzouqi, director general of the International Affairs Bureau at the UAE's Ministry of Interior, stressed the need for tech-based solutions: "The fight against online child exploitation demands tools as fast and adaptive as the threat itself." She argued that AI has unique potential: "It gives law enforcement the scale and speed this crisis requires, but its success depends on our shared moral responsibility to use it wisely", recognising that technology alone cannot solve a fundamentally human problem.
Aside from technological innovation, the UAE has also launched legal reforms to tackle the issue. Federal Decree Law No. 10 of 2022 regulates birth registration and allows for the registration of children born to unmarried parents or with unknown fathers, ensuring that vulnerable children are not excluded from legal protections due to their circumstances of birth.
These legal frameworks matter profoundly because they establish the accountability structures and rights protections that prevent technological solutions from becoming instruments of discrimination or control. When birth registration is universal and primary education is guaranteed, the data infrastructures necessary for child protection can be built without creating new categories of vulnerability or exclusion. "Ethics is the foundation of every algorithm we deploy," Al Marzouqi argued, "without clear accountability, technology meant to protect can too easily become technology that harms."
The global response to the AI for Safer Children initiative has been positive: Forbes described it as "a pivotal force in the global fight against online child sexual exploitation and abuse".
Yet the initiative's success also highlights a sobering reality. Building a truly global and representative network remains a pressing challenge, with many law enforcement agencies facing heavy workloads, limited resources and little familiarity with AI. Bridging this digital divide will require sustained international investment, knowledge transfer and political will.
As artificial intelligence continues to reshape every dimension of modern life, its application to child protection represents a crucial test of whether we can harness technological power for genuinely humanitarian purposes. The AI for Safer Children initiative demonstrates that with thoughtful design and ethical commitment, it is possible to create technological systems that serve our highest values rather than compromise them.
Fuad Shahbazov is a policy analyst covering regional security issues in the South Caucasus. He was a research fellow at the Center for Strategic Studies and previously a senior analyst at the Center for Strategic Communications, both in Azerbaijan. He was also a visiting scholar at the Daniel Morgan School of National Security in Washington, DC. He tweets at @fuadshahbazov.