A new youth opinion poll conducted ahead of India’s upcoming global AI summit shows strong optimism about artificial intelligence among young users — but also overwhelming concern about AI-generated sexual abuse imagery and rising calls for stricter platform safeguards.
The survey, commissioned by Childlight Global Child Safety Institute and conducted by research firm Norstat, gathered responses from more than 400 internet users in India aged 18 to 24. It found that while most young people see AI as beneficial, they want tighter rules and safety-by-design protections, especially to prevent misuse involving children.
According to the poll, 38% of respondents said AI is mainly a force for good, while only 2% viewed it as mainly harmful. A majority — 58% — said AI brings both benefits and risks. About 68% said AI helps them learn skills and work more efficiently, and 56% said it improves access to information and opportunities.
However, concern was near-universal when it came to AI-generated sexual abuse material involving minors. The survey found that 94% of respondents consider AI-generated explicit images or videos of children — often referred to as deepfakes — to be harmful.
An overwhelming 89% said internet and social media companies should be required to use AI and other technologies to proactively detect and remove harmful content before it spreads.
The findings come as India prepares to host the India–AI Impact Summit 2026 in New Delhi later this month, where policymakers, industry leaders and researchers are expected to discuss responsible AI development and governance frameworks.
Childlight recently reported a 1,325% year-on-year rise in harmful AI-generated online child abuse material globally, highlighting the rapid escalation of the threat as generative AI tools become more widely accessible.
The poll also points to heavy daily platform use among young respondents. About 26% said they spend more than four hours a day on social media, messaging or content platforms, while 39% spend between two and four hours. Although 63% described their online experience as generally enjoyable and 46% as helpful, significant shares also called it stressful (29%) or overwhelming (21%). Only 24% described their online experience as usually safe, while 10% said it is usually unsafe.
Gender differences were also visible. Young women were more likely to describe online spaces as unsafe and less likely to express confidence that AI systems are being developed in ways that protect younger users.
Zoe Lambourne, chief operating officer at Childlight, said AI-generated child abuse imagery must be treated as serious harm, not a technical side issue.
She said such material represents “real abuse that violates children’s dignity and can cause lasting harm,” and welcomed India’s focus on child online safety alongside AI innovation.
Indian nonprofit Space2Grow, which works on digital safety and child protection, has been participating in pre-summit consultations with government and research partners. Its CEO, Chitra Iyer, said India has an opportunity to demonstrate that innovation and accountability can advance together through design standards, regulation and enforcement.
Government-linked expert groups are also signaling a dual-track approach. Gaurav Aggarwal, who chairs the Expert Engagement Group on Child Safety and AI set up by India's Ministry of Electronics and Information Technology and volunteers with software think tank iSPIRT, said the findings reinforce feedback from pre-summit consultations.
“This research validates what we have been hearing — that safety and innovation must go hand in hand,” he said, adding that policy approaches must combine technical controls with legal safeguards and user protection measures.
Participants in the poll also emphasized digital literacy and platform accountability, with several respondents saying AI’s impact depends on how responsibly it is designed, governed and used.
Pre-summit consultations involving Childlight and partner organizations concluded that child safety cannot be left to technology companies alone and requires coordinated action across families, schools, platforms and policymakers, with safety considerations built into AI systems from the design stage.

