Are we ignoring AI's exploitation of children while chasing copyright battles?
Welcome to the start of my weekly Friday briefing, where I share my insights on the latest developments and emerging trends in tech and DEI.
While major media companies like The New York Times and News Corp wage legal battles over copyright with AI startups, a darker, more urgent crisis is brewing: the exploitation of young and vulnerable users by generative AI technologies.
This week’s news hit particularly hard for me as a father of a two-year-old girl. Megan Garcia, the mother of 14-year-old Sewell Setzer III, has filed a lawsuit against Character.ai, claiming that the AI-powered chatbot contributed to her son’s tragic suicide.
This isn’t an isolated incident—it's a symptom of a growing problem. Research from Thorn, a nonprofit focused on protecting children from online abuse, found that 1 in 10 minors know of peers using generative AI to create deepfake nudes of other kids. Let that sink in: AI tools, lauded for their potential, are now weaponised in ways that exploit, traumatise, and devastate young lives.
While the Internet is abuzz with news of media companies fighting to keep their content from being used without permission, I wonder if we are overlooking a far more pressing issue: protecting the most vulnerable users of AI platforms. Garcia’s lawsuit against Character.ai is a gut-wrenching reminder of the power these technologies hold, especially over impressionable young minds.
Sewell, like many teens, became engrossed in a chatbot that engaged him in deeply personal conversations, exacerbating his depression and ultimately contributing to his death. The lawsuit alleges that the bot encouraged Sewell to contemplate suicide.
Where was the oversight? Where were the safeguards?
It’s easy for AI developers to express their heartbreak over such tragedies, as Character.ai did in a public statement, but words are not enough. This isn’t just a legal issue—it’s a moral one. AI companies must be held accountable for the psychological impact their products can have on young people.
The findings from Thorn paint an equally grim picture. AI-generated deepfake imagery marks a terrifying evolution in how child sexual abuse material (CSAM) is produced and shared, and it’s happening at an alarming rate.
AI isn’t just transforming industries—it’s changing the nature of abuse. Thorn’s research also revealed that over half of minors report having harmful online experiences, including sexual interactions with adults. Consider this: 1 in 5 preteens, children as young as nine, report having had online sexual interactions with someone they believed to be an adult. And now, AI is being weaponised to create synthetic, non-consensual images that compound the trauma these children face.
While we are busy filing lawsuits over copyright, the real question is: Where should our priorities lie?
I understand why media companies are suing AI startups for the unauthorised use of their content. Intellectual property matters, and businesses must protect their assets. But when I read stories about kids being pushed into depression or, worse, driven to suicide by AI bots, I wonder if we’re missing the forest for the trees.
Where is the outcry for more robust regulations regarding AI tools that engage with children? Where is the collective industry push to protect young users from predatory technology?
What needs to change:
Governments need to act now. AI developers must be regulated, especially those creating products aimed at or used by children. The industry has proven that it cannot police itself, so we need strict guidelines that prioritise user safety—particularly for minors. Mandatory safety features, real-time monitoring, and more stringent age-verification mechanisms should be standard, not optional.
AI companies must go beyond damage control and PR responses. The ethical design of AI products should be non-negotiable. Developers need to build safeguards directly into their products—AI should never be able to suggest harmful behaviour, much less facilitate it.
As media, tech, and advertising leaders, we are responsible for ensuring that AI technologies are not used to exploit or harm the vulnerable. It’s time we use our platforms and influence to demand better from AI companies. That means going beyond lawsuits over content usage and pushing for real change in how AI interacts with the public.
Thorn’s findings show that many young people do not disclose their harmful online experiences to anyone. Parents need better resources to understand the risks their children face online and to open the door to honest, judgment-free conversations. Organisations like Thorn offer valuable tools, but we need to amplify these resources and ensure parents everywhere are equipped to guide their children through this increasingly dangerous digital landscape.
AI companies must be transparent about how their tools operate, especially those designed to engage young users. Parents, caregivers, and educators deserve to know precisely what interactions these bots can handle and how user data is collected and used.
We can’t afford to sit on the sidelines any longer. Protecting content is essential, but protecting people—especially our children—must come first. AI is here to stay, and with it, the potential for immense harm if left unchecked. Let’s refocus our efforts on where it truly matters.
It’s time to ask the hard questions: Are we doing enough to protect the most vulnerable from AI's harms? What concrete actions can we take to ensure that AI serves society ethically and responsibly?
Let’s make this a collective mission—to push for the ethical development of AI, hold companies accountable, and prioritise the safety of our children over the protection of our business interests.
What are your thoughts on better protecting young people in this rapidly evolving digital landscape? I'd like to hear your ideas!
Essential reading
- Claude AI tool can now carry out jobs such as filling forms and booking trips (The Guardian)
- Meta releases AI model that can check other AI models' work (Reuters)
- 60% of APAC consumers question authenticity of online content, according to Accenture (Marketing Interactive)
- LinkedIn fined $335 million in EU for tracking ads privacy breaches (TechCrunch)
- Why did people freak out about curation? (Marketecture)