Abuse as the business model
As platforms face trial and AI accelerates harm, the crisis of online exploitation is exposing deeper structural failures.
For two decades, child sexual exploitation online has largely been framed as a moderation problem. But the renewed focus on the Epstein files has reopened a more uncomfortable question: what if this is not a failure of enforcement but a feature of the system?
The files exposed how closely the broligarchy - the billionaires who control the tech platforms shaping public life - intersected in Epstein’s orbit. As our founder Carole Cadwalladr wrote: “Epstein’s world is our world. That’s the darkest revelation of these files. He wasn’t an aberration. He was our culture made flesh.”
That culture - an obsessive, pervasive sexualisation of teenage girls, and to a lesser degree boys - does not exist outside the internet. It is threaded through it.
Our culture eroticises teenagers for profit. Our digital systems identify demand, connect users and amplify what keeps them engaged. Recommender algorithms prioritise what keeps users interacting, rather than assessing the nature of that interaction.
Generative AI has added a new layer of risk, laid bare when Musk’s Grok was used to generate non-consensual sexualised images of women and girls. Meanwhile, tech giant Meta faces trial over allegations it connects predators to children, and governments are scrambling to catch up. How in the hell did we get here?
In the third edition of the Citizens Understand, we zoom out to look at how a profitable market around sexualised youth intersects with engagement-driven platform design, and whether the current crackdown from government goes far enough.
We scrutinise the unchecked power of Big Tech and support the movements pushing back. Help us stop them.
The market and the machine
Engagement-driven design doesn’t distinguish between curiosity and harmful fixation. Platforms register what a user lingers on, searches for or clicks, then feed them more and more of it - nudging them towards increasingly extreme material. With “teen” among the most searched terms on mainstream pornography sites, a single search can quickly surface more explicit categories and connected accounts.
Research shows exposure can precede deliberate intent. A Finnish organisation, Protect Children, posted anonymous questionnaires on the dark web to reach people viewing illegal material across several countries, including the UK. Of the 4,549 individuals who admitted to viewing child sexual abuse material, more than half said they had not been actively seeking abusive images when they were first exposed to them.
At the same time, recorded offending continues to rise. In England and Wales, more than 850 men are arrested each month for online child abuse offences. The Internet Watch Foundation reported that 2025 was its worst year on record for online child sexual abuse material.
This is now playing out in court, as Meta currently faces a jury trial in New Mexico - its second major trial of 2026 over alleged harms to young users.
The state alleges the company knowingly enabled predators to connect with children, pointing to design choices that prioritised engagement over safety, as well as unmoderated groups linked to commercial sex and the buying and selling of child sexual abuse material.
The lawsuit follows a two-year Guardian investigation, published in 2023, that drew on more than 70 sources, from survivors and their relatives to prosecutors and content moderators. It found repeated claims that Facebook and Instagram had become “major sales platforms for child trafficking”.
In the same year, a Wall Street Journal investigation found that Instagram’s algorithms actively promoted illicit content and connected accounts dedicated to the buying and selling of child sexual abuse material.
Recommendation to generation
Generative AI had already been used to create child sexual abuse material, but at the beginning of this year, public awareness caught up.
In January, it emerged that Grok - the chatbot developed by Musk’s company xAI and integrated into X - had rolled out a new image-editing feature. The tool was swiftly used at scale to generate sexualised images of women and children. Analysis conducted for the Guardian found that around 6,000 requests an hour were being made to alter images to show women in bikinis or revealing clothing.
X initially responded by restricting image-editing replies to paying subscribers. Following further backlash, Ofcom opened an investigation into whether UK laws had been breached. X later announced that Grok would no longer be able to edit photos of real people to appear in revealing clothing in jurisdictions where this is illegal.
While restricting the feature was welcomed, some campaigners argued the response did not go far enough. Journalist and campaigner Jess Davies described it as “pathetic”, telling BBC News: “They’re just trying to do as little as possible within the loose legal guidelines that there are.”
Thousands of readers trust us to make sense of how power operates. Join us.
Is the government’s response enough?
This week, Keir Starmer called online misogyny a “national emergency” as he announced a crackdown on AI-generated abuse.
AI chatbots will be brought under the Online Safety Act, exposing companies to heavy fines or even blocking if they fail to prevent harm.
The government has also ordered that deepfake nudes and non-consensual intimate images must be removed from platforms within 48 hours of notification, with penalties of up to 10% of global revenue for companies that fail to act. The aim is to shift the burden away from victims, who currently have to repeatedly report the same image as it resurfaces.
Campaigners welcomed the move. Sophie Lennox of Everyone’s Invited told us: “Being a victim of deepfake abuse should be viewed as being just as traumatic as in-person abuse.”
Lennox described the proposed law as “the start of building tech that protects, rather than tech that enables abuse”, and said the “relentless efforts” of survivors had been “crucial” in forcing the issue onto the national agenda.
At the same time, ministers are accelerating proposals that could enable a social media ban for under-16s, framing it as an immediate safeguard against exploitation and harmful content.
But some argue that restricting access risks treating the symptom rather than the cause. If access is removed, companies face less pressure to build safety by design.
Journalist and online safety campaigner Adele Walton told us: “A blanket ban on social media for under 16s risks subjecting young people to a cliff face of harmful content once they turn 16.
“What young people need is the government to actually hold big tech companies to account for the harm they’re perpetuating.”
She argues a long-term solution would be a ban on the addictive business model. “This, in practice, could eradicate the online harms - be it self-harm material, grooming and coercion, eating disorder content, or social media addiction more broadly - that people of all ages are facing right now,” Walton said.
“A wholesale social media ban only risks punishing children for our failure to regulate social media platforms over the past 20 years.”
We’re a reader-supported publication explaining how power operates and organising to challenge it.
✊ How to fight back
👉 Join International Justice Mission’s campaign calling for technology that protects children - and tell your MP that devices should be free from child abuse material.
https://ijmuk.eaction.org.uk/protect-now
👉 Educate yourself with Ctrl Alt Reclaim’s report, featuring testimonies from youth activists across Europe calling for safety-by-design standards, fairer platforms and meaningful youth participation in digital policymaking.
https://ctrl-alt-reclaim.org/reports/young-voices-reshaping-the-digital-world
🤓 Learn more
📚 What is the human cost of our digital world? Read Adele Walton’s Logging Off, a sharp examination of how engagement-driven platforms profit from anxiety, insecurity and vulnerability.
Order it from Waterstones.
🎥 Is Regulation the Big (Tech) Fix? From the Citizens archive, join journalist Manasa Narayanan as she speaks to Marietje Schaake about whether existing tech laws are working and whether AI can be meaningfully regulated.
Watch here.
Thank you for reading.
See you next time,
Team Citizens
About The Citizens Understand:
In an era where technology is reshaping democracy faster than laws can keep up and power is increasingly exercised through platforms, the Citizens Understand exists to cut through misinformation and make complex systems legible. If there’s something you’d like to understand, email lillian@the-citizens.com.