I am a consumer activist and also a father of four. Three of my children, my daughters, are now in their early thirties, raising families of their own. But my youngest is just ten years old, and I am raising him full-time as a single dad.
It is from this place—both as a campaigner for consumer rights and as a parent of a young boy growing up in an increasingly digital world—that I feel compelled to write about a disturbing new trend. Let me be clear: I am not here to scaremonger. I am here to inform.
The rise of ClothOff and similar sites – what are we dealing with?
ClothOff is an AI-powered tool—accessible via websites and even Telegram bots—that promises to “undress anyone using AI”. Essentially, you upload a clothed image and, for a fee, receive an illusion of nudity created by deep-learning algorithms. It attracted over 9.4 million visitors in late 2024 alone.
Behind the scenes, investigations revealed ties to Belarus and Russia, involving individuals such as Dasha Babicheva and Alaiksandr Babichau. The business structure is opaque and untrustworthy—payments are routed through shell companies, and fake “support” groups cloak the operation.
Though some pages claim images are not stored and emphasise user privacy, the very nature of the technology is predatory.
So what’s the real risk?
Privacy invasion and shame
Victims, often minors, have explicit images fabricated without their consent. Those images can resurface years later, renewing the reputational damage long after the original incident.
Psychological harm, bullying, even worse
In the UK, a 16-year-old boy took his own life after being sextorted with deepfake nudes. In Australia, schoolchildren have been targeted: Victorian authorities reported cases of AI "undressing" being used to shame and harass students, and at one school, deepfake images of around 50 girls were circulated. The emotional toll is colossal.
Accessible and unchecked abuse
These apps advertise brazenly; some even offer sign-in via Google, Apple or Discord, lending them undeserved credibility. On Telegram, bots with millions of users churn out explicit deepfake content in seconds.
Legal gaps and delayed action
Although some jurisdictions are acting, with San Francisco suing the makers of "undress" apps and California passing laws criminalising non-consensual AI-generated pornography, most places remain entirely unprepared. Law enforcement is only now beginning to treat AI-generated child sexual abuse material with the seriousness it deserves.
I say again: I am not scaremongering!
I am striking a match, not to burn the house down, but to light the room.
As a father to a vulnerable ten-year-old boy, I worry. And as a consumer advocate, I insist we all stay informed:
Have open conversations with your children about what’s circulating online—and how freakishly realistic AI can be.
Teach media literacy, especially in schools. Deepfake education must be part of the curriculum so that students can tell reality from AI-generated illusion.
Call on platforms and regulators to act now. These apps should never profit off fake nudity created without consent.
The bottom line: This is not fiction
These AI tools, ClothOff and its many imitators, threaten the privacy, self-esteem and safety of our children. What angers me most is that the real enemy here isn't exaggeration; it's indifference.
As a single dad, a vigilant consumer campaigner, and someone who wakes up each day determined to do right by my children, I urge you: let's face this threat head-on, together.