AI Washing, AKA, Stop Calling Your Chatbot AI
Every earnings call sounds like a Silicon Valley fever dream. "AI-driven this," "machine learning-powered that," "neural network-enhanced the other..."
Meta's lawsuit against CrushAI is peak corporate theater—a carefully choreographed performance designed to distract from the fundamental truth that AI regulation is essentially non-existent, enforcement is laughably inadequate, and bad actors operate with near-impunity in a system designed for their success. It's like watching the arsonist complain about the fire department's response time.
Let's establish some uncomfortable realities. The 16 nudify websites named in recent lawsuits were visited over 200 million times in the first half of 2024 alone, while advertising for these apps increased by 2,400% on social media platforms this year. Meta knew about this explosion of harmful content—Senator Dick Durbin sent Mark Zuckerberg a letter in February citing research showing at least 8,010 CrushAI-related ads ran on Meta's platforms in just the first two weeks of 2025.
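To put that last figure in perspective, simple arithmetic on the numbers cited in the Durbin letter gives the implied daily ad volume:

```python
# Back-of-envelope rate from the Durbin letter's figure:
# at least 8,010 CrushAI-related ads on Meta's platforms
# in the first two weeks of 2025.
ads = 8010
days = 14
print(round(ads / days))  # 572 -- roughly 572 ads per day
```

Hundreds of ads per day, every day, for a product category whose entire purpose is non-consensual imagery.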
Meta's response? They filed a lawsuit after the horse had already left the barn, stampeded through the village, and set up camp in the next county. This isn't enforcement—it's damage control with a press release.
Regulatory Patchwork: A Feature, Not a Bug
The regulatory backdrop reads like a bad joke. Only 14 states have laws addressing non-consensual sexual deepfakes, and until recently no federal law in the US prohibited the creation or sharing of deepfake images. The recently passed Take It Down Act focuses narrowly on removal requirements, not prevention or meaningful deterrence.
Meanwhile, most nudify apps have been pulled from app stores, but some remain available, and some "only" let users create near-nude images in bikinis or underwear—as if digital sexual harassment becomes acceptable with a few pixels of fabric. The loopholes are so large you could drive a deepfake through them.
Here's the dirty secret: the current system is designed for bad actors to thrive. Consider the math of enforcement versus evasion:
Minnesota's proposed legislation would impose civil penalties of up to $500,000 "for each unlawful access, download, or use"—but developers can sidestep the penalty entirely by geo-blocking Minnesota IP addresses, a barrier any user defeats with basic VPN technology.
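The circumvention is trivial because a geo-block only sees a connection's apparent origin. Here is a minimal sketch of the logic; the `IP_REGION_TABLE` dictionary is a hypothetical stand-in for the GeoIP database lookup a real service would use, and the addresses are documentation-reserved examples:

```python
# Sketch of IP-based geo-blocking, and why a VPN defeats it.
BLOCKED_REGIONS = {"US-MN"}  # e.g., Minnesota under the proposed statute

# Hypothetical IP-to-region table standing in for a GeoIP database.
IP_REGION_TABLE = {
    "203.0.113.10": "US-MN",   # the user's real Minnesota address
    "198.51.100.7": "NL",      # a commercial VPN exit node abroad
}

def is_blocked(client_ip: str) -> bool:
    """Return True if the apparent region of client_ip is geo-blocked."""
    region = IP_REGION_TABLE.get(client_ip, "UNKNOWN")
    return region in BLOCKED_REGIONS

# The same Minnesota user, seen directly and then through a VPN:
assert is_blocked("203.0.113.10") is True    # direct connection: blocked
assert is_blocked("198.51.100.7") is False   # via VPN: the block never fires
```

The developer has done everything the statute asks, the user has spent five dollars on a VPN subscription, and the "penalty" never triggers for anyone.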
Meta's lawsuit against CrushAI is textbook virtue signaling. The company that spent years allowing these ads to run—generating revenue from exploitation—now positions itself as a protector of digital dignity. Meta announced it's "sharing signals about these apps with other tech companies" and has "provided more than 3,800 unique URLs to participating tech companies" since March.
Translation: Meta finally started doing the bare minimum of content moderation after facing political pressure, then branded it as innovative industry leadership.
The few prosecutions that exist are the exceptions that prove the rule. A child psychiatrist in North Carolina was sentenced to 40 years in prison for using undressing apps on photos of patients—but he was prosecuted under existing child exploitation laws, not AI-specific regulations.
For adults? The legal landscape is a wasteland. Law enforcement agencies often have limited resources for investigation, and working across jurisdictions is difficult. Victims face an impossible choice: absorb the harm or spend thousands on legal fees for uncertain outcomes.
Even international efforts miss the mark. The EU AI Act requires transparency for deepfakes—creators must "clearly state that their content is artificial"—but this assumes good-faith compliance from bad-faith actors. It's like requiring bank robbers to announce their intentions before entering the vault.
The UK Children's Commissioner has called for a total ban on nudification apps, while Parliament debates whether to ban the tools themselves or merely the sharing of their output. Meanwhile, the apps continue operating from jurisdictions with zero enforcement.
The uncomfortable truth? Meaningful AI regulation would require fundamental changes to how we approach technology governance.
But this would require tech companies to sacrifice profits for protection—and we all know how that conversation ends.
Meta's CrushAI lawsuit is corporate theater designed to obscure a fundamental reality: AI regulation is non-existent where it matters most. Bad actors can do whatever they want because the system is designed for their success. The costs of evasion are negligible, the barriers to entry are minimal, and the consequences are largely theoretical.
We've created a digital Wild West where the sheriff files paperwork after the bank robbery, then issues press releases about commitment to law and order. Meanwhile, the next wave of AI-powered exploitation tools is already in development, operating with the confidence that regulation will remain perpetually behind the curve.
The CrushAI lawsuit isn't a solution—it's a symptom of a system that prioritizes reactive virtue signaling over proactive protection. And until we're willing to admit that reality, every new "enforcement action" will be nothing more than expensive theater while the real harm continues unchecked.