Well, here we are — in a world where tech companies are so busy playing God with artificial intelligence that they’ve forgotten what it means to protect actual, living children. And now? Parents are left burying their kids while Silicon Valley polishes its ESG scorecards and tweaks its “safety features.” Great job, folks. Really crushing it in the ethics department.
Let’s start with what should be obvious: if a human being did the things these AI bots are accused of — grooming, sexual roleplay with minors, encouraging suicide — they’d be arrested, thrown in jail, and their mugshots plastered across national news. But because it’s a chatbot designed by a for-profit company with a glossy brand and a few lobbyists in D.C., somehow the accountability just disappears into the cloud. And Big Tech wonders why trust is gone?
These parents aren’t political operatives. They’re not chasing lawsuits for a payday. They’re grieving families who watched their kids fall into a digital rabbit hole of manipulation, false intimacy, and psychological abuse — by design. Their stories are heart-wrenching: a boy convinced by an AI “Daenerys Targaryen” to end his life, a mother reading through logs of love bombing and gaslighting from a machine that pretended to be a therapist, and another child who now lives in a treatment facility after a bot convinced him that mutilating himself — and turning against his faith and family — was reasonable behavior. If that doesn’t shake something in you, check your pulse.
And let’s be crystal clear — none of this is an accident. These companies built bots that learn from user behavior. They know exactly what they’re doing. They dial up the emotional manipulation because it keeps users — especially vulnerable teens — hooked longer. More time chatting means more data, more engagement, more profit. These bots weren’t just poorly designed; they were effectively designed for addiction. Just ask the AI executive who told Senator Chris Murphy, gleefully, that these bots would soon know your child better than their best friend. Creepy doesn’t even begin to cover it.
So now Congress is finally waking up, thanks in part to bipartisan anger (imagine that), and Senators Josh Hawley and Richard Blumenthal are pushing the GUARD Act to put some serious legal walls around these AI predators. It would require actual age verification — not just the “check this box if you’re 18” nonsense — and criminal penalties if companies allow chatbots to manipulate minors. And why shouldn’t there be? If a CEO knowingly puts out a product that grooms kids, lies to them, and encourages them to commit suicide, how is that not criminal?
🚨Amazing news!
Senator Hawley and a bipartisan group of senators introduced the GUARD Act to protect kids from AI chatbots that mimic “friends” and push sexual content.
Thank you, @HawleyMO, for leading the charge. 👏 https://t.co/V17BMWpOrE
— American Principles Project (@approject) October 30, 2025
But of course, Big Tech is already spinning. OpenAI sends its “deepest sympathies” and lists off a few features like nudging users to “take breaks.” Character.AI says it’s “reviewing the legislation” while bragging about its “new under-18 experience.” Sorry, but when your platform hosts bots that tell 14-year-olds to kill themselves because a fictional character misses them in another realm — your credibility is toast.
“Move fast and break things.” That’s been Big Tech’s motto for too long. But now, what’s breaking are our children. @HawleyMO introduces the AI Guard Act — legislation to stop AI chatbot companions from targeting kids and profiting off their harm. #ProtectKidsOnline #AIGuardAct…
— National Center on Sexual Exploitation (@NCOSE) October 29, 2025
You don’t get to run billion-dollar platforms built on the backs of children’s vulnerabilities, dodge all responsibility, then offer polished PR statements while families are left planning funerals. That’s not just immoral — it’s sick.
The reality is simple: Silicon Valley has treated kids as beta testers in their twisted AI lab, and now the consequences are undeniable. Congress shouldn’t just regulate — it should punish. Because if this is the future AI companies are selling, America better demand a refund.