Securus Is Using AI Trained on Prison Calls to Predict Crimes, and Inmates Are Footing the Bill
Securus Technologies—a telecom company that provides phone services to prisons and jails—trained an AI model on years of inmates' phone and video calls and is now using it to scan their communications in real time to predict and prevent crimes, MIT Technology Review reports.
The company's president, Kevin Elder, told MIT Tech Review that Securus began building these AI tools in 2023, using its massive database of recorded calls to train models that detect criminal activity. One model was trained on seven years of calls from Texas prisons alone. The company has been piloting these tools over the past year to monitor inmate conversations as they happen, though it declined to specify where.
"We can point that large language model at an entire treasure trove [of data]," Elder explained, "to detect and understand when crimes are being thought about or contemplated, so that you're catching it much earlier in the cycle."
Catching crimes when they're "being thought about" is quite a claim. It's also a chilling articulation of what predictive policing actually means when applied to captive populations with no ability to opt out.
People in prison are notified their conversations are recorded. But notification isn't meaningful consent when you have no alternative. As Bianca Tylek, executive director of Worth Rises, told MIT Tech Review: "That's coercive consent; there's literally no other way you can communicate with your family."
And here's where the business model gets particularly grim: inmates in the vast majority of states pay for these calls. So Securus charges incarcerated people to talk with their families, records those conversations without genuine consent, uses that data to train AI models, and then sells those AI capabilities back to the facilities holding people captive.
"Not only are you not compensating them for the use of their data," Tylek noted, "but you're actually charging them while collecting their data."
When asked whether inmates can opt out of having their recordings used to train AI, Securus didn't directly answer. Instead, a spokesperson said the tool "is not focused on surveilling or targeting specific individuals, but rather on identifying broader patterns, anomalies, and unlawful behaviors across the entire communication system."
That's corporate speak for "yes, we're using everyone's data, but we promise it's for good reasons."
Securus has a history here. Leaked databases previously revealed that the company improperly recorded thousands of calls between inmates and their attorneys: legally privileged communications that should never have been monitored. This isn't a theoretical concern about potential misuse. It's a documented pattern of actual abuse.
Corene Kendrick, deputy director of the ACLU's National Prison Project, told MIT Tech Review: "[Are we] going to stop crime before it happens because we're monitoring every utterance and thought of incarcerated people? I think this is one of many situations where the technology is way far ahead of the law."
Courts have established few limits on surveillance of incarcerated populations. The legal framework assumes prisons need extensive monitoring capabilities for security. But AI that analyzes communication patterns to predict crimes "being contemplated" represents a significant expansion beyond traditional security monitoring.
Recent regulatory changes gave the business model a major boost, though it nearly went the other way. In 2024, the FCC issued reforms forbidding telecoms from passing surveillance costs on to inmates: companies could still charge capped rates for calls, but prisons and jails had to cover security costs from their own budgets.
Securus lobbied hard against this reform. In June, FCC Chair Brendan Carr, appointed by Trump, postponed implementation deadlines and signaled that the agency wanted to help telecom companies fund AI surveillance with fees paid by inmates. In October, the FCC went further, adopting new, higher rate caps and allowing companies to pass security costs, including "building AI tools to analyze calls," on to inmates.
FCC Commissioner Anna Gomez dissented: "Law enforcement should foot the bill for unrelated security and safety costs, not the families of incarcerated people."
Translation: we just gave Securus permission to fund its AI development by charging the very people whose communications it's surveilling.
The troubling precedent is the business model itself. Take a captive population with no communication alternatives. Charge them for basic services. Record everything without meaningful consent. Use that data to train AI models. Sell those models back to the institutions holding people captive. Then get regulators to let you fund the entire operation by extracting fees from the surveilled population.
Elder claims the tools have helped disrupt human trafficking and gang activity, though the company provided MIT Tech Review with no specific cases uncovered by the new AI models. Even if we accept these claims at face value, the question remains: does potential crime prevention justify this surveillance architecture and business model?
The broader issue is what this normalizes. If it's acceptable to train AI on communications of incarcerated people without genuine consent, to charge them for the data collection process, and to build predictive models that analyze thoughts and contemplations—what limits exist on surveillance of any population?
For companies building AI tools, Securus offers a case study in what happens when business incentives override ethical considerations. The technical capability to train models on communication data exists. The regulatory environment permits it. The captive market can't refuse. So it happens.
But capability and permission don't equal justification. Just because you can monetize surveillance of captive populations doesn't mean you should. At Winsome Marketing, we believe technology companies bear responsibility for how their products are used—not just how they're described. Building AI that predicts crimes people are "contemplating" based on monitored conversations with family members crosses lines that no amount of positioning can justify. Some business models should remain unbuilt.
Story credit: James O'Donnell, MIT Technology Review