Let's start with the math that doesn't add up. OpenAI just paid $6.5 billion for a one-year-old startup with 55 employees—that's roughly $118 million per person. For a company that has produced exactly zero commercial products. Even by Silicon Valley's inflated standards, this deal screams desperation disguised as innovation.
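For anyone checking the back-of-the-envelope arithmetic, using the figures reported above:

$$\frac{\$6{,}500{,}000{,}000}{55\ \text{employees}} \approx \$118\ \text{million per employee}$$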
But the real story isn't the sticker price—it's what OpenAI plans to do with all that beautiful design wrapped around their data collection apparatus. According to internal chats leaked by The Washington Post, Altman has set a target of shipping 100 million AI "companions" that will be "entirely aware of a user's surroundings, and even their 'life.'"
One hundred million surveillance devices, beautifully designed.
Before we get swept away by Jony Ive's design genius, let's remember who we're dealing with. Sam Altman isn't just OpenAI's CEO—he's also the co-founder of Worldcoin, the dystopian biometric identification project that's been banned, fined, or investigated across multiple continents.
The company claims it has already scanned the eyeballs of more than 6.5 million people across almost 40 countries in exchange for cryptocurrency. It has done so despite facing legal action in Hong Kong, bans in Brazil and Portugal, and ongoing investigations in Germany, where Bavarian authorities are deciding whether to bar it from operating in Europe entirely.
Privacy advocates have called Worldcoin's offering of crypto tokens in exchange for scans an "outlandish bribe," one aimed disproportionately at lower-income communities. As one targeted individual remarked: "why did Worldcoin target lower-income communities in the first place, instead of crypto enthusiasts or communities?"
Here's the cognitive dissonance that should worry every marketing leader: OpenAI was just fined €15 million by Italian authorities, who found that the company used personal data to train ChatGPT "without having an adequate legal basis" and violated "the principle of transparency and the related information obligations towards users." The company also failed to implement adequate age verification systems to prevent children under 13 from being exposed to inappropriate content.
Yet this is the same company now promising AI devices that will be "entirely aware of a user's surroundings, and even their 'life.'" The same Sam Altman who has been a strong advocate for the rapid commercialization and deployment of AI technologies, often prioritizing growth and market penetration over safety measures.
OpenAI's recent moves paint a disturbing picture of data acquisition at scale. The company has signed partnerships with media companies including Time magazine, the Financial Times, Axel Springer, Le Monde, Prisa Media, and most recently Condé Nast. These deals grant OpenAI access to large amounts of content, and potentially to user behavior and interaction metrics: reading habits, preferences, and engagement patterns.
They've also invested in Opal, a webcam startup, to enhance cameras with advanced AI capabilities. Video footage collected by AI-powered webcams could yield far more sensitive biometric data, such as facial expressions and inferred psychological states.
Add to this their partnership with Thrive Global to launch Thrive AI Health, which will use AI to "hyper-personalise and scale behaviour change" in health, despite unclear privacy and security guardrails, and you start to see the data empire Altman is building.
Here's where Jony Ive's involvement becomes genuinely concerning. Ive's design philosophy has always been about making technology disappear, creating seamless experiences that feel magical rather than intrusive. It's exactly what you'd want if you were building the ultimate surveillance device.
The leaked details suggest a wearable device "slightly larger" than the Humane AI Pin but "as compact and elegant as an iPod Shuffle," with no screen, worn around the neck like a necklace. It would use microphones to capture your voice, cameras to see what's happening around you, and connections to your phone and computer to assemble a comprehensive picture of your life.
Altman wasn't shy about the implications: "We both got excited about the idea that, if you subscribed to ChatGPT, we should just mail you new computers, and you should use those." Not "sell you"—"mail you." Like a free smartphone from your carrier, but with far more invasive capabilities.
For marketing leaders, this should be a wake-up call about the future of consumer privacy. We're watching the construction of what could become the most sophisticated consumer surveillance apparatus ever built, wrapped in beautiful design and marketed as "AI companions."
By gaining access to vast amounts of user data, OpenAI is positioning itself to build the next wave of AI models, but privacy may be a casualty. The risks are multifaceted: large collections of personal data are vulnerable to breaches and misuse, and data consolidation at this scale raises serious concerns about profiling and surveillance.
Altman claims his prototype device will be "the coolest piece of technology that the world will have ever seen." But coolness isn't the issue—control is. We're watching the marriage of exceptional design talent with a company that has repeatedly shown it prioritizes data collection over user privacy, growth over safety, and expansion over ethical responsibility.
The real question isn't whether these devices will be beautiful—with Jony Ive involved, they certainly will be. The question is whether we'll realize what we've given up by the time we're wearing them around our necks.
Ready to implement AI strategies that respect user privacy while driving real growth? Winsome Marketing's experts help companies harness AI's power without compromising their values—or their customers' trust. Let's build the future responsibly.