Here's a delicious irony: Sam Altman, CEO of the company behind ChatGPT—a tool that's supposed to help us navigate information—just got called out for spreading misinformation himself. When Altman claimed on a podcast that Meta was offering "$100 million signing bonuses" to poach OpenAI talent, former OpenAI researcher Lucas Beyer flatly called it "fake news" on X. The kicker? Beyer was one of the three researchers who actually joined Meta, presumably without receiving a nine-figure welcome gift.
This isn't just tech industry gossip—it's a perfect case study in why media literacy has become as essential as digital literacy in 2025. In an era where misinformation spreads six times faster than factual information on social media, and 59% of adults feel unable to identify misinformation online, even AI company CEOs are contributing to the information chaos.
The Anatomy of Digital Deception
Let's dissect what happened here, because it's a masterclass in how misinformation spreads in the digital age. Altman made his claim on his brother's podcast "Uncapped," lending it the credibility of a family conversation. The figure—$100 million—was specific enough to sound authoritative but outrageous enough to go viral. The context—a competitive hiring war in AI—felt plausible given Meta's aggressive recruitment push.
But here's what anyone with basic media literacy should have caught: Meta's own financial filings show that no top executive has been paid $100 million in any of the past three years. The company's median employee compensation was $417,400 last year. Even its highest-paid executive, COO Javier Olivan, received $25.5 million. A $100 million signing bonus would be nearly four times the company's highest executive compensation package—for a researcher, not a C-suite executive.
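The sanity check is literal arithmetic. Here's a minimal sketch in Python, using only the compensation figures from Meta's filings cited above:

```python
# Back-of-the-envelope check on the claimed "$100 million signing bonus"
claimed_bonus = 100_000_000     # figure from Altman's podcast claim
top_exec_package = 25_500_000   # Meta's highest-paid executive (COO Javier Olivan)
median_employee_comp = 417_400  # Meta's median employee compensation last year

print(f"vs. top executive package: {claimed_bonus / top_exec_package:.1f}x")      # ~3.9x
print(f"vs. median employee comp: {claimed_bonus / median_employee_comp:,.0f}x")  # ~240x
```

A claimed signing bonus that's roughly four times the top executive's entire package and about 240 times median pay should trip anyone's plausibility alarm.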
The warning signs were there for anyone trained to spot them. As media literacy experts note, the more sensational a post looks, the more skepticism it deserves, and exaggerated claims wrapped in emotional language are serious red flags.
The data paints a sobering picture of our collective digital gullibility. According to recent research, 38.2% of U.S. news consumers have unknowingly shared fake news or misinformation on social media, while 91% of Americans turn to social media for news. And with MIT researchers finding that false news reaches people roughly six times faster than true reporting, we're essentially swimming upstream against algorithmic incentives designed for engagement over accuracy.
Here's the truly alarming part: 59% of adults feel unable to identify misinformation online, yet 54% of Americans believe they are better at identifying misinformation than others. This overconfidence bias—what researchers call the "better than average" effect—means people who are worst at spotting fake news are often most confident in their abilities.
The Altman-Meta story perfectly illustrates this dynamic. How many people read about "$100 million Meta offers" and thought "that sounds crazy enough to be true" rather than "that sounds too crazy to be true"? The difference between those two reactions is media literacy in action.
Boston University research shows that 72% of Americans believe media literacy skills are important for identifying misinformation, but only 3% have actually participated in media literacy courses. There's a massive gap between recognizing the problem and developing the skills to solve it.
The psychological reasons people fall for misinformation are well-documented: confirmation bias makes us more likely to believe information that confirms our existing beliefs, while the availability heuristic makes recent, memorable examples feel more probable than they actually are. In the competitive AI hiring market, a story about massive signing bonuses fits the narrative many people already believe about Big Tech excess.
But media literacy training can help. Research from PNAS shows that digital literacy interventions increase discernment between mainstream and false news, though the effects are modest and decay over time. People who have received news literacy education are more likely to go to trusted news sources (50%) when checking suspicious information, compared to those who haven't (36%).
The good news is that basic verification skills aren't rocket science—they're just rarely taught. Here's what checking the Meta story should have looked like:
Source Verification: Altman made the claim on a family podcast, not in an official company communication or SEC filing. That's a red flag for business claims requiring verification.
Cross-Reference Financial Data: Public companies file detailed executive compensation reports. A quick check of Meta's proxy statements would have revealed the $100 million figure was implausible (a sketch of pulling those filings follows this list).
Context Checking: The claim came during a competitive moment between companies, when executives have incentives to make rivals look desperate or wasteful.
Expert Sources: Industry journalists and analysts who cover AI hiring would have been skeptical of such figures—and indeed, no credible tech publication ran the story without verification.
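None of this requires insider access. As an illustration of the financial cross-referencing step, here's a minimal sketch that lists Meta's recent proxy statements (form DEF 14A, where executive pay is disclosed) via the SEC's public EDGAR submissions API. It assumes Meta's EDGAR CIK of 0001326801 and the API's current JSON layout (verify both on sec.gov), and the SEC asks callers to identify themselves with a descriptive User-Agent header:

```python
import json
import urllib.request

# SEC EDGAR submissions API; CIK 0001326801 is assumed to be Meta Platforms
# (confirm via the company search on sec.gov before relying on it).
URL = "https://data.sec.gov/submissions/CIK0001326801.json"
request = urllib.request.Request(
    URL, headers={"User-Agent": "media-literacy-demo contact@example.com"}
)

with urllib.request.urlopen(request) as response:
    filings = json.load(response)["filings"]["recent"]

# "recent" holds parallel arrays: one entry per filing across each list.
for form, date, accession in zip(
    filings["form"], filings["filingDate"], filings["accessionNumber"]
):
    if form == "DEF 14A":  # the annual proxy statement with compensation tables
        print(date, accession)
```

Each accession number maps to a filing on EDGAR whose Summary Compensation Table takes minutes to read. That's the entire cost of checking a nine-figure pay claim.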
As fact-checking experts note, when an article makes specific numerical claims, checking sources becomes critical. Sometimes official-sounding statements from executives need the same scrutiny as anonymous social media posts.
What makes modern misinformation particularly insidious is how algorithms reward engagement over accuracy. Social media platforms' business model depends on keeping users scrolling, which means shocking, controversial, or emotionally provocative content gets prioritized. USC research showed that 15% of frequent social media news-sharers were behind up to 40% of the fake news circulating on Facebook.
The $100 million Meta story had all the ingredients for viral misinformation: it was shocking (nine-figure signing bonuses!), it fed existing narratives (Big Tech competition!), and it came from a celebrity source (OpenAI's CEO!). The algorithm doesn't care if it's true—it cares if it drives clicks, shares, and comments.
This is why media literacy can't just be about individual consumer behavior. Platforms need to be held accountable for information quality, not just engagement metrics. Recent Australian research found that 46% of respondents thought social media networks did a bad job of handling misinformation during recent riots, with almost 80% of those over 65 believing platforms should be held responsible for posts inciting harmful behavior.
The solution isn't just teaching people to be more skeptical—it's building systematic verification habits into how we consume information. This means:
Pre-Sharing Verification: Before sharing any claim, especially shocking ones, spend 30 seconds checking if major news outlets have covered it. If CNN, Reuters, or the Wall Street Journal haven't reported on "$100 million Meta signing bonuses," there's probably a reason (a scripted version of this check follows the list).
Source Stack Ranking: Develop a hierarchy of source credibility. Family podcasts rank lower than SEC filings. Anonymous social media posts rank lower than verified journalist accounts. Celebrity claims rank lower than expert analysis.
Emotional Circuit Breakers: The more outrageous or emotionally triggering a story feels, the more verification it needs before sharing. If your first reaction is "I can't believe this!" your second reaction should be "let me verify this."
Financial Literacy for Tech Stories: Understanding basic corporate finance helps evaluate business claims. A $100 million signing bonus for a researcher would be unprecedented and would require board approval at any public company.
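To make the pre-sharing habit concrete, here's a rough sketch of that 30-second coverage check in Python. It leans on Google News's public RSS search feed, an endpoint Google can change at any time, and the outlet matching is deliberately naive (feed source names don't always match how you'd write them), so treat it as a starting point rather than a verdict:

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

TRUSTED = {"Reuters", "CNN", "The Wall Street Journal"}  # your own stack ranking

def coverage_check(claim: str) -> None:
    """List which outlets Google News shows covering a claim."""
    url = "https://news.google.com/rss/search?q=" + urllib.parse.quote(claim)
    with urllib.request.urlopen(url) as response:
        feed = ET.parse(response)
    # Each RSS <item> carries a <source> element naming the outlet.
    outlets = {item.findtext("source", default="unknown") for item in feed.iter("item")}
    print("Trusted outlets covering it:", sorted(outlets & TRUSTED) or "none found")
    print("Sample of all sources seen:", sorted(outlets)[:10])

coverage_check('Meta "$100 million" signing bonus OpenAI')
```

If nothing reputable turns up, that absence is itself information: hold the share until someone whose job is verification has done theirs.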
This isn't just about embarrassing tech executives or correcting the record on hiring practices. The research shows misinformation is a cross-generational problem: 95% of parents are taking action to address it, but 39% of UK parents struggle to identify what is true themselves. When adults can't model good information hygiene, we're raising a generation that's even more vulnerable to manipulation.
Professional fact-checking has become politicized, with Americans less enthusiastic about policing information today than they were two years ago. A February YouGov poll found that U.S. citizens are far more likely to trust information from the Trump administration (44%) than news media (28%). In this environment, individual media literacy skills become even more crucial.
The proliferation of AI-generated content makes the problem worse. In 2025, AI-generated deepfakes, synthetic text, and manipulated media are increasingly indistinguishable from authentic content without technical analysis. The Altman story is quaint by comparison—at least we could verify it through public financial records.
Lucas Beyer's blunt "fake news" response to his former CEO's claim should be a wake-up call. If AI company executives are casually spreading unverified information, and researchers have to publicly fact-check their own leadership, we're all swimming in deeper misinformation waters than we realized.
The $100 million Meta myth is a perfect teaching moment because it's small stakes—nobody died, no elections were swayed, no medical misinformation spread. But it reveals the same verification failures that enable higher-stakes misinformation to flourish.
Media literacy isn't about becoming a professional fact-checker or trusting nothing you read online. It's about developing the habit of verification, understanding how information systems work, and recognizing when your emotional buttons are being pushed by content designed for engagement rather than accuracy.
In 2025, skepticism isn't cynicism—it's digital self-defense. And if you can't trust AI company CEOs to fact-check their own claims about competitor hiring practices, you definitely can't trust random social media posts about anything more important.
Ready to cut through digital noise and build authentic authority in the AI age? Winsome Marketing's growth experts help you create trustworthy content that stands up to scrutiny in an increasingly skeptical digital landscape.