Sometimes you read a tech review so breathlessly enthusiastic that you wonder if the author forgot they weren't writing OpenAI's quarterly earnings call. CNET's ChatGPT Plus review, which awards a stunning 9/10 score to a chatbot that literally sent the reviewer to buy the wrong computer processor, reads less like journalism and more like a love letter written during the honeymoon phase of a relationship destined for disaster.
Let's start with the elephant in the room: CNET's parent company, Ziff Davis, is currently suing OpenAI for copyright infringement. Yet somehow, this review manages to sound like it was ghostwritten by Sam Altman's PR team. That's either impressive compartmentalization or a masterclass in cognitive dissonance.
The "It's Fine When I Do It" Paradox
Our intrepid reviewer opens with a fascinating disclaimer: "As a journalist at CNET, I would never use ChatGPT itself to write for me." He then proceeds to spend 2,000 words explaining how ChatGPT has "completely changed how I do my job" by doing... his research. Which is, checks notes, literally the foundational work of journalism.
This is like saying "I would never use a ghostwriter" while having someone else write all your interview questions, gather your sources, and compile your background research. The distinction between "writing" and "research" here is doing more heavy lifting than a strongman competition.
The reviewer gushes about ChatGPT's ability to find "sources and other bits of information that would have taken hours of Google searching." But here's the uncomfortable question: if a journalist can't efficiently research topics in their field without AI assistance, what exactly are we paying them for? Entertainment value?
The review celebrates ChatGPT's rapid evolution, but glosses over the terrifying implications of this breakneck development pace. We're told about GPT-4.5 Research Preview, o1, o3, and a rotating cast of models being pushed to market faster than a startup burning through Series A funding.
This isn't iterative improvement—it's throwing spaghetti at the wall and seeing what sticks. The reviewer admits that GPT-4.5's launch "was a bit lackluster, with fans on Reddit complaining that it wasn't too dissimilar from 4o." Translation: OpenAI released a barely-different model and called it revolutionary. But hey, at least the version numbers keep getting bigger!
The breakneck pace of releases suggests a company more concerned with maintaining hype cycles than ensuring their products actually work. Remember when software companies used to test things before release? Those were simpler times.
The most unintentionally hilarious section involves ChatGPT's shopping capabilities. Our reviewer asks for CPU recommendations and ChatGPT suggests the AMD Ryzen 7 5800X3D. Great choice! Except it's discontinued and only available as expensive used stock.
When asked where to buy it, ChatGPT confidently provides a dead link to Newegg claiming it's "on sale for $199." Plot twist: the link actually leads to a completely different processor—the 5800X (without the crucial "3D" designation). ChatGPT literally tried to trick someone into buying the wrong component.
The reviewer's response to this objectively terrible performance? "Overall, ChatGPT is still an incredibly powerful shopping tool." This is like saying the Titanic was still an incredibly powerful ocean liner after noting that small iceberg incident.
The privacy section reads like a disclaimer written by lawyers who've given up on life. You're told not to upload sensitive information, yet OpenAI will still collect your name, date of birth, IP address, web browser, and device information. You can opt out of model training, but they'll still gather "some of your data."
The reviewer presents this as reassuring guidance rather than a massive red flag. It's the digital equivalent of "Don't worry, we only spy on you a little bit." The casual acceptance of this surveillance apparatus is stunning.
The scoring system here defies explanation. ChatGPT Plus gets a 9/10 despite:

- Recommending a discontinued processor as its top shopping pick
- Serving a dead Newegg link that pointed to an entirely different CPU while calling it a $199 deal
- Collecting names, birthdates, IP addresses, and device data even from users who opt out of model training
- Shipping barely-differentiated models at a pace that even fans called lackluster
This is either the most generous grading curve in history or evidence that our standards for AI have fallen so low that "sometimes works as intended" deserves an A-minus.
The most concerning aspect isn't ChatGPT's limitations—it's a tech journalist's complete dependency on it. The reviewer describes using AI for everything from fashion advice to component research to document analysis. This isn't augmented intelligence; it's outsourced thinking.
When journalists become ChatGPT power users rather than independent researchers, we don't get better journalism—we get AI-mediated content that's one hallucination away from misinformation. The reviewer's excitement about automating "the chores of parsing through troves of information" misses the point entirely: that parsing is literally the job.
Here's what $20/month actually gets you: access to a sophisticated autocomplete system that's really good at sounding confident while being wrong. It's like hiring a research assistant who went to Harvard but keeps lying on their resume.
The reviewer celebrates ChatGPT's ability to make "keyword searching in Google a quickly antiquated chore." But Google search, for all its flaws, at least shows you where information comes from. ChatGPT gives you confident-sounding answers with the source credibility of "trust me, bro."
The funniest part? This review perfectly demonstrates why we need better AI skepticism in tech journalism. Instead of critical analysis, we get cheerleading. Instead of examining the broader implications of AI dependency, we get lifestyle content about chatbot fashion advice.
CNET's 9/10 rating isn't a review—it's a testimonial. And testimonials, unlike actual journalism, don't require fact-checking.
Want honest AI analysis instead of marketing fluff? Winsome Marketing's growth experts provide reality-based assessments of AI tools and their actual business impact. We fact-check our own work—revolutionary concept, we know.