Content creation is one thing. But coding? That's where AI chatbots either prove their worth or completely fall apart.
Because unlike writing, where you can massage mediocre outputs into something usable, code either works or it doesn't. A webpage either renders properly or it's broken. There's no middle ground where you can sprinkle in some editing magic and call it done.
So when marketers and growth teams start leaning on AI for quick webpage builds, landing pages, or HTML email templates, they need to know which tools actually deliver functional code—not just code that looks impressive in a chat window.
We ran the test. Same prompt, four different AI models, zero room for interpretation. Here's what actually happened.
The brief was intentionally minimal to see how much initiative each model would take: "Code a contact webpage for my digital marketing agency website. Attached is the brand colors and a screenshot of the website to give brand style."
No copy provided. No detailed specifications. Just colors, a style reference, and a task. Let's see who can vibe-code a usable contact page with minimal hand-holding.
The contestants: Claude, Gemini, Perplexity, and ChatGPT.
Claude was the only model that provided a live preview of the coded page—already a point in its favor for user experience. But more importantly, it didn't just build a bare-minimum contact form and call it done.
The output included more than the bare form. Claude understood the assignment wasn't just "build a form"; it was "build a contact page." That contextual understanding meant it filled in the gaps a real webpage would need, even without explicit instructions.
For designers and marketers who need quick mockups or functional prototypes without writing detailed specifications, Claude's ability to infer and execute is genuinely valuable.
Verdict: Best overall for minimal-instruction webpage coding.
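Claude's live preview is a convenience the other models lacked. If you're replicating this kind of test, a few lines of Python (standard library only, nothing vendor-specific) can stand in for it by writing a model's HTML to disk and opening it in your default browser:

```python
import pathlib
import webbrowser

def preview(html: str, path: str = "preview.html") -> None:
    """Write model-generated HTML to disk and open it in the default browser."""
    out = pathlib.Path(path)
    out.write_text(html, encoding="utf-8")
    webbrowser.open(out.resolve().as_uri())
```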
Gemini picked up the font styling and created a technically functional form with hover effects. But that's where the ambition stopped.
The page felt skeletal—just a form floating in space with no supporting content, no navigation, no context. Usable in the strictest technical sense, but not something you'd actually deploy without significant additions.
The fonts were questionable (not quite matching the brand reference), and the overall execution lacked the polish or completeness you'd want for even a basic landing page.
Verdict: Works, but requires too much additional work to be practical.
Perplexity did something interesting that half the field skipped: it built a full navigation bar at the top of the page without being asked. The only other model to do this was ChatGPT, but Perplexity's execution was cleaner.
The form itself held up. The one oddity was a stray element that might have been a rendering issue, but overall, Perplexity demonstrated solid initiative in creating a more complete webpage experience. It understood that a contact page typically exists within a larger site architecture, hence the nav bar.
Verdict: Solid surprise performer for quick webpage builds.
ChatGPT built the nicest navigation bar of the bunch: clean hover effects, proper styling, genuinely usable. Then it completely dropped the ball on everything else.
The rest of the page was essentially unusable. No proper form visualization, no content structure, just a mess of code that technically ran but produced nothing you'd actually show to a user.
It's the AI equivalent of building a beautiful front door for a house with no rooms inside.
Verdict: Strong on individual components, fails at holistic page design.
The performance gaps here aren't minor—they're strategic. If you're using ChatGPT for all your coding tasks because it's what you're familiar with, you're potentially missing Claude's superior contextual understanding or Perplexity's surprising competence at webpage structure.
The lesson isn't "Claude is best for coding" (though it performed strongest in this test). The lesson is that tool selection should be task-specific and empirically validated, not based on brand preference or what everyone else is using.
For quick webpage mockups with minimal instructions, Claude wins. For projects requiring detailed navigation components, ChatGPT's nav bar capabilities shine despite its other weaknesses. For balanced performance with some structural thinking, Perplexity punches above expectations.
All four models completed the task quickly—speed wasn't a differentiating factor. What mattered was the usability of the output.
Fast code that doesn't work or requires hours of additional development isn't actually fast. Slightly slower code that's deployment-ready saves more time in the long run.
This is the trap many teams fall into with AI tools: optimizing for generation speed rather than output quality. The result is technically complete deliverables that still require substantial human intervention to become useful.
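One cheap way to make "usable output" measurable before a human reviews anything: check whether the generated markup even contains the structural elements a contact page needs. Here's a rough first-pass filter in Python, standard library only; the required tag set is our assumption, so tune it to your own acceptance criteria:

```python
from html.parser import HTMLParser

# Tags we expect a deployable contact page to contain. This set is an
# illustrative assumption, not a standard; adjust it to your own criteria.
REQUIRED_TAGS = {"nav", "form", "input", "footer"}

class TagCollector(HTMLParser):
    """Record every start tag that appears in the document."""
    def __init__(self) -> None:
        super().__init__()
        self.tags: set[str] = set()

    def handle_starttag(self, tag, attrs) -> None:
        self.tags.add(tag)

def missing_structure(html: str) -> set[str]:
    """Return the required tags the generated page failed to include."""
    parser = TagCollector()
    parser.feed(html)
    return REQUIRED_TAGS - parser.tags
```

A page that passes this check isn't automatically deployable, but a page that fails it is the "form floating in space" problem caught in milliseconds instead of a review meeting.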
If your team is building landing pages, email templates, or quick prototypes with AI, you need to know which model performs best for your specific use cases. Not which one performs best in general, but which one delivers usable code for the types of pages you actually build.
Run your own comparison tests. Same prompt, same assets, multiple models. Evaluate based on your actual needs: do you need initiative and gap-filling (Claude), or are you better off with models that stick strictly to specifications?
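As a starting point, here's a minimal sketch of that kind of harness in Python using the official anthropic and openai SDKs. The model names are placeholders to swap for whatever you're testing, and attaching brand assets (like the screenshot in our brief) requires each API's image-input format, which is omitted here for brevity:

```python
from anthropic import Anthropic
from openai import OpenAI

# The same brief goes to every model, verbatim.
PROMPT = (
    "Code a contact webpage for my digital marketing agency website. "
    "Attached is the brand colors and a screenshot of the website to "
    "give brand style."
)

def ask_claude(prompt: str) -> str:
    client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=4096,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text

def ask_chatgpt(prompt: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for name, ask in [("claude", ask_claude), ("chatgpt", ask_chatgpt)]:
        reply = ask(PROMPT)
        with open(f"{name}_contact.html", "w", encoding="utf-8") as f:
            f.write(reply)
        print(f"{name}: saved {len(reply)} characters")
```

In practice you'd extract the HTML from each reply first (models often wrap code in commentary), then run the structural check above on every file before a human looks at any of them.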
The answers may surprise you. And they'll save you hours of cleanup work when you stop using the wrong tool for the job.
Building AI workflows for your marketing and growth team? Winsome Marketing's AI implementation experts can audit your current tools, run task-specific performance tests, and design multi-model strategies that optimize for actual output quality, not just speed.