Google Built a Browser Called Disco That Generates Custom Apps

Written by Writing Team | Dec 16, 2025 1:00:03 PM

Google announced Disco this week—an experimental browser that takes queries or prompts, opens related tabs, and builds custom apps for whatever you're trying to accomplish. Ask for travel tips and it generates a planner app. Request study help and it creates a flashcard system. The Verge reports that the concept, called GenTabs, started as a hackathon project and caught the Chrome team's imagination enough to launch as a Google Labs experiment.

Parisa Tabriz, who runs Chrome at Google, emphasizes that Disco isn't intended as a general-purpose browser or an attempt to replace Chrome. It's an experiment to see what happens when users move "from just having tabs to creating this very personalized, curated app that helps them do what they need, right now."

That framing—moving from tabs to apps—is doing substantial conceptual work. Let's examine whether it's solving a real problem or creating interface complexity where simplicity already works.

What GenTabs Actually Does

GenTabs are information-rich pages generated by Gemini AI models. Google's Gemini 3 can create one-off interactive interfaces, essentially building miniature apps on the fly instead of returning text or images. You prompt Disco with what you're trying to accomplish, and it generates a custom interface with the information and tools you theoretically need.
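To make the flow concrete, here is a minimal sketch of a "prompt to mini-app" loop like the one GenTabs appears to implement. Google hasn't published implementation details, so `call_model` is a hypothetical stand-in for a real LLM API call; it returns a canned response here so the sketch runs without credentials.

```python
# Hedged sketch of a prompt-to-app pipeline: turn a user task into a
# one-off, self-contained HTML "app". `call_model` is a placeholder for
# a real model call (e.g. a Gemini or other LLM API), not Google's code.

APP_PROMPT = (
    "Build a self-contained HTML page that helps with this task: {task}. "
    "Return only the HTML."
)

def call_model(prompt: str) -> str:
    # Placeholder: a real implementation would send `prompt` to a model.
    return "<!doctype html><html><body><h1>Trip planner</h1></body></html>"

def generate_gentab(task: str) -> str:
    """Generate a one-off HTML 'app' string for the given task."""
    html = call_model(APP_PROMPT.format(task=task))
    if "<html" not in html:
        raise ValueError("model did not return an HTML document")
    return html

page = generate_gentab("plan a 3-day trip to Lisbon")
```

The interesting design question is everything this sketch omits: validating and sandboxing model-generated markup before rendering it, and deciding what state, if any, persists between generations.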

This is technically impressive. It's also conceptually adjacent to features that already exist across multiple platforms: Notion's AI blocks, Claude's Artifacts, ChatGPT's Canvas, and various other attempts to make AI-generated interfaces feel more interactive and structured than chat responses.

The value proposition depends on whether custom-generated apps are more useful than existing alternatives: opening relevant tabs manually, using dedicated tools designed for specific tasks, or simply getting a text-based answer from an AI assistant. For some use cases, generated apps might genuinely help. For others, they're interface overhead between you and the information you actually wanted.

The Use Cases Google Showcased

Travel planning and flashcard generation are the examples Google leads with. These are plausible scenarios where structured interfaces could add value over unstructured search results. A travel planner that aggregates flights, hotels, and activities in a single interactive interface saves you from juggling multiple tabs. A flashcard app that generates study materials from source content accelerates learning workflows.

Both examples assume the AI generates interfaces that are actually useful—not just visually structured but functionally superior to alternatives. Travel planning apps already exist. Flashcard tools already exist. Disco's value depends on whether AI-generated versions are better than purpose-built tools, and whether the overhead of generating custom interfaces is worth the marginal improvement over existing solutions.
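For contrast, the flashcard use case barely needs AI at all when the source material is structured. A toy rule-based version, shown below, illustrates the shape of the workflow that a generated app would have to meaningfully beat (the parsing rule here is an illustrative assumption, not how Disco works).

```python
# Toy illustration of the flashcard use case: convert definition-style
# source text ("Term: definition" per line) into question/answer cards.
# A generated app would do this with a model; this shows the baseline.

def make_cards(source: str) -> list[tuple[str, str]]:
    """Split 'Term: definition' lines into (question, answer) cards."""
    cards = []
    for line in source.splitlines():
        if ":" in line:
            term, definition = line.split(":", 1)
            cards.append((f"What is {term.strip()}?", definition.strip()))
    return cards

notes = (
    "GenTab: an AI-generated single-purpose page\n"
    "Disco: Google's experimental browser"
)
cards = make_cards(notes)
```

If a twenty-line script covers the baseline, the generated app's value has to come from what it adds on top: sourcing the content, scheduling review, and adapting to the learner.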

The Verge describes this as "Googling meets vibe coding," which is evocative and also slightly concerning. Vibe coding works when you're exploring possibilities and iteration is cheap. It's less useful when you need reliable, repeatable workflows—which is most of what people actually use browsers for.

The Browser That Isn't Really a Browser

Tabriz says Disco can "certainly open and interact with websites," but that's not its primary function. Its job is generating personalized apps. This creates an identity problem: if it's not a general-purpose browser, what is it? A prototyping tool for AI-generated interfaces? A test bed for what Gemini 3 can create? An experiment to see if users want this interaction model?

Google positions it as all three, which is defensible for a Labs experiment but doesn't clarify who should actually use Disco or what workflows it meaningfully improves. Experimental products can explore possibilities without needing immediate market fit. They can also waste user time exploring concepts that sound innovative but lack practical utility.

The question Disco needs to answer: is there a meaningful gap between "searching for information and opening relevant tabs" and "getting exactly the custom interface you need" that justifies the complexity of AI-generated apps? Or is this a solution in search of a problem?

What Google Isn't Saying About Interface Consistency

AI-generated interfaces are inherently inconsistent. Every prompt produces something different. That's both the feature and the limitation. For exploratory tasks where novelty helps, dynamic interfaces are useful. For routine tasks where muscle memory and predictability matter, dynamic interfaces create cognitive overhead.

Users learn where buttons are, how navigation works, and what information lives where. Custom-generated apps reset that learning every time. Google hasn't addressed how Disco handles this tension—whether generated interfaces follow consistent patterns, whether users can save and reuse layouts, or whether every interaction starts from scratch.

Interface consistency exists for reasons beyond aesthetics. It reduces cognitive load and enables efficient task completion. Throwing that away for personalization might be worthwhile in specific contexts. It's not obviously better for the majority of browser-based workflows.

The Hackathon Project That Became a Product

Disco started as a hackathon project that "caught the team's imagination." That's how innovative features often begin—engineers exploring ideas without the constraints of product roadmaps. It's also how feature bloat happens: engineers building things that are technically interesting but don't solve user problems at scale.

Google Labs is the appropriate venue for this kind of exploration. Users who opt into experiments understand they're testing unfinished concepts. What's less clear is Google's criteria for graduating experiments to production products. Does Disco need to demonstrate significant adoption? Solve problems existing tools can't handle? Or just prove that the technology works, regardless of whether users want it?

The honest answer: we won't know until Google sees actual usage data and decides whether to invest further or shut it down. That's exactly what experiments are for. But users considering whether to try Disco should understand they're helping Google answer those questions, not getting a reliable tool they can depend on long-term.

GenTabs and the Broader AI Interface Question

Disco is part of a larger industry conversation about how AI should change interfaces. Should AI assistants return text, or should they generate custom interfaces? Should tools be purpose-built and consistent, or dynamic and personalized? Should users learn applications, or should applications adapt to users?

There's no settled answer yet. Different companies are exploring different approaches. Google is betting that AI-generated apps have utility beyond novelty. Anthropic built Artifacts for similar use cases. OpenAI is exploring Canvas. Every frontier lab is testing variations on the same theme: can AI create interfaces that are better than static tools?

The challenge is that static tools benefit from years of design iteration, established workflows, and ecosystem effects. Dynamic AI-generated interfaces start from zero every time. They might win on personalization, but they lose on consistency, reliability, and accumulated feature development. Which matters more depends entirely on the use case.

Who Should Actually Try Disco

Disco makes sense if you're curious about AI-generated interfaces, willing to tolerate experimental instability, and interested in exploring whether custom apps are more useful than traditional search for your workflows. It makes less sense if you need reliable tools for production work, prefer consistent interfaces, or don't see meaningful gaps in existing solutions.

Google Labs experiments are free to try and easy to ignore if they don't resonate. Disco won't replace Chrome or fundamentally change how most people browse the web. It might identify use cases where AI-generated apps genuinely improve on alternatives—or it might prove that tabs and purpose-built tools were already solving the problem adequately.

The value in experiments like this isn't always the product itself. It's what Google learns about what works, what doesn't, and what users actually want when given the choice between familiar tools and AI-generated alternatives.

If you're evaluating AI interface experiments and need help separating genuine innovation from feature bloat, Winsome's team can walk you through what actually improves workflows versus what just adds complexity.