Google's Stitch Gets Gemini 3: Better Mockups, Familiar Questions
Google announced this week that Stitch—its experimental AI-powered design tool—now runs on Gemini 3, promising "higher quality UI generation" for bringing app ideas to life. The update also introduces Prototypes, a feature that lets users connect multiple screens on the canvas to create interactive flows rather than just static mockups.
The tool is available now at stitch.withgoogle.com. Google emphasizes it's experimental and has "a lot of improvements coming." That framing matters, because Stitch is entering a crowded market where Figma, Sketch, and Adobe XD have established workflows, plugin ecosystems, and years of iteration on what designers actually need.
The question isn't whether Gemini 3 makes Stitch better. It probably does. The question is whether "better" is sufficient when competing tools aren't standing still.
Stitch uses AI to generate user interface designs from text prompts. You describe what you want—"a mobile checkout flow with Apple Pay integration" or "a dashboard showing real-time analytics"—and the model produces screens. Gemini 3's integration theoretically improves output quality: more accurate layouts, better visual hierarchy, components that actually look like production-ready UI rather than placeholder mockups.
That's useful if you're in early-stage ideation and need to visualize concepts quickly without manual design work. It's less useful if you need pixel-perfect specs, custom component libraries, or designs that align with an existing brand system. AI-generated UI tends to be generic by default—it synthesizes patterns from training data, which means designs look familiar but rarely distinctive.
Google doesn't provide side-by-side examples of Gemini 2.5 output versus Gemini 3 output in Stitch, so we're taking its word that quality has improved. Presumably fewer broken layouts, more coherent typography, better spacing. Whether that translates to designs you'd actually ship is a different question.
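Google hasn't published what Stitch's generated screens look like under the hood, but any prompt-to-UI pipeline has to emit something structured that downstream tooling can sanity-check for exactly the failure modes above (broken layouts, empty screens). A minimal sketch of such a check, using an entirely hypothetical spec shape — not Stitch's real format:

```python
import json

# Hypothetical output shape for a prompt-to-UI generator;
# Stitch's actual format is not public.
SAMPLE = """
{"screens": [
  {"name": "checkout", "components": ["header", "cart_summary", "apple_pay_button"]},
  {"name": "confirmation", "components": ["header", "order_details"]}
]}
"""

def validate_spec(raw: str) -> list[str]:
    """Return a list of problems; an empty list means the spec passes basic checks."""
    problems = []
    try:
        spec = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    screens = spec.get("screens", [])
    if not screens:
        problems.append("no screens generated")
    names = [s.get("name") for s in screens]
    if len(names) != len(set(names)):
        problems.append("duplicate screen names")
    for s in screens:
        if not s.get("components"):
            problems.append(f"screen {s.get('name')!r} has no components")
    return problems

print(validate_spec(SAMPLE))  # []
```

The point is less the checks themselves than the principle: "better output quality" is only verifiable if the output is structured enough to measure.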
The new Prototypes feature addresses a real limitation of static screen generation. Designers don't just create individual screens—they design flows, interactions, and state transitions. Being able to connect screens and define navigation creates a more complete picture of how an app actually functions.
This isn't novel. Figma has had prototyping features for years. So have Sketch, Adobe XD, and dedicated prototyping tools like Principle and ProtoPie. What's potentially interesting about Stitch's version is whether AI can assist with interaction design—suggesting transitions, generating multiple states, or proposing navigation patterns based on best practices.
Google doesn't specify whether the Prototypes feature includes AI assistance or if it's purely manual screen linking. If it's manual, Stitch is catching up to baseline functionality that competitors have offered for years. If it's AI-assisted, that could differentiate the tool—assuming the AI suggestions are actually useful and not just pattern-matching from common design systems.
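Whatever Stitch does internally, "connect multiple screens on the canvas" is conceptually just a directed graph: screens are nodes, transitions are edges. A minimal sketch (all names hypothetical) showing why that structure is useful — for instance, flagging generated screens that were never linked into the flow:

```python
from dataclasses import dataclass, field

@dataclass
class Screen:
    name: str
    # outgoing links: trigger label -> destination screen name
    links: dict[str, str] = field(default_factory=dict)

def reachable(screens: dict[str, Screen], entry: str) -> set[str]:
    """Return the set of screens reachable from the entry via defined transitions."""
    seen, stack = set(), [entry]
    while stack:
        cur = stack.pop()
        if cur in seen or cur not in screens:
            continue
        seen.add(cur)
        stack.extend(screens[cur].links.values())
    return seen

# A three-screen checkout flow linked on the canvas.
flow = {
    "cart": Screen("cart", {"tap checkout": "payment"}),
    "payment": Screen("payment", {"tap pay": "confirmation"}),
    "confirmation": Screen("confirmation"),
}

print(sorted(reachable(flow, "cart")))  # ['cart', 'confirmation', 'payment']
# Orphaned screens (generated but never connected) are easy to surface:
print(set(flow) - reachable(flow, "cart"))  # set()
```

If the Prototypes feature is purely manual linking, this is the whole data model. AI assistance would mean the model proposing edges — suggested transitions and states — rather than the user drawing them all by hand.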
Google calls Stitch "experimental" and acknowledges "a lot of improvements coming." That's honest, but it's also a liability. Designers need tools they can depend on for client work, product launches, and collaboration with engineering teams. Experimental means unstable APIs, frequent breaking changes, and no guarantee the tool will exist in six months.
Google has a track record of launching experimental products and then discontinuing them when they don't achieve sufficient traction or strategic fit. Stitch might be different—it's clearly getting investment given the Gemini 3 integration. But designers remember Google Reader, Google+, Inbox, and dozens of other products that started as experiments and ended as deprecation notices.
That institutional memory makes adoption risky. Why invest time learning Stitch's workflows, building component libraries, or migrating existing projects if Google might pull the plug? Especially when established alternatives don't carry that risk.
There are scenarios where AI-powered design generation makes sense. Rapid prototyping for user research, where you need multiple variations quickly and visual polish matters less than testing interaction patterns. Internal tools and admin interfaces, where speed matters more than brand differentiation. Early-stage startups that need something functional before hiring designers.
Stitch could also be valuable for non-designers who need to communicate ideas visually—product managers, engineers, founders. If Gemini 3 makes UI generation accessible enough that people without design skills can produce usable mockups, that's legitimately useful. The risk is that "usable mockups" become "shipped products" without designer involvement, which is how we get poorly designed software.
AI design tools lower the barrier to creating interfaces. They don't lower the barrier to creating good interfaces. Those are different problems with different solutions.
Google hasn't articulated where Stitch fits in their product strategy or how it competes with established design tools. Is this a Figma alternative? A prototyping supplement? A workflow enhancement for developers who don't work with designers? The "experimental" label gives Google flexibility to pivot, but it also signals uncertainty about the tool's purpose.
Figma dominates collaborative design with network effects, plugin ecosystems, and deep integration with developer handoff workflows. Adobe owns the creative professional market. Sketch has an established presence among Mac-based design teams. Stitch entering this market with AI as a differentiator only works if its AI capabilities are substantially better than what competitors are also building—and competitors are building AI features too.
Figma announced AI-powered design generation features. Adobe has Firefly integrated across Creative Cloud. The competitive advantage of "we have AI" erodes quickly when everyone has AI.
Stitch will succeed or fail based on whether professional designers adopt it for real work. That requires more than better AI—it requires ecosystem, reliability, feature completeness, and trust that the tool will exist long enough to justify investment.
Google has the resources to build all of this. Whether they have the commitment is less clear. Experimental products get experimental levels of support. If Stitch is a research project exploring AI-assisted design, it might produce interesting insights without becoming a viable product. If it's a serious attempt to compete in the design tool market, it needs to drop the "experimental" label and commit to the long-term infrastructure that market demands.
Right now, Stitch is a capable prototype of what AI design tools could be. Whether it becomes more than that depends on decisions Google hasn't made public yet—and probably hasn't finalized internally either.
If you're evaluating design tools or figuring out where AI actually improves creative workflows, Winsome's team can help you separate capability from hype.