Google Stitch Brings AI to Design-to-Code Workflows

Experimental tool turns prompts into UI and code.

Google Labs logo for Stitch, the experimental AI design-to-code tool. Image Credit: Google Labs

Google has begun testing Stitch, an AI-powered tool that generates user interfaces and front-end code from prompts or sketches. First announced at Google I/O in May 2025, Stitch is positioned as an experiment in reducing the friction between design and development.

The tool works in two modes: text-to-UI, where users describe layouts in plain language, and sketch-to-UI, where rough drawings are translated into structured designs. Outputs include interface variations, HTML/CSS code, and direct exports to Figma for refinement. Underpinned by Google’s Gemini 2.5 models, Stitch processes both language and visuals to infer hierarchy, layout, and styling.
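To make that output concrete, below is a minimal, hypothetical sketch of the kind of static HTML/CSS draft a text-to-UI prompt such as "a sign-in card with an email field and a primary button" might produce. The markup, class names, and styling are illustrative assumptions, not actual Stitch output.

```html
<!-- Hypothetical example of a static UI draft; not actual Stitch output. -->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Sign-in card (illustrative draft)</title>
  <style>
    /* Centered card with stacked form elements */
    .card { max-width: 360px; margin: 48px auto; padding: 24px;
            border: 1px solid #ddd; border-radius: 8px; font-family: sans-serif; }
    .card label { display: block; margin-bottom: 4px; font-size: 14px; }
    .card input { width: 100%; padding: 8px; margin-bottom: 16px;
                  border: 1px solid #ccc; border-radius: 4px; }
    .card button { width: 100%; padding: 10px; border: none; border-radius: 4px;
                   background: #1a73e8; color: #fff; font-size: 15px; }
  </style>
</head>
<body>
  <div class="card">
    <h2>Sign in</h2>
    <label for="email">Email</label>
    <input id="email" type="email" placeholder="you@example.com">
    <button type="button">Continue</button>
  </div>
</body>
</html>
```

Because the draft is plain HTML/CSS rather than a proprietary format, it can be previewed in a browser, handed to developers as a starting point, or exported to Figma for further design refinement.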

The practical appeal lies in efficiency. Interface drafts that typically take hours can be produced in minutes, offering teams a way to explore more options during prototyping. Its ability to generate both visuals and code also reduces translation issues at the design–development handoff.

Stitch remains limited in scope. It currently supports only English prompts, generates static rather than interactive designs, and is better suited to single screens than complex, multi-page applications. For now, its role is less a replacement for existing workflows than a supplement—useful for ideation, quick iteration, and early-stage experimentation.

As it develops, Stitch will test whether AI can be integrated into interface design without oversimplifying the process. For design teams, the outcome will be less about automation and more about whether the tool can provide practical value in everyday workflows.
