Generative UI won’t replace you

Generative UI (genUI) will help designers scale the value they bring to their products. It and methods like it will change how designers do their work, but not why they do it. To be scaled rather than replaced, though, designers need to meet a few requirements…

Beau Ulrey
6 min read · Sep 18, 2024
[Image: a hand rail at night with a blurry, brightly lit street behind it; part of the image sits inside a circle, rotated slightly to break the line of the rail.]
Connect the dots instead of detailing them.

Disclosure

I’m not an AI expert. There it is. My goal is to dig deeper into aspects of emerging tech that can help designers and product teams work more effectively and create better experiences for their customers. I’m learning by reading (resources at the bottom) and writing. If that’s helpful, great! If not, I totally get it. Also, I promise not to use any images of robots or future-people wearing goggles waving their arms around.

First, some context

“GenUI” is one of those terms that can spark a number of ideas, so I’ll start by defining it from my perspective and sharing some examples. According to NN/g, genUI is “…a user interface that is dynamically generated in real time by artificial intelligence (AI) to provide an experience customized to fit the user’s needs and context.” AI is another concept that probably needs a more detailed definition, but we’ll leave it at a computer taking user input and creating output automatically, usually driven by trained-up and well-fed models.

There are some differences I’d like to draw between genUI and AI. They hold hands a lot, but they’re still two different people in my mind. AI, in the prompt-to-design sense, takes a prompt and creates a design. This application of AI is meant to “democratize” design, making it a function anyone can fill without formal training or mentorship. It’s a quest for a magic bullet that lets product owners or developers without knowledge of good design create solid designs without designer involvement. That’s the realm of generative AI, and it largely falls short of expectations, as we’ve seen in cases like Figma’s initial launch of AI features during Config 2024.

As we dig in, I hope to show how genUI can only be successful and beneficial to the business and customer if the designer involved is highly skilled, especially in the essentials like systems thinking and emerging tech.

From my perspective, genUI means establishing a logical system through deep collaboration between design and engineering to resolve all possible cases in a product experience. Traditional product development means creating high- or at least mid-fi design assets for each and every workflow, screen size, error state and the like to guide the development process. This can help development go more smoothly because fewer questions pop up mid-flight, but it also costs a lot of time to do all that design work up front. Often, problems are solved ad hoc instead of in a logical, repeatable fashion.

Using a genUI approach looks a bit different. Designers and developers collaborate to establish templates and logic for core workflows, and use the same logic to address all scenarios that pop up. Designs aren’t required for every little edge case, and often neither are developers. Individuals are more empowered to solve problems as they arise.

Building and shipping can move more quickly, and the end user experience benefits from new levels of customization. For examples of experiences rendered with genUI, check out Google’s “Rich results” (carousels, calculator, stocks, etc.) or the Vercel AI SDK’s demonstration of components loading based on user input and data structure. Both are strong examples of the experience adjusting to the data being displayed. Designers and developers are not defining each and every scenario in pixel-perfection; they are setting up the templates, logic and components that allow models to structure pages just so.
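
To make “templates and logic” concrete, here’s a minimal sketch in plain TypeScript of how structured data might be mapped onto design-system components at runtime. It is not the Vercel AI SDK or Google’s implementation, and the component names are hypothetical.

```
// Hypothetical result shapes a model or API might return.
type ResultData =
  | { kind: "weather"; city: string; tempC: number }
  | { kind: "stock"; symbol: string; price: number; change: number }
  | { kind: "text"; body: string };

// The "logic layer": map the shape of the data onto a design-system
// component instead of hand-designing a screen for every scenario.
function renderResult(data: ResultData): string {
  switch (data.kind) {
    case "weather":
      return `<WeatherCard city="${data.city}" tempC={${data.tempC}} />`;
    case "stock":
      return `<StockTicker symbol="${data.symbol}" price={${data.price}} change={${data.change}} />`;
    case "text":
      return `<Paragraph>${data.body}</Paragraph>`;
  }
}

// The same logic covers every scenario the data can produce; no one
// designs or builds a bespoke screen for the stock case.
console.log(renderResult({ kind: "stock", symbol: "ACME", price: 42.5, change: -1.2 }));
```

The point isn’t the code itself but where the design effort goes: into the mapping and the components, not into individual screens.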

Scaling a designer’s impact through this approach means applying knowledge of best practices to the templates, logic and components that enable genUI to create that structure and produce experiences without designer or developer involvement. Instead of putting a stamp on a single feature or workflow over the course of a sprint or quarter, the logic created can impact hundreds of scenarios for millions of users. Even if the user base is small, the impact gets multiplied.

So what does it take to be part of this new way of product creation?

Requirement 1: A strong design system

To effectively explore genUI methods, teams need a complete component set with documentation, so that automatically generated experiences take advantage of repeatable components that are ideally accessible, on-brand and consistent with the rest of the customer experience. Whether homegrown or third-party, the parts and the language around them are crucial: they guide designers and developers as they set up the logic, and they inform the models themselves. Models need good info, and solid design systems are an excellent source.
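
One way to picture this: if the design system’s documentation is captured as structured metadata, the generation logic can only choose from vetted parts. A rough sketch, with a hypothetical schema and component names:

```
// A hedged sketch of design-system metadata that generation logic (or a
// model) could consume. The schema and component names are hypothetical.
interface ComponentSpec {
  name: string;            // design-system component name
  purpose: string;         // when to use it, pulled from the documentation
  requiredProps: string[]; // data the component needs to render
  a11yNotes: string;       // accessibility guidance baked into the system
}

const componentCatalog: ComponentSpec[] = [
  {
    name: "DataTable",
    purpose: "tabular data with more than three columns or ten rows",
    requiredProps: ["columns", "rows"],
    a11yNotes: "Announces column headers; supports keyboard row navigation",
  },
  {
    name: "StatCard",
    purpose: "a single headline metric with an optional trend",
    requiredProps: ["label", "value"],
    a11yNotes: "Value and trend read as a single sentence to screen readers",
  },
];

// Constraining generation to the catalog keeps output on-brand, accessible
// and consistent with the rest of the experience.
function pickComponent(purposeHint: string): ComponentSpec | undefined {
  return componentCatalog.find((c) =>
    c.purpose.toLowerCase().includes(purposeHint.toLowerCase())
  );
}

console.log(pickComponent("headline metric")?.name); // "StatCard"
```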

If teams need to create new components, templates or patterns on top of the system, having a strong foundation will help them move more quickly. In an ideal world, the things they create find their way back into the system as contributions, or into the community-owned recipe layer of a reusability ecosystem.

Requirement 2: Systems thinking skills

The designer and developer need to understand how components work and how to solve for unexpected scenarios. Systems thinkers deal in templates and structures instead of mockups, though they probably make mockups too as a communication tool. When creating experiences, they break pages down atomically (or atomically-ish) into components and atoms and think flexibly about line wrapping, translation and image handling. Even accessibility considerations like text resizing and the screen reader experience are handled through deeper logic. Conventions and design system guidance are the foundation used to tackle everything.

Requirement 3: Design + engineering collaboration

I hope this isn’t news to anyone working in the digital product world. You can’t skip design, and designers can’t check out and move on during the build process. My own best experiences have all happened when I built a close relationship with my tech partners and worked on problems together. And my worst experiences resulted from a lack of contact and trust. Rework, missed opportunities, frustrations and missed deadlines so often came from poor collaboration or non-existent communication.

For designers, a great place to start is communicating intent and logic more clearly. Pixel-perfect designs have their place, but real-world lines of code often benefit very little from specifying 8px instead of 6 between elements. Is there a spacing convention or scale? That’s the kind of structure that helps developers move faster and code in more effective ways. Is there no spacing scale, and are designers visually nudging elements until they ‘look right’? A tight partnership with developers would help those designers understand their medium and be more effective in their work.
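
As an illustration of what such a convention can look like, here’s a small sketch of a spacing scale captured as tokens; the names and values are hypothetical, not a recommendation:

```
// A hypothetical spacing scale expressed as design tokens.
const spacing = {
  xs: 4, // px
  sm: 8,
  md: 16,
  lg: 24,
  xl: 40,
} as const;

type SpacingToken = keyof typeof spacing;

// Designers specify intent ("small gap between label and field") and
// developers reference the token, instead of negotiating 6px vs. 8px
// screen by screen.
function gap(token: SpacingToken): string {
  return `${spacing[token]}px`;
}

console.log(gap("sm")); // "8px"
```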

Together, designers and developers (and really all disciplines involved) can figure out which scenarios need to be tackled and how they need to flex for edge cases, setting up the strong logic that powers the scaling we’re after.

Opinion

My personal view is that generative UI will help designers and developers create more personalized experiences that just make sense for users. Roles will not go away, but they will shift toward smarter ways of working that reach more customers. To get set up for this future, designers especially need to go deep on systems thinking and spend more time understanding their medium and their partners in engineering.

