The web is moving beyond static templates. What if your UI could generate itself on-demand, adapting to user context, data complexity, and AI-driven insights? That’s the promise of Generative UI — a pattern where interfaces are dynamically created by AI models and delivered through React Server Components.
What is Generative UI?
Generative UI is a design pattern where user interfaces are created programmatically in response to user requests, often powered by large language models (LLMs). Instead of pre-defining every possible UI state, you let AI reason about what interface elements are needed and generate appropriate React components on the fly.
Think of it as the UI equivalent of text generation: just as GPT can write an essay, generative UI systems can “write” interfaces — choosing between charts, tables, forms, or custom visualizations based on the user’s intent and available data.
This approach shines in scenarios where:
- User requests are unpredictable or highly varied
- Data structures change frequently
- You want interfaces that explain themselves
- Traditional routing becomes unwieldy
The Technical Foundation
At its core, generative UI leverages React Server Components (RSCs) and streaming. Here’s a minimal example using Vercel’s AI SDK:
⚠️ Experimental Status Warning: The Vercel AI SDK's React Server Components integration (@ai-sdk/rsc) is currently experimental. APIs may change, and it's not recommended for production use without careful evaluation. React Server Components themselves are still evolving in the React ecosystem.
import { createStreamableUI } from '@ai-sdk/rsc';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
export async function generateDashboard(userQuery: string) {
  const ui = createStreamableUI();

  try {
    // Show loading state immediately
    ui.update(<div>Analyzing your request...</div>);

    // Let the AI decide what to render
    const { text } = await generateText({
      model: openai('gpt-4'),
      prompt: `User asks: "${userQuery}". Return JSON describing what UI to show.`,
    });

    let config;
    try {
      config = JSON.parse(text);
    } catch (parseError) {
      ui.done(<ErrorMessage message="Invalid response format from AI" />);
      return ui.value;
    }

    // Render the appropriate component
    if (config.type === 'chart') {
      ui.done(<BarChart data={config.data} />);
    } else if (config.type === 'table') {
      ui.done(<DataTable columns={config.columns} rows={config.rows} />);
    } else {
      ui.done(<ErrorMessage message={`Unknown component type: ${config.type}`} />);
    }
  } catch (error) {
    console.error('Dashboard generation failed:', error);
    ui.done(<ErrorMessage message="Failed to generate dashboard. Please try again." />);
  }

  return ui.value;
}
The magic is in createStreamableUI(). It allows you to update the UI progressively — showing intermediate states (like loading indicators) before returning the final component. This creates a smooth, streaming experience where users see something instantly while the AI works.
Understanding createStreamableUI
Unlike generator functions (which use function* and yield), createStreamableUI uses an imperative API with .update() and .done() methods. This is different from streaming text generation where you might use async generators. The UI streaming model is push-based: you push updates when you have them, rather than yielding values from a generator.
Real-World Use Cases
AI-Powered Dashboards
Instead of building separate pages for every report type, let AI generate the right visualization. Consider a business intelligence platform where users ask questions in natural language:
// User: "Show me revenue trends for Q1"
// → AI generates: <LineChart data={q1Revenue} />
// User: "Compare that to last year"
// → AI generates: <ComparisonChart current={q1} previous={lastYear} />
Each query produces a tailored interface without writing explicit routing logic. The AI can choose between line charts, bar graphs, heat maps, or even custom visualizations based on data characteristics. A time-series question gets a line chart; categorical comparisons become bar charts; geographic data renders as maps. The system adapts to the question, not vice versa.
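That data-to-visualization mapping can be sketched as a simple heuristic. The field names and rules below are assumptions for illustration (in practice the LLM, not a hand-written function, would make this call):

```typescript
// Illustrative heuristic mapping data shape to a visualization type.
// Field names ('date', 'lat', 'category', ...) are assumptions for this sketch.
type DataPoint = Record<string, unknown>;

function pickVisualization(rows: DataPoint[]): 'line' | 'bar' | 'map' | 'table' {
  if (rows.length === 0) return 'table';
  const keys = Object.keys(rows[0]);
  if (keys.includes('date') || keys.includes('timestamp')) return 'line'; // time series
  if (keys.includes('lat') && keys.includes('lng')) return 'map';         // geographic
  if (keys.includes('category')) return 'bar';                            // categorical comparison
  return 'table';                                                         // safe fallback
}
```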
Conversational Commerce
E-commerce sites can generate product comparisons, size guides, or checkout flows dynamically:
import { createStreamableUI } from '@ai-sdk/rsc';

async function handleProductQuery(query: string) {
  const ui = createStreamableUI();

  try {
    if (query.includes('compare')) {
      const products = await findSimilarProducts(query);
      if (products.length === 0) {
        ui.done(<div>No similar products found.</div>);
      } else {
        ui.done(<ComparisonGrid products={products} />);
      }
    } else if (query.includes('size')) {
      ui.done(<SizeGuide product={currentProduct} />);
    } else {
      ui.done(<div>I'm not sure how to help with that. Try asking about product comparisons or sizing.</div>);
    }
  } catch (error) {
    console.error('Product query failed:', error);
    ui.done(<ErrorMessage message="Something went wrong. Please try again." />);
  }

  return ui.value;
}
Dynamic Forms
Generate forms based on database schemas or business rules:
import { createStreamableUI } from '@ai-sdk/rsc';

async function generateDynamicForm(documentType: string) {
  const ui = createStreamableUI();

  try {
    ui.update(<div>Loading form schema...</div>);

    const schema = await fetchFormSchema(documentType);
    if (!schema || !schema.fields) {
      ui.done(<ErrorMessage message={`No form available for document type: ${documentType}`} />);
      return ui.value;
    }

    const formUI = generateFormFromSchema(schema); // Returns React components
    ui.done(formUI);
  } catch (error) {
    console.error('Form generation failed:', error);
    ui.done(<ErrorMessage message="Failed to load form. Please refresh and try again." />);
  }

  return ui.value;
}
This eliminates the need to hard-code every form variation while maintaining proper error handling.
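One possible shape for the schema-to-form step is sketched below. The SchemaField format and widget names are assumptions for illustration; in a real app each descriptor would be rendered to a React input component.

```typescript
// Sketch of mapping a fetched schema to renderable field descriptors.
// The SchemaField shape and widget names are assumptions for this sketch.
interface SchemaField {
  name: string;
  type: 'text' | 'number' | 'date' | 'select';
  required?: boolean;
  options?: string[];
}

interface FieldDescriptor {
  name: string;
  widget: 'input' | 'select'; // select fields need a dropdown, the rest a plain input
  inputType?: string;         // HTML input type when widget === 'input'
  required: boolean;
  options: string[];
}

function describeFormFields(fields: SchemaField[]): FieldDescriptor[] {
  return fields.map((f) => ({
    name: f.name,
    widget: f.type === 'select' ? 'select' : 'input',
    inputType: f.type === 'select' ? undefined : f.type,
    required: f.required ?? false,
    options: f.options ?? [],
  }));
}
```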
Implementation Guide
Step 1: Set Up Your Environment
npm install ai @ai-sdk/openai @ai-sdk/react @ai-sdk/rsc
Note: @ai-sdk/rsc is an experimental package. Install it only if you're comfortable with potential breaking changes.
Configure your server action in Next.js:
// app/actions.ts
'use server';

import { createStreamableUI } from '@ai-sdk/rsc';
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

export async function generateUI(prompt: string) {
  const ui = createStreamableUI();

  try {
    // Initial loading state
    ui.update(<Skeleton />);

    // AI reasoning happens here
    const result = await generateText({
      model: openai('gpt-4'),
      prompt: prompt,
    });

    // Parse and validate the response
    let parsed;
    try {
      parsed = JSON.parse(result.text);
    } catch (e) {
      ui.done(<ErrorMessage message="Failed to parse AI response" />);
      return ui.value;
    }

    // Final component
    ui.done(<ResultComponent data={parsed} />);
  } catch (error) {
    console.error('UI generation error:', error);
    ui.done(<ErrorMessage message="An error occurred while generating the UI" />);
  }

  return ui.value;
}
Step 2: Create a Client Component
// app/page.tsx
'use client';

import { useState } from 'react';
import { generateUI } from './actions';
import { ErrorBoundary } from './components/ErrorBoundary';

export default function Page() {
  const [ui, setUI] = useState<React.ReactNode>(null);
  const [error, setError] = useState<string | null>(null);

  async function handleSubmit(query: string) {
    try {
      setError(null);
      const result = await generateUI(query);
      setUI(result);
    } catch (err) {
      setError(err instanceof Error ? err.message : 'An unexpected error occurred');
    }
  }

  return (
    <div>
      {/* Submit on Enter rather than on every keystroke, so the server
          action isn't invoked once per character typed */}
      <form
        onSubmit={(e) => {
          e.preventDefault();
          const input = e.currentTarget.elements.namedItem('query') as HTMLInputElement;
          handleSubmit(input.value);
        }}
      >
        <input name="query" placeholder="Ask for a dashboard..." />
      </form>
      {error && <div className="error">{error}</div>}
      <ErrorBoundary>{ui}</ErrorBoundary>
    </div>
  );
}
Step 3: Add an Error Boundary
Since generative UI can fail in unexpected ways, always wrap generated components in error boundaries:
// app/components/ErrorBoundary.tsx
'use client';

import { Component, ReactNode } from 'react';

interface Props {
  children: ReactNode;
  fallback?: ReactNode;
}

interface State {
  hasError: boolean;
  error?: Error;
}

export class ErrorBoundary extends Component<Props, State> {
  constructor(props: Props) {
    super(props);
    this.state = { hasError: false };
  }

  static getDerivedStateFromError(error: Error): State {
    return { hasError: true, error };
  }

  componentDidCatch(error: Error, errorInfo: React.ErrorInfo) {
    console.error('ErrorBoundary caught an error:', error, errorInfo);
  }

  render() {
    if (this.state.hasError) {
      return this.props.fallback || (
        <div className="p-4 border border-red-300 rounded bg-red-50">
          <h3 className="text-red-800 font-semibold">Something went wrong</h3>
          <p className="text-red-600">
            {this.state.error?.message || 'An unexpected error occurred'}
          </p>
        </div>
      );
    }
    return this.props.children;
  }
}
Step 4: Build Component Selection Logic
Create a registry of available UI components with proper error handling:
import { ComponentType } from 'react';

interface ComponentProps {
  data?: unknown;
  rows?: unknown[];
  fields?: unknown[];
  [key: string]: unknown;
}

const componentRegistry: Record<string, ComponentType<ComponentProps>> = {
  chart: ({ data }) => <Chart data={data} />,
  table: ({ rows }) => <Table rows={rows} />,
  form: ({ fields }) => <DynamicForm fields={fields} />,
};

function selectComponent(aiResponse: { type: string; props: ComponentProps }) {
  const { type, props } = aiResponse;
  const Component = componentRegistry[type];

  if (!Component) {
    console.error(`Unknown component type: ${type}`);
    return <ErrorMessage message={`Component type "${type}" is not supported`} />;
  }

  return <Component {...props} />;
}
Comparison with Traditional Approaches
| Aspect | Traditional UI | Generative UI |
|---|---|---|
| Routes | Pre-defined paths | Dynamic, intent-based |
| Components | Static imports | Runtime selection |
| Flexibility | Requires code changes | Adapts automatically |
| Complexity | Scales linearly with features | Stays relatively flat |
| Performance | Predictable bundle size | Streams on demand |
| Error Handling | Compile-time checks | Runtime validation needed |
| Reliability | High predictability | Dependent on AI behavior |
Traditional approaches work great for known workflows. Generative UI excels when user needs are unpredictable or when you want to reduce maintenance overhead for frequently changing interfaces.
When NOT to Use Generative UI
- High-frequency interactions (e.g., text editors) — latency matters
- Security-critical flows (e.g., payment forms) — explicit control needed
- Simple, stable interfaces — added complexity isn’t justified
- Offline-first apps — requires network for generation (unless using local models)
- Applications requiring strict compliance — AI-generated content may be harder to audit
Best Practices and Error Handling
Defensive Programming
Always assume the AI might return unexpected results:
import { z } from 'zod';
import { createStreamableUI } from '@ai-sdk/rsc';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const ComponentSchema = z.object({
  type: z.enum(['chart', 'table', 'form']),
  props: z.record(z.unknown()),
});

async function safeGenerateUI(query: string) {
  const ui = createStreamableUI();

  try {
    const { text } = await generateText({
      model: openai('gpt-4'),
      prompt: query,
    });

    let parsed;
    try {
      parsed = JSON.parse(text);
    } catch {
      ui.done(<ErrorMessage message="Invalid JSON response" />);
      return ui.value;
    }

    // Validate against the schema before rendering anything
    const result = ComponentSchema.safeParse(parsed);
    if (!result.success) {
      console.error('Schema validation failed:', result.error);
      ui.done(<ErrorMessage message="Response did not match expected format" />);
      return ui.value;
    }

    ui.done(selectComponent(result.data));
  } catch (error) {
    console.error('Generation failed:', error);
    ui.done(<ErrorMessage message="Generation failed" />);
  }

  return ui.value;
}
Timeout Handling
AI generation can be slow. Consider adding timeouts:
import { createStreamableUI } from '@ai-sdk/rsc';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

async function generateUIWithTimeout(prompt: string, timeoutMs = 10000) {
  const ui = createStreamableUI();
  let timer: ReturnType<typeof setTimeout> | undefined;

  const timeoutPromise = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error('Generation timed out')), timeoutMs);
  });

  try {
    ui.update(<div>Generating...</div>);

    const result = await Promise.race([
      generateText({ model: openai('gpt-4'), prompt }),
      timeoutPromise,
    ]);

    ui.done(<ResultComponent data={result} />);
  } catch (error) {
    const message = error instanceof Error && error.message === 'Generation timed out'
      ? 'Request took too long. Please try a simpler query.'
      : 'Failed to generate UI';
    ui.done(<ErrorMessage message={message} />);
  } finally {
    // Clear the pending timer so it doesn't fire after a successful generation
    clearTimeout(timer);
  }

  return ui.value;
}
Future Outlook
Generative UI is still emerging, but several trends are clear:
Multi-modal generation: Expect UIs generated from voice commands, sketches, or even eye-tracking data. The next frontier is cross-device consistency — generating a desktop layout and mobile variant from the same intent.
Component marketplaces: As patterns mature, we’ll see libraries of “generative-ready” components optimized for AI selection and composition. Think npm packages tagged with semantic descriptions for LLM discovery.
Local-first generation: With models like Llama 3.1 and Phi-3, we’ll generate UIs client-side without network round-trips. This opens generative patterns to offline apps and privacy-sensitive contexts.
Hybrid approaches: Most production apps will mix static and generative UIs — using traditional routing for critical paths while generating auxiliary interfaces on demand.
Improved reliability: As AI models get better at structured outputs and the Vercel AI SDK stabilizes, error rates will decrease and predictable formatting will become the norm.
Conclusion
Generative UI represents a fundamental shift in how we think about interface design. Instead of anticipating every user need upfront, we give our applications the tools to construct interfaces in real time, guided by AI reasoning and user context.
This doesn’t replace traditional UI development — it complements it. Use generative patterns where flexibility and adaptability matter most, and stick with static components where predictability and performance are paramount.
Remember: The @ai-sdk/rsc package is experimental. Monitor the Vercel AI SDK documentation for API changes and production readiness updates.
The tooling is here, the patterns are emerging, and the ecosystem is rapidly maturing. Whether you’re building AI chat interfaces, adaptive dashboards, or conversational commerce experiences, generative UI offers a powerful new approach to meet users where they are — with interfaces that form around their needs rather than forcing them into predefined paths.
Start small: pick one dynamic feature in your app and experiment with generating it. The best way to understand this pattern is to build with it — but be sure to handle errors gracefully and set appropriate expectations with your users.