Just-in-Time Interfaces
Emerging UX patterns for AI-native products that trade static screens and flows for UI assembled on the fly.
As AI reshapes how software responds, it’s also reshaping how software anticipates. Static screens and rigid flows are giving way to something more dynamic: interfaces that adapt in real time and respond to user needs in the moment.
We’re entering an era where products don’t wait for users to find the right feature—they bring the right feature to the user. Instead of navigating menus and multi-step flows, users engage through just-in-time interactions that respond to context, anticipate needs, and adapt fluidly to each situation. It’s not just about chat. It’s about rethinking how digital experiences show up—how they guide, respond, and collaborate. This is the emerging shape of Just-in-Time UI—and we’re here to discuss why it matters, and what it asks of designers and product teams looking to build more responsive, human-centred software.
The Interface Has Left the Building
For decades, we’ve built digital experiences like we build houses. Rooms, doors, hallways. You enter through the front, poke around, open a drawer, maybe find what you need in the back corner of the second floor. This has worked well enough. People learned the layout. Designers became good at drawing floor plans.
But something strange is happening. AI is dissolving the walls.
There’s a new kind of interface—one that doesn’t live inside rooms or hallways. It has no door. It doesn’t wait for you to walk over. It comes to you.
The old assumptions about how software should behave—how it should be structured, how users should find things, how teams should build and ship features—are starting to break down.
Users don’t always know what they’re looking for. They don’t always start from the same place. Sometimes they just want to ask a question. Sometimes they want a shortcut to a complex task. And sometimes they don’t know what they need until the product gently shows them.
Product teams, meanwhile, carry the weight of complexity: Every new feature means more navigation to maintain, more UI to thread together, more onboarding, more decision trees. Every flow needs to be accounted for, tested, and translated across devices. We’re building houses for every possible use case—and watching the floor plan grow more convoluted with each release.
When UIs Become Probabilistic
AI gives us a different blueprint. Not a rigid structure, but a responsive fabric. Not a static dashboard, but an ambient companion. An interface that meets you where you are, adapts in real time, and brings the right interaction to the surface just when you need it.
Instead of menus and buttons, we get possibilities. Instead of searching through a filing cabinet, we get a librarian whispering in our ear, “You might want to look at this.” The interface becomes ambient. Just-in-time. Context-aware.
It’s no longer about building rooms—it’s about hosting a conversation.
This isn’t just a new tool. It’s a new shape for software. One where flows are replaced with conversation threads, and the architecture bends around the user instead of the other way around. It’s less a dashboard and more a dialogue. Less a map and more a moment.
When the interface can listen, adapt, infer, and respond in real time, we no longer need to navigate the logic of the machine. The machine meets us in our uncertainty. It offers suggestions. It remembers. It asks guiding questions. It proposes next steps.
The interface becomes less a floor plan—and more a conversation.
But conversations are tricky things to design. They require context. They unfold. They shift direction. They carry tone and tempo. And when done well, they don’t just help you complete a task—they help you understand what you needed in the first place.
This essay is about a new kind of UI: from rigid flows to responsive moments. From pre-built screens to modular, dynamic interactions. From users having to navigate to the right place, to the product bringing the right thing to them.
Let’s explore what it takes to design for this new shape of software—where AI lets us worry less about the map, and more about the moment. Let’s talk about Just-in-Time UI.
What Is Just-in-Time UI?
Just-in-Time UI is an emerging design paradigm that shifts the burden of navigation away from the user, and instead delivers the right interface at the right moment, in the right context.
Where traditional interfaces rely on static screens, fixed menus, and user-initiated flows, Just-in-Time UI reimagines the interaction model:
Context-aware: It understands the user's intent, circumstances, and stage in a task.
Conversational: It’s fluid and responsive—it doesn’t just respond to commands; it guides users as much as it listens to them.
Modular: Instead of rigid templates, it assembles UI elements on demand.
Proactive: It surfaces suggestions, actions, or insights before users ask.
Adaptable: It flexes based on stakes, urgency, ambiguity, or expertise.
It’s not just AI as a backend enhancement—it’s AI as the interface itself.
To design Just-in-Time UI, we need to rewire some core assumptions. We trade static journeys for threads that unfold in real time. We prioritise responsiveness over rigidity. And we stop trying to teach users how our interface works—instead, our interface learns how users think.
In the next sections, we’ll explore things like:
The foundational patterns of Just-in-Time UI.
Why this model is so well-suited to AI-powered products.
The design challenges it introduces—and how to overcome them.
The kinds of use cases and interaction models this unlocks.
Where Just-in-Time works well, and where it doesn’t.
How to blend Just-in-Time with more traditional, fixed UI.
But first, let’s step back—and look at the why behind it all.
Why now? Why does this shift matter? And what becomes possible when we let the interface leave the building?
Why Just-in-Time UI, Why Now?
For years, product teams have shipped software within a familiar model: predictable user flows, screen-based navigation, static hierarchies of features. It’s worked because it mapped to something people understand—files in folders, pages in books, buttons in menus. But this model comes at a cost.
The Limits of Traditional UI
As products grow, the interface becomes a maze of logic. Features multiply. Edge cases spawn new screens. Navigation expands to accommodate new tools. Each addition is a new hallway in an ever-growing house—and every hallway needs maintenance, onboarding, UX writing, testing.
Meanwhile, users don’t experience your product as a sitemap. They come with goals, questions, situations. And increasingly, they expect the product to meet them there—not make them dig.
But traditional UI often can’t. It’s rigid. It assumes linearity. It doesn’t deal well with ambiguity or nuance. And it struggles when users:
Don’t know what feature they need.
Aren’t sure how to map their situation to the mental model of the UI or IA.
Are trying to accomplish a task that spans multiple domains.
Want to explore what’s possible before committing to an action.
That’s where Just-in-Time UI shines.
Why Now? Because AI Changes the Interface Layer
The past year has brought a profound shift: AI models have moved from backend infrastructure to interface enablers. They can now interpret, suggest, reason, summarise, and even generate UI elements dynamically.
This means:
You don’t need to build a flow for every edge case—AI can generate what’s needed.
You don’t need to guess at every intent—AI can infer and clarify.
You don’t need to rely on users finding a feature—AI can bring it to them.
It’s a shift from building a fixed toolset to designing a responsive surface—a surface that adapts to the moment, the person, and the context.
But that’s not all. AI unlocks:
Natural language interaction: freeing users from predefined UI paths
Contextual awareness: understanding behaviour, intent, stakes
UI generation: assembling views and responses on the fly
Memory and personalisation: tailoring interactions over time
In other words, we now have the tools to build software that behaves less like a filing cabinet—and more like a helpful teammate.
Just-in-Time UI is how we put those tools to work at the interface level.
This Isn’t Just About Chat
Let’s be clear: this isn’t about turning every product into a chatbot.
Yes, language models and conversational agents have shown us what’s possible when systems can respond naturally and fluently. But the real shift goes deeper than chat. It’s about a new way of thinking about interaction.
The core insight is this:
A good conversation adapts to you. It listens. It responds. It surfaces what’s relevant. It doesn’t force you through a menu of fixed responses—it takes twists and turns, and on the journey it figures out what you’re trying to say and helps you say it better.
This is what makes the Just-in-Time paradigm powerful—not the chat interface, but the conversational principles underneath it:
Responsiveness: The interface adjusts to what you’re trying to do, not the other way around.
Context-awareness: Like a good conversation partner, the system remembers what came before and shapes responses accordingly.
Immediacy: You don’t navigate through layers—you say what you need, and something relevant appears.
Turn-taking: You and the system co-create the interaction step by step. You ask, it answers. It suggests, you refine.
These traits can be expressed through chat, yes—but also through:
Smart, modular UI components that appear when needed
Multimodal interactions in a collaborative canvas
Visual shortcuts that adapt based on context or past behaviour
Suggested actions that feel like intuitive prompts or next steps
Pathways that unfold dynamically in real time, not rigid flows
The endgame isn’t chat everywhere. It’s adaptive interaction everywhere. That’s the real revolution: not just changing how users input commands, but reimagining how systems respond—with awareness, immediacy, and design that flexes to the moment. Conversation is the metaphor—not necessarily the medium.
Core Principles of Just-in-Time UI
Designing Just-in-Time UI isn’t just about layering chat on top of an app. It’s about rethinking how software understands, responds, and adapts to user needs moment by moment. That shift requires a new set of principles—ones that prioritise context, flexibility, and conversation over structure, flow, and hierarchy.
Here are the foundational ideas behind designing Just-in-Time UI:
1. Context Is the Starting Point
In traditional software, users must navigate to a feature before getting help. Just-in-Time UI flips that: it starts with the user’s situation.
That context might include:
The current state of the user’s account or project
Recent activity or behaviour
Environmental signals (time, location, urgency)
Historical interactions
The goal isn’t just to wait for a user to ask a question. It’s to anticipate what might be helpful and offer it proactively. Good design doesn’t make users think—it lets them feel understood.
2. Intent Shapes Interaction
Users arrive with different goals: to explore, fix, understand, complete. The same question—“What’s going on today?”—can carry different meanings depending on intent.
Just-in-Time UI needs to:
Sense intent (explicitly or implicitly)
Respond with the right tone and level of detail
Offer the right next steps (e.g. suggest actions, ask clarifying questions)
Intent becomes the invisible filter shaping how the interface behaves.
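To make this concrete, here is a minimal sketch of intent acting as that invisible filter. The intent labels, tones, and suggested next steps are all illustrative assumptions, not a real API—a production system would infer intent from a model rather than receive it as a typed value.

```typescript
// Hypothetical sketch: the same question can yield different responses
// depending on inferred intent. All labels and values here are invented.
type Intent = "explore" | "fix" | "understand" | "complete";

interface ResponseStyle {
  tone: "encouraging" | "direct" | "explanatory";
  detail: "brief" | "rich";
  nextSteps: string[];
}

function styleFor(intent: Intent): ResponseStyle {
  switch (intent) {
    case "explore":
      return { tone: "encouraging", detail: "rich", nextSteps: ["Show related areas", "Suggest a starting point"] };
    case "fix":
      return { tone: "direct", detail: "brief", nextSteps: ["Propose a fix", "Ask a clarifying question"] };
    case "understand":
      return { tone: "explanatory", detail: "rich", nextSteps: ["Summarise", "Offer a deeper breakdown"] };
    case "complete":
      return { tone: "direct", detail: "brief", nextSteps: ["Surface the relevant control"] };
  }
}
```

The point of the sketch is the shape of the mapping, not its contents: intent selects tone, detail, and next steps before any UI is rendered.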
3. UI Should Emerge, Not Just Exist
Instead of designing rigid screens, we design modular building blocks—visualisation widgets, control components, summaries, cards—that can be dynamically assembled based on need.
Think of UI as a response, not a destination:
A question about finances might return a chart and a written summary
A task prompt might return a quick form or approval button
A vague query might return a set of contextual suggestions
The system brings the UI to the user—just in time, just enough.
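One way to picture "UI as a response" is a structured model answer being mapped onto modular components at render time. This is a sketch under assumptions: the part kinds and component names are invented, and a real system would validate the model's output before assembling anything.

```typescript
// Hypothetical sketch: an AI answer arrives as structured parts, and the
// client assembles matching UI blocks on demand. Names are illustrative.
type AnswerPart =
  | { kind: "summary"; text: string }
  | { kind: "chart"; series: number[] }
  | { kind: "action"; label: string };

interface UiBlock {
  component: string;
  props: Record<string, unknown>;
}

function assemble(parts: AnswerPart[]): UiBlock[] {
  return parts.map((part): UiBlock => {
    switch (part.kind) {
      case "summary":
        return { component: "SummaryCard", props: { text: part.text } };
      case "chart":
        return { component: "LineChart", props: { series: part.series } };
      case "action":
        return { component: "ActionButton", props: { label: part.label } };
    }
  });
}
```

A question about finances might return a `chart` part plus a `summary` part; the same renderer handles both without a dedicated screen existing in advance.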
4. Memory and Threaded Interactions
In static UIs, each session is a reset. In Just-in-Time UI, continuity builds trust.
Threads allow:
Multi-step conversations with evolving goals
Shared memory of what’s been asked and answered
Contextual grounding for ambiguous input (“What is this?”)
Whether it’s one long thread or distinct sessions, the interaction model should feel like a relationship, not a reset.
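A minimal sketch of that contextual grounding: a thread that remembers the most recent referent, so an ambiguous follow-up like "What is this?" can be resolved against it. Everything here—the `Turn` shape, the referent field, the naive pronoun rewrite—is an invented illustration of the idea, not a proposed implementation.

```typescript
// Hypothetical sketch: thread memory grounds ambiguous follow-ups.
interface Turn {
  speaker: "user" | "system";
  text: string;
  referent?: string; // the thing this turn was about, if known
}

class Thread {
  private turns: Turn[] = [];
  private lastReferent: string | undefined;

  add(turn: Turn): void {
    this.turns.push(turn);
    if (turn.referent) this.lastReferent = turn.referent;
  }

  // Resolve a vague query against the most recent referent in the thread.
  resolve(query: string): string {
    if (/\bthis\b/i.test(query) && this.lastReferent) {
      return query.replace(/\bthis\b/i, this.lastReferent);
    }
    return query;
  }
}
```

Without the thread, "What is this?" is unanswerable; with it, the system can carry context forward instead of resetting every session.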
5. Scaffolding Is Still Necessary
We can’t assume users always know what to do, say, or ask.
We need to:
Bring information and actions that are relevant to the current context
Provide suggested jumping-off points or common actions
Use visual shortcuts and CTA cards to invite engagement
Offer anchor screens or contextual “home bases” that explain what’s possible
This scaffolding acts like training wheels—giving users the confidence to explore while teaching them how the system works.
6. Just-in-Time Doesn’t Mean Chat-Everything
The future of UI isn’t “chat replaces everything.” It’s “chat augments what needs augmenting.” There will always be a place for fixed UI—for dashboards, status indicators, persistent controls.
But Just-in-Time UI lets us:
Reduce the burden of building and maintaining rigid flows
Add dynamic, AI-driven layers of interaction that adapt to each user
Balance predictable structure with responsive surfaces
Designers must find the right blend—letting traditional UI and adaptive UI co-exist.
The Challenge of Designing for Adaptability
The promise of Just-in-Time UI is seductive—an interface that meets users exactly where they are, with exactly what they need. But designing for that level of responsiveness introduces new challenges that don’t exist in traditional UX paradigms.
This is no longer about building flows—it’s about designing for ambiguity, uncertainty, and adaptation.
Here are some of the key tensions we need to grapple with:
1. Users Don’t Always Know What to Ask
When the interface is reactive—waiting for the user to express intent—what happens when the user doesn’t know how to start?
This is the “empty page” problem, reimagined in a product context. Instead of a clear, linear path through a menu, users are faced with a blank canvas or an open input field. It offers infinite possibility—but also friction, paralysis, or confusion.
Implication: We can’t rely on users to know the right prompts. We need to design for unspoken needs—surfacing starting points, offering suggestive cues, or making good first moves on the user’s behalf.
2. Feature Discovery Gets Harder
In traditional UIs, features are revealed through navigation—tabs, buttons, menus, and tooltips. Users can browse to discover what's possible.
In Just-in-Time systems, many capabilities are latent. They exist, but only become visible when invoked or needed. This improves focus—but it also risks invisibility.
Implication: We need new approaches to progressive disclosure and onboarding. The interface must show just enough of what’s possible—without overwhelming, and without making users feel like they’re interacting with a black box.
3. Every Interaction Becomes a System Design Problem
In static systems, you can design and QA each flow. You know what steps a user will take. You control the shape of the experience.
In adaptive systems, users can drop into a moment from anywhere—with any intent, level of urgency, or emotional state. The system must respond appropriately, without pre-baked flows to fall back on.
Implication: This isn’t just UX design—it’s behaviour design. We must define how the system behaves across scenarios, stakes, and contexts. It's more like designing a set of characters with personalities, boundaries, and logic than building wireframes. Each of these characters needs to be told how to think and act, and our interfaces need to know when to refer users to them.
4. Context Is Everything—And It’s Easy to Get Wrong
Just-in-Time UI depends on accurate context: what the user is doing, what they’ve done before, what they care about now. But context is subtle and fragile. Get it wrong, and the system feels irrelevant—or worse, invasive.
Implication: Designers must be deliberate about what signals to use, how to interpret them, and how much to expose. And when the system is guessing, it needs to do so with humility—offering users the ability to correct course, or clarify intent.
5. You Still Need Structure Somewhere
Just-in-Time doesn't mean anything-goes. Users still need predictability, clarity, and anchors in the experience.
Some core actions (like managing account settings, switching teams, or reviewing legal documents) require fixed pathways and reliability. These can't be left to guesswork or inference.
Implication: The challenge isn’t replacing static UI—it’s balancing it with dynamic UI. Figuring out which areas benefit from adaptability, and which require structure and stability.
6. How Do You Handle Multi-Threaded Intent?
If the interface behaves like a conversation, how do you manage multiple conversations?
In apps where users complete multiple jobs (e.g. managing a team’s spend, reviewing invoices, and updating personal info), users may bounce between contexts quickly. A single, linear stream won’t scale.
Implication: We need new mental models for interaction history, threading, and session management—without overwhelming users or forcing them to think like developers managing state.
These aren’t dealbreakers. They’re design opportunities.
They challenge us to invent new interaction patterns, rethink how we surface capabilities, and get more creative about balancing structure with fluidity.
In the next section, we’ll explore some of the emerging patterns that help resolve these tensions—and make Just-in-Time UI usable, discoverable, and deeply human.
Patterns for a Just-in-Time UI
To build experiences that respond fluidly to user needs, we need to move beyond static flows and embrace modular, dynamic interaction patterns. These patterns don’t replace everything we know about good UX—they build on it, expanding our toolkit for more adaptive, conversational, and context-sensitive experiences.
Here are the foundational patterns I believe will shape Just-in-Time UI:
1. Contextual Entry Points
What it solves: Users don’t always know what to do or ask. They need grounding.
Purpose: Give users immediate value without requiring them to articulate a question or request. Instead of starting with an empty input, Just-in-Time systems begin with a context-aware surface—a “home” or “hub” screen that reflects the user’s current situation, activity, history, and current options.
This could look like:
A dashboard that changes depending on account status or recent activity
A subject-specific workspace (like lists, overviews, or calendars) with a summarised view
Prompts like “Need help with today’s tasks?” or “Want to plan a new project?”
These surfaces anchor users in the experience and allow for direct prompting without requiring articulation. From here, users can either act, expand, or enter a deeper interaction thread.
2. Interaction Threads
What it solves: Users need a dedicated, conversational space where they can get things done—ask questions, execute tasks, or clarify their needs.
Once an action is taken (e.g. “Explain this transaction” or “Set a new budget”), the UI opens a dedicated space for interaction to unfold—like a chat thread, a focused modal, or a contextual panel. This is where the AI can clarify, adapt, confirm, and deliver results—without taking over the entire interface.
Key attributes:
The thread has memory and evolves across turns
Interactions are scoped by the entry point and/or the evolving thread discussion
UI widgets and controls can appear dynamically within the thread to support rich information display and interaction
Threads can be session-based or persistent, depending on complexity
3. Just-in-Time UI Widgets
What it solves: Users need rich, task-specific interactions on demand, without teams building out entire flows.
Instead of navigating to a standalone feature, users receive lightweight, embeddable UI elements directly within a conversation thread or starting point. Think of them like UI atoms—rich enough to interact with, small enough to inject anywhere.
Examples:
A product info widget
A settings control panel
A data chart with filters
A form to submit details
These widgets respond to user queries, adapt to context, and can be expanded into a “canvas” or full-view mode if needed.
4. Starter Prompts and Suggestions
What it solves: The “empty page” problem and the discoverability of latent features.
Even adaptive UIs benefit from light scaffolding. Starter prompts, contextual CTAs, and recent actions surface common or contextually relevant queries as clickable suggestions or prewritten actions. This helps users who don't know where to start or how to phrase what they need.
Prompts can be:
Predefined (e.g. “Show me my recurring subscriptions”)
Contextual (e.g. “You’ve spent 20% more this week—want to review?”)
Personalised (e.g. based on account activity or common user workflows)
These lower the entry barrier by providing natural, intuitive starting points (“View last week’s summary,” “Adjust this setting”). They don’t require new UI—just preconfigured queries surfaced through familiar elements.
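The three prompt types above can be sketched as one small function that derives suggestions from context signals. The signal names, thresholds, and wording are all assumptions for illustration; in practice the contextual prompts would be generated from richer account data.

```typescript
// Hypothetical sketch: derive starter prompts from simple context signals.
interface PromptContext {
  spendChangePct: number; // week-over-week spend change, illustrative signal
  openTasks: number;
}

function starterPrompts(ctx: PromptContext): string[] {
  // Predefined prompt: always available as a safe starting point.
  const prompts: string[] = ["Show me my recurring subscriptions"];

  // Contextual prompt: only surfaced when the signal crosses a threshold.
  if (ctx.spendChangePct >= 20) {
    prompts.push(`You've spent ${ctx.spendChangePct}% more this week. Want to review?`);
  }

  // Personalised prompt: tied to the user's current workload.
  if (ctx.openTasks > 0) {
    prompts.push(`Need help with today's ${ctx.openTasks} tasks?`);
  }
  return prompts;
}
```

Each prompt is just a preconfigured query rendered as a clickable suggestion—no new UI surface is required.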
5. Blending Fixed UI with Just-in-Time UI
What it solves: Some features need predictability and permanence.
Not everything can—or should—be just-in-time. Core utilities like account management, security settings, and compliance flows benefit from stable navigation and structured UI. Some tasks require fast access or high trust. Offer traditional UI for core features while surfacing adaptive UI for context-specific needs.
The key is not to replace, but to layer:
Use fixed UI as stable entry points
Layer adaptive, thread-based flows within them to handle exploratory, nuanced tasks
Let users shift easily between structured space and flexible interaction
6. Multi-Session, Contextual Conversations
What it solves: Managing multiple needs, contexts, and conversations.
Two competing models emerge to allow users to context switch or manage conversations over time:
Single Thread (with contextual shifts): One conversation stream where each interaction is time-stamped and labelled. Keeps things lightweight but may lose clarity over time.
Multi-Thread (separate chat-like sessions): Each thread represents a distinct task or topic (e.g. “Invoice Follow-up,” “Team Card Settings”). Easier to resume or switch context, but introduces UI complexity.
From managing multiple projects in a productivity app to handling various customer service queries in a support platform, the model of managing conversation threads is widely applicable. It might be possible to combine both—offering one visible stream to users, but organising threads behind the scenes for system memory and session management.
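The combined model mentioned above can be sketched as one chronological stream the user sees, backed by per-thread groupings the system keeps for memory and resumption. All names here are invented; this is a shape, not an implementation.

```typescript
// Hypothetical sketch: one visible stream, organised into threads behind
// the scenes for session management.
interface Message {
  threadId: string;
  text: string;
  at: number;
}

class Stream {
  private messages: Message[] = [];

  post(threadId: string, text: string): void {
    this.messages.push({ threadId, text, at: Date.now() });
  }

  // What the user sees: a single chronological conversation.
  visible(): string[] {
    return this.messages.map((m) => m.text);
  }

  // What the system uses: per-thread history for context and resumption.
  thread(threadId: string): string[] {
    return this.messages.filter((m) => m.threadId === threadId).map((m) => m.text);
  }
}
```

The user never manages threads explicitly, but the system can still resume “Invoice Follow-up” with only its own history as context.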
7. Collaborative Canvas
What it solves: Complex, multi-step tasks that benefit from shared space and iterative collaboration.
Some interactions are too complex for linear threads. Collaborative Canvases are purpose-built environments for co-creation, analysis, or refinement. This mode is an extension of the dynamic thread, designed specifically for deep, iterative work.
Multimodal inputs allow users to, for instance, annotate documents, sketch ideas, or refine written content alongside AI-driven suggestions.
The interface supports rich interactions such as highlighting, commenting, and rearranging elements—as a means of directing action or focus within a persistent collaborative space.
It blends conversational elements with visual, interactive tools, making it easy to switch between dialogue and detailed content exploration, with the workspace updating dynamically as the collaboration unfolds.
8. Session-Aware Personalisation
What it solves: The need to maintain memory and context over time, so the system feels aware and helpful.
Just-in-Time UI should feel like a companion. The system should remember previous interactions, adapt tone and suggestions, and link context across sessions.
9. Mode Switching
What it solves: The need for the user (or system) to shift into more specialised modes of interaction.
Adaptive systems need to respond in ways that are appropriate and empathetic to users’ needs. Some contexts and user intents are sensitive to tone and framing; some are open-ended and creative. Mode switching adjusts tone, tools, and logic based on task type (exploration, execution, review, escalation), replacing “one tone fits all” with greater nuance in how the system responds.
For example, “Planning Mode” vs “Review Mode” vs “Crisis Mode” in a finance tool, each tuned for different needs and urgency.
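The finance-tool example could be sketched as a mode table plus a picker driven by simple signals. The mode names, tones, tool lists, and signals are all illustrative assumptions; a real picker would weigh far richer context.

```typescript
// Hypothetical sketch: modes tune tone and tools per task type.
type Mode = "planning" | "review" | "crisis";

const MODES: Record<Mode, { tone: string; tools: string[] }> = {
  planning: { tone: "exploratory", tools: ["forecast", "scenario"] },
  review: { tone: "precise", tools: ["summary", "audit"] },
  crisis: { tone: "calm and direct", tools: ["alerts", "runway"] },
};

// Pick a mode from coarse signals; urgency wins over everything else.
function pickMode(signals: { urgent: boolean; lookingBack: boolean }): Mode {
  if (signals.urgent) return "crisis";
  return signals.lookingBack ? "review" : "planning";
}
```

The useful property is that tone and tooling change together: entering “Crisis Mode” swaps both the register of the conversation and the instruments it offers.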
These patterns aren't fixed, or even mature yet. They're concepts for flexible building blocks that can be composed in different ways depending on user needs, product complexity, and interaction modality.
Together, they offer a way to make Just-in-Time UI practical, intuitive, and empowering—without overwhelming users or product teams.
Where Just-in-Time UI Shines—and Where It Struggles
Just-in-Time UI isn’t a universal fit. Like any paradigm, it excels in certain contexts and stumbles in others. Understanding where it thrives (and where it doesn't) is crucial to applying it thoughtfully.
This isn’t just about interface preference—it’s about how adaptable systems align with the shape of human need.
When Just-in-Time UI Makes Sense
Complex, Cognitive Work
In domains where users are juggling multiple variables—data, regulations, options, consequences—Just-in-Time UI offers relief. It doesn't just surface tools; it offers guidance, makes connections, and carries some of the mental burden.
Example: A finance platform that helps business owners understand their runway, adjust spending, and navigate cash flow fluctuations in real time.
Exploratory or Open-Ended Goals
When users aren’t entirely sure what they need, structured flows can create friction. Just-in-Time UI gives users room to explore, while gently suggesting next steps. It enables a softer, more adaptive path forward.
Example: A creative tool that helps a user brainstorm, iterate, and refine ideas without forcing them through predefined templates.
Products with Diverse Users and Use Cases
One-size-fits-all navigation quickly breaks down in platforms that serve multiple personas. Just-in-Time UI adapts interactions to the person, the moment, and the goal—without requiring parallel design systems for every variation.
Example: A B2B product that serves business leaders, operations managers, and individual employees with tailored entry points, but a shared conversational foundation.
Multimodal Interfaces
Just-in-Time UI plays especially well in environments where conversation, charts, widgets, and documents need to coexist. It bridges the gap between modes of thinking—moving fluidly from talking to showing to doing.
Example: A customer support tool that highlights trends, suggests actions, and lets agents act—all from a single contextual conversation thread.
When It Might Not Be the Right Fit
Highly Repetitive, Transactional Tasks
Some workflows benefit from minimal friction, high-speed muscle memory, and absolute predictability. In these cases, a lightweight, fixed UI is often best.
Example: Tapping a button to reorder groceries or approve a recurring invoice.
That said, Just-in-Time UI can still play a role behind the scenes—surfacing opportunities, catching anomalies, or adapting tone—but the core interaction should stay simple.
Regulated, Safety-Critical Environments
In domains where ambiguity is dangerous—aviation, medicine, industrial control—systems must be rigid, auditable, and fail-safe. These interfaces are often built around standardisation, not improvisation.
Example: Flight deck software or real-time medical device monitors.
That said, adjacent tools—like training systems, diagnostics assistants, or documentation browsers—may still benefit from Just-in-Time principles.
Tools Solving a Single, Narrow Task
If your app does one thing, and does it well, layering in adaptive intelligence might not add much. The clarity of a familiar, fast UI can beat the flexibility of a conversational one.
Example: A mobile document scanner, stopwatch, or unit converter.
Even here, though, there’s room for enhancement—like understanding the user’s intent and automatically tailoring results (e.g. “optimise this scan for printing as a business card”).
It’s a Spectrum, Not a Binary
The truth is, most modern products live somewhere between rigid and fluid. And that’s where hybrid interaction models shine: clear, fast paths for known workflows; adaptive, conversational layers for ambiguity, decision-making, and context.
Just-in-Time UI isn’t a silver bullet—it’s a new tool in the design toolbox. One that invites us to rethink how, when, and where we meet the user. It doesn’t replace all flows—but it can make many of them feel more human, more helpful, and more alive.
Wrapping Up
Just-in-Time UI isn’t about replacing everything we’ve learned—it’s about building on it. It asks us to imagine new ways for software to be helpful, responsive, and attuned to the moment.
In many products, we’ve taught users to navigate complex structures and flows. But what if the product could navigate to them instead? What if interaction didn’t rely on remembering where a feature lives, but on being gently met with the right tool, at the right time?
This shift doesn’t mean abandoning structure. It means designing new kinds of structure—ones that are lighter, more adaptive, and more human. It means thinking less in terms of screens and paths, and more in terms of needs, context, and intent.
To design for this new paradigm, we might embrace some guiding principles:
Start with the moment, not the menu. What’s happening for the user right now? What could the product surface without being asked?
Make intelligence visible and useful. Let AI enhance clarity, not add mystery. Show the thinking. Offer choices. Invite collaboration.
Blend familiarity with adaptability. Pair grounded entry points with dynamic responses. Support discovery, not just completion.
Design for ambiguity. Anticipate uncertainty, offer scaffolding, and guide users toward what’s possible—even if they don’t yet know what to ask.
There’s still much to learn. This isn’t a finished pattern—it’s an emerging one. A shape that will evolve as teams build, test, and reimagine.
But what’s exciting is that this isn’t just about efficiency. It’s about empathy. About creating interfaces that listen better. That feel more alive. That work more like a partner, and less like a puzzle.
We don’t have to get it perfect from the start. We just have to stay curious, stay close to real needs, and keep asking:
What would this moment look like, if the product truly understood it?
That’s the promise of Just-in-Time UI. And we’re only just beginning to explore it.