Beyond the Linear: Designing Adaptive Experiences for AI
Why AI needs more than flows, and how to design for understanding and adaptability.
We’ve all grown accustomed to apps that rely on predictable paths. Need to transfer money? Follow a pre-set series of steps. Want to order lunch? Navigate through a defined flow. For the most part, UX design has been about flow: a need arises, the user picks a path, and the app nudges them forward—step by step, screen by screen—until the goal is reached. It’s a tidy, reliable system, built on the premise that users can diagnose their own needs, select the relevant feature, and execute their task. As designers, our role has been to shape these flows with care and craftsmanship, delivering utility through intuitive and predictable structure.
AI is changing how products behave, and demanding more from designers. These systems don’t follow the neat logic of static flows—they advise, they decide, they predict. Instead of waiting for users to navigate to the right feature, they generate contextual, real-time responses based on user input. This is incredibly powerful: it enables systems to skip straight to the heart of what the user needs without requiring them to find the right button, screen, or menu.
That flexibility is a gift—but it’s also a design challenge we’re not fully prepared to meet. If users can enter from any point, we can no longer rely on the assumption that they’ve completed previous steps, or use their point of entry to shape our understanding of their intent. If systems can generate responses on the fly, how can we have confidence that critical steps won’t be skipped—or that users will feel supported and understood in the process?
The gap between the tools we’re building and the design methods we use to shape them is growing uncomfortably wide. If we are to design more than just flows, and design products that can adapt reliably moment by moment and context by context, how can we map out these behaviours? The playbook we’ve relied on to help us build predictable, linear apps struggles to accommodate such non-linear variability. We need a new approach—one that trades static flows for contextual understanding, one that maps not journeys to conclusions but intentions to behaviours.
Over the next few articles, I’m going to explore design methods focused on documenting adaptable AI products and features. My thesis: by identifying key scenarios and building empathy with users’ needs, we can define the system behaviours, attributes, and constraints that we believe will best serve them—specifying instructions for how our systems should identify these scenarios and respond appropriately. But first I’d like to pin down exactly why a new approach like this is necessary.
The Limits of Static Design
Imagine this: it’s Sunday morning, and you’re on your phone, casually scrolling through your finance app. You’ve been meaning to set up a savings plan for that trip you’re dreaming about, so you poke around until you find the budgeting feature. You start playing with numbers, testing what’s possible: If I save this much every month, how soon could I afford the tickets?
Now fast-forward to Wednesday night. A very different context, same app. You get a notification that a £500 charge was just made on your business account. Panic kicks in—you don’t remember making any big purchases today. Your first instinct? Open the app, find the transaction details, and figure out what’s going on. Is it fraud? A mistake? Should you call the bank? Freeze your cards?
In both cases, you’re using the same product. Its structure hasn’t changed: same home screen, same menu items, same linear flows. You diagnose your own situation, decide which feature you need, and navigate to it. And sure, if the app’s designers were particularly thoughtful, maybe it adjusts slightly to context—surfacing budgeting tools more prominently on Sunday when there’s no cause for concern, and flagging urgent actions on Wednesday when there is a suspicious transaction. But for the most part, you’re on your own, relying on the app’s predictable, pre-set flows to help you through two very different situations.
And for years, this kind of structure has worked just fine. Traditional apps were built on the principle that users know what they need: we, as designers, just needed to offer clear navigation, consistent structures, and a few thoughtful options to guide the way. A one-size-fits-all structure could serve countless scenarios, so long as users were willing to map their own intent to the available tools.
But AI changes the game.
What AI Makes Possible—and What It Demands
AI systems don’t work like static tools. They don’t rely on users to choose a predefined path or fit neatly into one of a product’s expected flows. Instead, AI has the ability to adapt in real time, generating responses, recommendations, or actions based on user input. It doesn’t just reflect what’s in the interface—it reshapes itself dynamically to meet the user where they are.
This flexibility unlocks incredible potential. Imagine a finance app that doesn’t just surface budgeting tools or fraud warnings—it adapts to the context of your situation. It might:
Shift into a creative planning mode on Sunday, encouraging exploration with playful “What if?” questions.
Switch to an urgent problem-solving mode on Wednesday, delivering fast, decisive actions to flag or dispute that £500 charge.
But this adaptability also brings significant design challenges. When AI systems generate responses dynamically, they need to feel intuitive, appropriate, and trustworthy. Designers must now account for questions like:
How can it infer whether the user is exploring possibilities or urgently solving a problem?
How should the system frame its responses when stakes are high versus low?
How does it balance confidence in its outputs with transparency about uncertainty?
Traditional UX approaches—linear flows and static journeys—don’t account for this kind of complexity. Instead, we need a new framework that prioritises scenario awareness: understanding the “why” behind the interaction and tailoring the system’s responses, modality, optionality, and more to fit.
Designing Scenario Awareness
When designing adaptive AI systems, the most critical challenge isn’t building smarter technology—it’s making it behave intelligently in the context of user needs.
In traditional UX, we focus on mapping user journeys—charting the steps someone takes to accomplish a task. It’s like plotting a route on a map: if the user wants to transfer money, the path is linear and predictable. Click “Transfer,” choose an amount, an account, and confirm. Easy.
But adaptive systems throw that tidy logic out the window. Instead of following a single path, users can drop into the system from any angle, bringing with them a world of ambiguity. Maybe they don’t even know what they need. Maybe their situation is urgent or deeply personal. Or maybe they’re just here to explore. Adaptive systems don’t just need to react to this—they need to anticipate it.
That’s where scenario mapping comes in. Instead of mapping static paths, we can map moments of interaction and the relevant considerations that frame how the system should behave. Scenarios let us design for the why behind the interaction, not just the what. This creates clarity for designers, developers, and the system itself by codifying:
What is the user trying to accomplish? What is their intent?
How much is riding on the outcome? What are the stakes?
What else is happening in their world? What context should we consider?
Let’s break these down.
User Intent: Why are they here? Are they exploring? Solving a problem? Making a decision? Intent shapes the system’s tone, mode, and focus. A creative tool might suggest playful ideas when the stakes are low, but if the user’s making a life-altering choice, the same tool should slow down, get serious, and surface options carefully. By understanding intent, we can intentionally design different ‘modes’ that incorporate instructions tailored to what the user wants to achieve.
Stakes: A casual query—“What’s a good book to read?”—has low stakes. But a high-stakes question—“What should I do about this suspicious charge?”—demands precision, confidence, and next-step guidance. Adaptive systems need to recognise these stakes and adjust their behaviour accordingly.
Context: What else is happening? Context includes the broader situation surrounding the interaction, such as urgency, user behaviour, or environmental signals. Designing for context requires systems to interpret these signals and adjust their behaviour dynamically—something static flows can’t account for. For example:
In a car, a voice assistant might interpret a request for directions differently if the user is running late for a meeting in their calendar vs. on vacation in a new location.
A finance app could detect that a user is scrolling through transactions after a flagged notification and infer they’re investigating a potential issue.
By mapping these elements, designers can document not just what the system should do, but how and why it should respond in a particular way.
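To make this codification concrete, a scenario entry could be captured as a small data structure. This is a hypothetical sketch in Python; the fields and example values are my own, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """One documented moment of interaction and how the system should respond."""
    name: str                 # e.g. "investigating a flagged transaction"
    intent: str               # why the user is here: exploring, deciding, solving...
    stakes: str               # "low" | "medium" | "high"
    context_signals: list[str] = field(default_factory=list)  # cues that identify this scenario
    behaviours: list[str] = field(default_factory=list)       # instructions for the system

# Hypothetical entry for the Wednesday-night fraud moment from earlier:
fraud_check = Scenario(
    name="investigating a flagged transaction",
    intent="problem-solving",
    stakes="high",
    context_signals=["fraud notification sent", "user opened transaction details"],
    behaviours=["lead with decisive next steps", "offer card freeze", "avoid playful tone"],
)
```

The point is not the code itself but that intent, stakes, context, and behaviour become explicit, reviewable artefacts rather than assumptions buried in a flow diagram.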
How Scenario Mapping Differs from Traditional UX Tools
Traditional UX tools like user flows and task analysis are valuable for linear systems, but adaptive systems require an additional layer of mapping to address their dynamic, probabilistic nature:
User Flows vs. Scenarios: In traditional UX, flows are structured around predictable actions (“what the user needs to do”). Scenario mapping focuses on understanding user goals, stakes, and context (“why they’re here and what’s at stake”) and then links those to defined system behaviours.
Predefined Steps vs. Real-Time Adaptation: Linear flows assume users will follow a predefined sequence of steps. Adaptive systems must adjust dynamically, meaning designers must anticipate a wider range of possibilities.
Clarity in Task vs. Ambiguity in Intent: In traditional systems, tasks and paths are explicit. If you’ve navigated to the ‘Make a transfer’ feature, your purpose is clear, and the safeguards to prevent user error or harm can be designed as inescapable steps in this flow. Adaptive systems must infer intent, stakes, and context—often from incomplete, ambiguous, or implicit signals—and then ensure the right steps are taken.
The Power and Pitfalls of Adaptation
When AI systems adapt well, they can deliver moments of trust, delight, and deep utility. A healthcare app that modifies its tone to calmly guide a user through a high-stakes situation, or a financial tool that switches to a suggestion mode to offer a range of tailored savings strategies, doesn’t just meet expectations—it meets users where they are.
But this adaptability doesn’t come for free; it must be intentional. Without clear instructions on how to adapt to intent, stakes, and context, the experience can fall apart.
What Happens When AI Gets It Wrong?
Irrelevant Responses: Without understanding context, systems risk delivering generic outputs that feel irrelevant or careless.
Example: A brainstorming tool suggesting random ideas when a user is clearly making a high-stakes decision.
Overtrust and Misinterpretation: AI outputs are often perceived as authoritative, especially when responses are fluent or confident. Without proper framing, this can lead to dangerous overtrust.
Example: A system suggesting, “You can afford this vacation,” without clarifying how it reached that conclusion.
Missed Opportunities for Engagement: Systems that don’t adapt to user behaviour or context fail to create meaningful, personalised experiences.
Example: A savings tool that doesn’t tailor guidance to a user’s spending habits feels impersonal—and ultimately less useful.
Potential Harm or Overreach: Systems that do not predictably enforce adequate protections, align accurately with user intentions, or gather the correct authorisations or consent before acting risk harming users or breaching regulatory or legal requirements.
Example: A legal advisory tool that doesn’t adequately defer to specialists or rate confidence in its answers exposes users to potential harm.
The stakes are even higher in AI-powered tools because users often can’t see how decisions are being made. If a system’s behaviour feels misaligned with their needs, users lose trust—and regaining it is difficult.
The Adaptive Opportunity: Beyond One-Size-Fits-All
While traditional UX excels at delivering predictable, structured experiences, we’ve shown that one-size-fits-all design doesn’t work for AI. To ensure relevant behaviours, adaptive systems must reshape themselves dynamically to reflect the stakes of the moment, the user’s intent, and the surrounding context.
Without careful design, these systems risk delivering generic, irrelevant, or even harmful responses. To make the most of AI’s adaptive potential, we need a different way of thinking.
This is where scenario mapping becomes our secret weapon. It lets us design systems that don’t just respond but respond appropriately.
Examples Across Industries, Modalities, and Intent
The magic of scenario mapping is that it applies everywhere. Whether you’re building a dashboard for sales managers, a companion for fitness enthusiasts, or smart presets for photographers, these principles remain the same. Adaptive design isn’t tied to a specific industry or interface—it’s about aligning the system’s behaviour with the needs of the moment.
Let’s take a tour through some examples to see how scenario mapping plays out across industries, products, and modalities:
Example 1: A Sales Analytics Platform
Industry: Sales management for mid-sized organisations. The target user is a sales manager responsible for driving revenue across multiple channels.
Product Value: The platform helps users save time, maintain real-time visibility into performance, and identify critical opportunities for growth.
Interaction Modality: Embedded insights within a traditional UI dashboard. The system is displayed alongside performance charts, augmenting them with plain-text messages that highlight key insights or propose next steps.
Challenges and Considerations:
Intent: Is the manager exploring overall performance, focusing on a single underperforming product, or investigating an unusual revenue dip?
Stakes: A routine quarterly review has lower stakes than a sudden revenue drop or a key account renewal in jeopardy.
Context: What time of month or quarter is it? Are external signals (e.g. economic downturns, seasonality) affecting sales?
Adaptive Behaviour:
For low-stakes intent: Provide high-level summaries like “Revenue is up 12% this quarter—driven by strong channel growth.”
For high-stakes intent: Offer detailed diagnostics and actionable insights, such as: “Revenue is down 15% this week. Contributing factors include reduced repeat business (-8%) and fewer new customers (-7%). Would you like to review customer churn data?”
In urgent contexts (e.g. an unexpected drop during peak sales season): Suggest immediate actions like “Launch a flash promotion in your highest-performing channel.”
Here, the system adapts its tone, focus, and depth to the stakes at hand—turning raw numbers into actionable insights.
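As a sketch of how that adaptation might be wired up, the widget could route an assessed scenario to one of a few response modes. The function name, mode names, and rules here are assumptions for illustration, not a real implementation:

```python
def pick_mode(stakes: str, urgent: bool) -> str:
    """Map an assessed dashboard scenario to a response mode (illustrative rules only)."""
    if urgent:
        return "suggest-immediate-action"  # e.g. propose a flash promotion
    if stakes == "high":
        return "detailed-diagnostics"      # break down contributing factors, offer drill-downs
    return "high-level-summary"            # a one-line trend headline
```

In practice the stakes and urgency assessments would themselves come from scenario-aware classification, but separating “assess the scenario” from “pick the behaviour” keeps both halves designable.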
Example 2: A Fitness Tracker
Industry: Consumer health and fitness. The user could be anyone from a casual jogger to a dedicated marathon runner.
Product Value: Helping users stay motivated, track progress, and achieve fitness goals safely.
Interaction Modality: Contextual recommendations delivered without explicit input, responding to behavioural and biometric signals.
Challenges and Considerations:
Intent: Is the user intentionally pushing themselves toward a performance milestone, or are they unaware of potential health risks?
Stakes: Encouraging higher effort during routine runs is low-stakes, but detecting overexertion or an irregular heartbeat during intense activity is high-stakes.
Context: Environmental factors (e.g. extreme heat or humidity) or time of day may influence the system’s recommendations.
Adaptive Behaviour:
For low-stakes intent: “You’ve been consistent this week—great job! Want to aim for a faster pace tomorrow?”
For high-stakes scenarios: “Your heart rate has spiked unusually high. Slow down and take a break to avoid injury.”
In context-aware scenarios (e.g. hot weather or a steep incline): “Take it easy—hydration is key when running in these temperatures.”
Here, the system reads the room—well, the run—and shapes its responses accordingly.
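A minimal illustration of implicit-input adaptation: simple threshold rules over biometric and environmental signals. The thresholds and messages below are placeholders, not real product logic:

```python
def coach_message(heart_rate: int, max_hr: int, temp_c: float) -> str:
    """Turn implicit run signals into a recommendation (thresholds are placeholders)."""
    if heart_rate > 0.9 * max_hr:
        # High-stakes: prioritise safety over encouragement
        return "Your heart rate has spiked unusually high. Slow down and take a break."
    if temp_c > 30:
        # Context-aware: environmental signal shifts the advice
        return "Hydration is key when running in these temperatures. Take it easy."
    # Low-stakes default: motivational nudge
    return "You've been consistent this week! Want to aim for a faster pace tomorrow?"
```

A production system would use far richer signal interpretation, but the priority ordering (safety, then context, then motivation) is the design decision worth documenting.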
Example 3: A Photo Editing Tool
Industry: Creative software for professional photographers and casual users alike.
Product Value: Simplifying complex editing processes for professionals while offering accessible, one-click solutions for casual users.
Interaction Modality: Embedded AI tools, like auto-enhance filters and smart retouching options, that the user can optionally trigger to override their current settings.
Challenges and Considerations:
Intent: Is the user a professional seeking precise control over edits, or a casual user looking for quick fixes?
Stakes: Casual editing for a social media post has low stakes, but preparing a high-resolution print for a client demands precision and flexibility.
Context: Is the photo an everyday snapshot or part of a professional portfolio?
Adaptive Behaviour:
For low-stakes users: Auto-enhance the image with minimal user input, e.g. “Applied a warm tone and adjusted brightness—click to undo or tweak further.”
For high-stakes users: Provide fine-grained control: “Here’s a histogram adjustment. Would you like to refine tone curves or apply masking?”
In context-aware scenarios: Adjust recommendations based on metadata (e.g. overexposure detected in a sunny outdoor photo).
Example 4: A Factory Control System
Industry: Industrial automation and manufacturing. The user is a factory floor manager responsible for equipment uptime.
Product Value: Ensuring machines run efficiently, reducing downtime, and addressing faults proactively.
Interaction Modality: An AR headset providing live updates and suggested actions in response to machine performance data, overlaid onto machines in the user’s field of view.
Challenges and Considerations:
Intent: Is the user proactively monitoring routine operations or reacting to a flagged malfunction?
Stakes: Low stakes for a machine running slightly below efficiency; high stakes for a critical fault threatening production.
Context: Are multiple machines showing irregularities, suggesting a systemic issue? Does historical data suggest a pattern of common faults?
Adaptive Behaviour:
For low-stakes intent: “Machine X’s efficiency has dropped 5%—consider scheduling maintenance this week.”
For high-stakes intent: “Machine Y has stopped functioning. Would you like to trigger an emergency shutdown or escalate to the technician?”
In context-aware scenarios: Highlight patterns suggesting broader problems, e.g. “Three machines are showing similar anomalies. Review system-wide diagnostics?”
Example 5: A Ride-Hailing App
Industry: Transportation and mobility. The user could be a commuter, traveller, or someone in an emergency.
Product Value: Getting users where they need to go quickly, affordably, and safely.
Interaction Modality: A mobile app that dynamically suggests routes and transportation modes based on the entered destination.
Challenges and Considerations:
Intent: Is the user planning ahead, seeking the fastest route, or responding to an urgent need?
Stakes: A routine commute is low-stakes, but running late to a client meeting is high-stakes.
Context: Is the user running late for an appointment, are they in a new and unfamiliar location, or requesting a ride at a busy time or in poor weather conditions?
Adaptive Behaviour:
For low-stakes intent: “Here are the cheapest rides available. Would you like to add a stop?”
For high-stakes intent: “The fastest ride is 7 minutes away. Shall we confirm?”
In context-aware scenarios: Adjust recommendations dynamically, e.g. “Traffic is unusually heavy—would you like a faster walking route to the station?”
Example 6: A Customer Support Chatbot for a Finance Tool
Industry: Personal and business finance. The user could be a sole trader, a CFO of a small company, or an individual managing their household budget.
Product Value: Provides instant access to financial support, delivering factual information (such as balances), guidance (like savings strategies), and actions (e.g. freezing a credit card) via an accessible chatbot interface.
Interaction Modality: Open-ended, free-form text input allows users to ask anything—ranging from low-stakes queries (“What’s my balance?”) to high-stakes situations (“What should I do about this £500 charge?”).
Challenges and Considerations:
Intent: Does the user need factual information, creative exploration, or problem-solving?
Stakes: Is the query exploratory (“How do I improve cash flow?”) or urgent (“This charge is fraudulent—help!”)? Do they want to take an irreversible account action (“Transfer £500 to Dejana.”)?
Context: What’s the current status of their account? Has there been any unusual activity? What are their typical spending habits? The list goes on.
Adaptive Behaviour:
Factual Mode: Provide a clear, precise answer when confidence is high (e.g. account balances or due dates).
Advisory Mode: Offer clarifying questions or optionality when responding to ambiguous advice-seeking queries (e.g. savings tips or financial strategies).
Problem-Solving Mode: Prioritise urgent, actionable steps in high-stakes scenarios (e.g. freezing a card or escalating fraud alerts).
Transparency and Confidence: Communicate the confidence of its advice and clarify when users should take additional steps (e.g. consulting a financial advisor).
Here, the system adapts not only its tone and content but its level of transparency and deference—balancing empowering users with protecting their wellbeing.
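One way to picture this is a small router that picks a mode once a query has been classified. The mode names and the confidence threshold are assumptions for illustration:

```python
def route(query_type: str, stakes: str, confidence: float) -> str:
    """Choose a chatbot mode for an already-classified query (illustrative logic)."""
    if stakes == "high":
        return "problem-solving"       # urgent, actionable steps come first
    if query_type == "factual":
        # Only answer outright when confidence is high; otherwise hedge
        return "factual" if confidence > 0.8 else "factual-with-caveats"
    return "advisory"                  # clarifying questions and optionality
```

Real systems would classify intent, stakes, and confidence probabilistically rather than receive them as clean arguments, but the routing table itself is exactly the kind of behaviour specification scenario mapping produces.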
These examples demonstrate the universal relevance of scenario-aware design. Across industries, platforms, and modalities, the principles remain the same: understand the user’s intent, weigh the stakes, and design for context.
Modality and Ambiguity: How Input Shapes Adaptive Complexity
In the non-linear experiences of AI products, we can’t rely on the predictable assumptions about user intent that are the cornerstone of linear flow UIs. We must instead infer intent, or establish it via contextual data or direct enquiry to the user.
But not all AI products are created equal when it comes to input and interaction. Some tools operate within tightly defined boundaries, with constrained input options that limit this ambiguity surrounding user intent. Others, by design, invite open-ended queries that demand significant inference and adaptation. As designers, we need to understand how these differences affect the complexity of our work.
At the heart of this complexity lies input modality: the way users communicate with the system. Whether they’re selecting from a predefined list, typing a free-text query, or simply relying on implicit signals like their location or behaviour, the modality shapes how easily the system can infer user intent, stakes, and context.
Constrained Input (Low Ambiguity): Systems triggered by explicit actions or with predefined menus are easier to map because the range of possible scenarios is limited. For example, our embedded sales dashboard widget that enhances a bar chart with insights operates in a defined space: the user’s action (viewing the chart) and intent (seeking performance insights) are relatively clear. The design challenge lies in enriching the experience with contextually relevant insights and actions.
Free-Form Input (High Ambiguity): Systems that accept open-ended inputs, like chatbots or voice assistants, need to handle a far greater range of possibilities. A single query—“How can I improve cash flow?”—could mean anything from curiosity to a financial emergency, depending on the context. The system must infer intent (or take actions to find it, such as asking clarifying questions and/or checking recent activity) and dynamically adjust its behaviour to provide a relevant and helpful response.
Implicit Input (Context-Only): Some systems don’t rely on explicit user input at all, instead adapting to environmental or behavioural signals. A fitness tracker, for instance, might notice a user running at an unusual pace and decide whether to encourage them (“Great progress!”) or warn them (“Slow down to avoid injury”). Here, the design challenge shifts to anticipating context and ensuring the system responds appropriately to incomplete or ambiguous signals.
Input Modality and Scenario Mapping
The expansiveness of the scenarios we need to map correlates directly to the openness of the input modality. Constrained systems require fewer scenarios but may need more depth in their specific contexts (such as tailoring insights to different user roles, or performance trends). Open-ended systems, on the other hand, demand broader scenario mapping to account for ambiguity in user intent.
Let’s consider the two extremes of this spectrum:
Example 1: The Embedded Sales Dashboard Insights Widget
Modality: Constrained; the user is already viewing a specific performance chart.
Ambiguity: Low; the system knows the user is focused on sales performance and can surface relevant insights directly.
Scenario Mapping Focus: Tailoring responses to different user roles (e.g. salesperson vs. manager) and their different priorities (personal vs. team performance). We can also anticipate possible high- and low-stakes scenarios (like quarterly reporting vs. crisis response) and how these might combine with possible contexts, like historical performance trends or sudden changes, to tailor the insights and options presented.
Example 6: The Finance Chatbot
Modality: Open-ended; the user can ask any question, spanning factual queries, advice, and problem-solving.
Ambiguity: High; the system must infer intent from vague or multi-layered inputs.
Scenario Mapping Focus: Users could be approaching the system with an incredibly diverse range of intents (information seeking, decision-making, creative exploration, problem-solving, task completion, etc.), stakes (potential financial losses, exposure to fraud, etc.), and contexts (changes in spending behaviour, recent account actions, pending requests or enquiries, etc.). Scenario mapping would need to detail a large range of scenarios and define appropriate responses, considerations, and guidance given different possible contexts.
The prospect of designing adaptation for open-ended products like customer service chatbots can feel daunting. But when you zoom into specific scenarios, they can look a lot like those of simpler, more constrained modalities. The secret: address scenarios methodically, one at a time, to slowly build out more capabilities. This is where scenario mapping can shine—helping teams gain perspective on the system they’re building and break it down into smaller deliverable pieces.
Scenario Mapping in Action
Designing for adaptive AI can feel like trying to tame chaos. These systems generate responses on the fly, driven by probabilities and patterns we can’t fully predict. How do you bring clarity and intentionality to something so inherently dynamic?
Mapping scenarios offers us a structured approach that helps us define system behaviour across a wide range of user needs, stakes, and contexts. Think of it as a blueprint for adaptation—a way to translate the unpredictability of AI into thoughtful, scenario-aware design.
In my next article, I’ll explore the five fundamental steps of scenario mapping:
Identify User Intents
Start by defining the broad goals users are likely to bring to the system. Are they here to explore options, solve a problem, or seek information? This should feel familiar to anyone practising user-centred design—it’s about asking why the user is engaging in the first place.
Group Intents into Scenarios
Once you’ve identified common intents, group them into high-level categories that reflect the nature of the interaction. These might include exploration, decision-making, task completion, or even crisis response.
Determine Stake Levels
For each scenario, assess the stakes of the interaction. Is this a low-stakes task where playful suggestions are welcome, or a high-stakes situation that demands precision and care? These stakes should inform how the system responds.
Consider Context
Document the environmental, behavioural, or emotional signals that might influence user needs—and consider how your system can recognise these signals. Are they derived from user behaviour, like account history or scrolling through transaction details? Or external factors, like time of day or urgency?
Define System Behaviours
Finally, for each scenario, specify how the system should adapt. What mode should it switch to? How should its tone and guidance shift? Does it need to surface confidence, offer optionality, or escalate to human support? This step aligns system behaviours with intent, stakes, and context—ensuring users feel understood, supported, and in control.
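Taken together, the five steps could be captured as plain data that a team iterates on. Everything below, from the scenario names to the behaviours, is a hypothetical sketch using the finance examples from earlier:

```python
# A minimal sketch of the five-step mapping exercise as data (all names hypothetical).
intents = ["explore savings options", "dispute a charge"]            # 1. identify intents
scenarios = {"exploration": ["explore savings options"],             # 2. group into scenarios
             "crisis response": ["dispute a charge"]}
stakes = {"exploration": "low", "crisis response": "high"}           # 3. stake levels
context = {"crisis response": ["fraud alert sent", "rapid scrolling"]}  # 4. context signals
behaviours = {"exploration": "playful what-if suggestions",          # 5. system behaviours
              "crisis response": "decisive steps, offer escalation"}

def brief(scenario: str) -> str:
    """Assemble the behaviour spec a team would hand to the system."""
    return (f"{scenario}: stakes={stakes[scenario]}, "
            f"signals={context.get(scenario, [])}, "
            f"respond with {behaviours[scenario]}")
```

The format matters less than the discipline: each scenario ends up with explicit stakes, signals, and behaviours that can be reviewed, tested, and refined.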
Why This Matters
AI systems have unlocked incredible potential: tools that anticipate needs, clarify ambiguity, and adapt dynamically to users. But to unlock this promise, designers need to go beyond flows and journeys. We must build scenario-aware systems—tools that don’t just respond, but actively reshape themselves to meet the user where they are.
The work of adaptive design will separate the great AI products from the mediocre ones. And in this competitive landscape, adaptability isn’t just a feature—it’s a foundation for creating trust, clarity, and delight.
The future of AI isn’t just about building smarter systems—it’s about designing better experiences. Let’s start shaping them.
Signal Path
AI is reshaping the way we design—our products, our tools, our jobs. Signal Path is a weekly exploration of the challenges and opportunities in making AI products intuitive, trustworthy, and a little more human. Written by Andrew Sims, a design leader working in Fintech, it’s for designers, product thinkers, and technologists grappling with AI’s impact on their work and the products they make. I’m not claiming to have all the answers—this is a way to think through ideas, articulate challenges, and learn as I go. Thank you for joining me as I navigate this path.