AI Design Beyond the Interface: How Designers Can Shape AI Behaviour
Great AI product design isn’t just about UI; it’s about teaching our products to think critically.
When AI acts unpredictably, the problem isn’t the interface—it’s the thinking underneath. And thinking is something we can design.
AI isn’t just another feature inside a product; it’s an active presence that shapes the experience as much as the interface around it. But here’s the problem: we often treat it like a black box, a machine behind the curtain, spinning answers from who-knows-where. It feels like magic—until it isn’t.
But when we stop thinking of AI as a monolith and instead design it as a system of structured roles and workflows, we bring its behaviour into focus. We decide how it responds, when it escalates, and where it draws the line. We shape not just how AI appears, but how it reasons, how it earns trust, and how it serves people in the right way at the right moment.
This is the next frontier of AI design: moving beyond making containers for AI and instead shaping its presence, its boundaries, its very thought process.
This is where designers aren’t just well equipped to help; it’s where they’re essential.
What We Teach, What We Build
In high school history class, I learned an important lesson: not all sources are created equal. We were taught to distinguish between primary and secondary sources, to consider the provenance of a document, and to question what perspectives or biases might be shaping the narrative. A battle report written by a general might look authoritative, but what if it was written to justify a poor decision? A news article might seem factual, but what if it was repackaged from another source with a different agenda?
This wasn’t just about history—it was an introduction to critical thinking. Before accepting information, you first consider where it came from, what processes shaped it, and whether it stands up to scrutiny.
Years later, AI presents us with the same challenge: not all answers are created equal.
From the outside, AI systems appear seamless. You type a request, and a response appears. No visible steps, no checks and balances—just an answer, as if summoned from nowhere. It’s easy to assume AI is some kind of monolith, a vast intelligence that “just knows” things.
But that history lesson was well taught and hard learned. Just because it’s confident doesn’t mean it’s correct. And just because it’s seamless doesn’t mean it’s thorough.
Shaping Thought, Shaping Systems
Just as we teach students to think critically—to examine sources, weigh perspectives, and apply reasoning—we need to shape AI systems to think in structured, intentional ways.
If an AI system is going to help someone make a financial decision, diagnose an illness, or understand a legal contract, it can’t just “generate an answer.” It must follow a process:
Where does this information come from?
What checks are in place to prevent mistakes?
What assumptions are being made?
What level of confidence should be communicated?
What options should be presented—and what should be explicitly avoided?
These aren’t questions that can be left to chance. Just like we teach critical thinking to students, we must design critical processes into AI systems to ensure they generate responses that are appropriate, predictable, and reliable.
Because in the highest-stakes use cases—the ones where AI has the greatest potential for transformation—a misstep isn’t just inconvenient. It can have real, irreversible consequences for users.
This isn’t just an engineering problem; it’s a design problem.
How Do You Tame a Black Box?
Imagine you’re in a library. Not just any library—a vast, endless one, with shelves stretching into the distance. Somewhere in this library is the exact piece of information you need, but its vastness is daunting.
Now imagine that instead of searching for it yourself, you call out into the air, and a voice responds instantly with an answer.
That’s how AI often feels—like a disembodied intelligence, reaching into some vast knowledge system and plucking out the perfect response. But is that really what’s happening?
Not quite.
A Mashup of Knowledge, Not a Single Truth
Large Language Models (LLMs) don’t retrieve one definitive source of truth—they generate a response based on probabilities. When you ask a question, the model isn’t consulting a single book on a shelf; instead, it’s synthesising patterns across everything nearby, blending insights from documents, articles, conversations, and other texts it has been trained on.
But here’s the problem:
Some of those sources are authoritative and well-researched.
Some are out of date.
Some are fiction.
Some reflect cultural biases or information gaps.
So what you get back isn’t a clean fact, but an amalgam—a fuzzy assemblage of possibly relevant information, weighted by probability but not by reliability.
This makes LLMs incredibly dynamic and versatile—they can adapt language, rephrase ideas, and draw connections across disciplines. But it also makes them unpredictable. Because they don’t fact-check or verify sources the way a human researcher would, they can confidently generate:
Outdated or incorrect information
Well-written but entirely fictional claims
Biased or misleading perspectives
This unpredictability is what we get when we treat AI as ‘one big thing’—a vast intelligence that seems all-knowing but, in reality, lacks structure, verification, and explicit constraints.
The Alternative: AI as a System of Roles
Now imagine an alternative:
Instead of a single, amorphous black box, imagine AI designed as a structured system of specialised roles.
Instead of guessing what’s true, it follows intentional steps to verify and validate responses.
Instead of overgeneralising, it is structured into distinct roles that specialise in different tasks.
Instead of making unpredictable decisions, workflows ensure responses are clear, explainable, and constrained.
This shift in how we think about AI transforms how we approach its design. For designers, this means we’re not just shaping the interface—we’re defining the thought process itself.
If we think of AI as a black box, we design systems that can overreach, misrepresent themselves, and make decisions in ways we can’t explain.
But if we design AI as a structured system, we can shape its thought process—ensuring that it follows intentional steps, applies necessary checks, and only delivers responses that are appropriate, predictable, and reliable.
From Chaos to Coordination: AI as a Kitchen
If treating AI as a black box creates unpredictability, then what’s the alternative?
Think of a restaurant kitchen.
In a busy, well-run kitchen, orders come in, dishes go out, and everything runs like clockwork. But this isn’t because there’s one all-knowing chef doing everything at once. It’s because the kitchen is a structured system of specialised roles, coordinated by an orchestrator.
The head chef (orchestrator) oversees the operation—assigning tasks, ensuring everything is prepared in the right order, and stepping in when needed.
The line cooks (specialised AI roles) handle specific jobs—grill station, sauté station, pastry chef, bartender. Each has a clear responsibility.
The workflow (the kitchen process itself) ensures every dish is made step by step, following a clearly defined and repeatable process, with clear quality control before it reaches the customer.
Imagine what would happen if a kitchen lacked this structure:
Orders would get mixed up.
The pastry chef might start grilling steaks.
A bartender might try to run the whole kitchen, making everything taste a little like gin.
Some dishes would be rushed, others forgotten, and no one would know whether the food was safe to serve.
This, of course, is exactly what happens when AI systems lack structure:
The system oversteps—offering advice when it should retrieve facts.
Critical steps get skipped—leading to errors, misinformation, or missing context.
The AI’s behaviour becomes unpredictable—changing tone, making inconsistent decisions, or failing to disclose uncertainty.
Instead of one big, mysterious entity, a structured AI system works like a restaurant kitchen:
An orchestrator classifies the request and directs it to the right AI role.
Workflows ensure each step is executed correctly before responses are delivered.
Specialised AI components handle different tasks—retrieval, reasoning, summarisation, validation.
Nobody in the kitchen is expected to do everything. The pastry chef doesn’t throw a steak on the grill. The line cook doesn’t mix martinis. When things work well, it’s not because one person knows how to do it all—it’s because the system is well-structured, with defined roles that work together.
A good AI system should work the same way.
Workflows?
An AI workflow is a structured sequence of steps that an AI system follows to generate an appropriate response. Rather than treating AI as a single entity that 'thinks' holistically, workflows break down tasks into specialised steps—each handling a part of the process. These workflows ensure AI behaves predictably, applies the right checks, and adapts its responses based on the stakes of the situation.
Each step can be performed by a specialised AI component, tool, or rule-based system, depending on what’s needed. Some steps might involve retrieving factual data, while others involve reasoning, summarisation, validation, or even escalating the request to a human when needed.
Types of AI Workflows
AI workflows vary depending on the task. Here are a few common structures:
Linear Workflows – Sequential steps, useful for structured tasks like customer verification or troubleshooting.
Decision Tree Workflows – AI routes a request based on predefined conditions, like categorising support tickets.
Iterative Workflows – The AI refines its response through loops, like a writing assistant improving a draft.
Orchestrator-Worker Workflows – A central AI delegates tasks to specialised sub-models or tools, like a research assistant retrieving and summarising reports.
The key takeaway? AI shouldn’t be thought of as one big mind—it’s a system of coordinated steps. Workflows help us design AI that thinks before it speaks, checks before it acts, and adapts to user needs with clarity and control.
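To make this concrete, here’s a rough sketch in Python of an orchestrator-worker workflow. Everything in it—the role names, the keyword classifier, the stubbed responses—is illustrative rather than a real framework; the point is simply to show a request being classified, delegated to a specialised role, and checked before it’s served.

```python
# A minimal sketch of an orchestrator-worker workflow.
# Names and logic are illustrative, not a production framework.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Request:
    text: str

def classify(request: Request) -> str:
    """A lightweight 'orchestrator' step: decide which role should handle this.
    In practice this might be a small model; here it's a keyword heuristic."""
    text = request.text.lower()
    if "charge" in text or "transaction" in text:
        return "transaction_explainer"
    if "balance" in text:
        return "fact_retrieval"
    return "general_assistant"

def retrieve_facts(request: Request) -> str:
    return "Your current balance is £1,250.40."  # stubbed data lookup

def explain_transaction(request: Request) -> str:
    return "This £12.99 charge appears to be a monthly subscription renewal."

def general_answer(request: Request) -> str:
    return "I can help with balances, transactions, and account questions."

# Each role is a specialised worker with a single, bounded responsibility.
ROLES: dict[str, Callable[[Request], str]] = {
    "fact_retrieval": retrieve_facts,
    "transaction_explainer": explain_transaction,
    "general_assistant": general_answer,
}

def run_workflow(request: Request) -> str:
    role = classify(request)          # step 1: orchestrate
    draft = ROLES[role](request)      # step 2: delegate to a specialised role
    # step 3: quality control before anything reaches the user
    assert len(draft) > 0, "an empty response should never be served"
    return draft

print(run_workflow(Request("Why was £12.99 charged to my account?")))
```

The structure, not the model, is what makes the behaviour predictable: the same request always takes the same path, and each role can be tested and improved on its own.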
What This Means for AI Design
Designing user experiences for AI products can feel like creating a container—the interface that surrounds the system. We craft chat windows, dashboards, and interaction flows that let users engage with an underlying intelligence that can feel, in many ways, mysterious. The AI is the engine under the hood, and our job is to make the dashboard and steering wheel—the UI that makes it usable.
This is important work. The challenge of designing intuitive and usable AI interfaces—ones that set expectations, disclose uncertainty, and create trust—is complex, fascinating, and still emerging. But if designers move away from the idea of mysterious black boxes, and instead think of our AI systems as structured workflows, something interesting happens.
Instead of designing containers for AI, we have the opportunity to design AI itself.
From Containers to Co-Creation
Once we see AI as structured workflows rather than a black box, designers step into a new role:
Defining AI behaviours: Not just how AI appears, but how it thinks, responds, and adapts in different situations.
Shaping outputs: Not just presenting AI-generated content, but structuring how it’s created, validated, and communicated.
Building constraints and safeguards: Not just making AI accessible, but ensuring it operates safely, predictably, and within defined limits.
This is a shift from designing the frame to designing the process.
Where before we designed how users access AI, by envisioning workflows we can now design how AI engages with users—what it should say, how it should behave, and what processes it should follow before reaching a conclusion.
Designing Thought, Not Just Interfaces
This means applying familiar design skills—mapping user journeys, understanding context, defining interaction models—but to an entirely new domain: the inner workings of AI.
Instead of mapping user flows, we map AI workflows—how information is retrieved, processed, verified, and delivered.
Instead of designing navigation, we design orchestration—how requests are routed to, and transitioned between, specialised AI roles.
Instead of defining UI patterns, we define behavioural patterns—how AI should respond differently based on user intent, stakes, and context.
AI systems don’t just need good UI—they need good thinking. And designers are uniquely positioned to help shape it.
The Opportunity in Front of Us
This shift is exciting because it expands what it means to be a designer in AI.
We’re no longer just shaping the surface—we’re co-creating the system itself.
The best AI experiences won’t come from throwing a powerful model behind a sleek interface. They’ll come from intentional, structured, well-designed workflows that ensure AI is reliable, explainable, and aligned with user needs.
That’s not just a UX challenge.
That’s a systems design challenge. A behaviour design challenge. A thinking challenge.
And for designers, it’s an opportunity to shape the future of AI—not just how it looks, but how it works.
Why AI Structure Matters
When AI is treated as a monolithic black box instead of a structured system with clear roles and workflows, it becomes unpredictable. It can overreach, giving advice when it should only retrieve facts. It can skip critical steps, failing to confirm information before taking action. It can misrepresent itself, switching tones and behaviours unpredictably.
These failures don’t just break the user experience—they erode trust. When AI behaves inconsistently, users don’t know what they’re dealing with. They hesitate, second-guess, and eventually disengage.
UX Implications of Poor Structure
1. Acting Beyond Its Role
Without defined roles and constraints, an AI can mix factual retrieval with speculative reasoning—creating an answer that sounds confident but isn’t appropriate.
A user asks: “Should I take out a loan?”
A well-structured AI would recognise this as a decision-support request, provide neutral pros and cons, and suggest consulting a financial advisor.
A poorly structured AI might generate what sounds like financial advice, overstepping its role and misleading the user.
2. Jumping to an Answer Too Soon
Traditional UX enforces critical steps through linear flows—AI doesn’t. If workflows aren’t explicitly designed to ensure due diligence, important guardrails get skipped.
A user asks: “Freeze my card.”
A well-structured AI would confirm the user’s identity, explain consequences, and ask for final approval.
A poorly structured AI might immediately execute the request—leaving the user locked out of their own finances.
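One way to make that guardrail tangible: the workflow can refuse to act until identity checks and explicit approval have happened. The sketch below is purely illustrative—the function and flags are hypothetical—but it shows how the structure, rather than the model, enforces the safeguards.

```python
# Illustrative sketch: an action workflow that enforces confirmation steps
# before executing a sensitive request such as "Freeze my card".

def freeze_card_workflow(user_verified: bool, user_confirmed: bool) -> str:
    # Guardrail 1: never act on an unverified identity.
    if not user_verified:
        return "Before I can freeze your card, I need to verify your identity."

    # Guardrail 2: explain the consequences and require explicit approval.
    if not user_confirmed:
        return (
            "Freezing your card will block all payments, including recurring "
            "subscriptions, until you unfreeze it. Do you want to proceed?"
        )

    # Only once both checks pass does the system execute the action.
    return "Done—your card is frozen. You can unfreeze it at any time."

print(freeze_card_workflow(user_verified=True, user_confirmed=False))
```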
3. Misrepresenting Itself
AI doesn’t “know” its own limits unless we design them in. Without structure, a language model can confidently generate outputs that exceed its intended function.
A user asks: “What should I invest in?”
A well-structured AI would frame its response carefully: “I can’t provide investment advice, but I can summarise trends from the market.”
A poorly structured AI might generate investment recommendations without disclosing limitations, making it seem more authoritative than it really is.
4. Generating Unreliable Answers
AI models don’t inherently fact-check themselves. Without structured workflows that introduce retrieval, validation, and verification steps, there’s no way to guarantee correctness.
A user asks: “What’s the best treatment for migraines?”
A well-structured AI would retrieve answers from trusted medical sources and include disclaimers.
A poorly structured AI might hallucinate a response, sounding authoritative even if the information is incorrect.
The Benefits of Structure
AI systems built as workflows rather than monolithic, all-purpose models offer several UX, architectural, and practical advantages. Workflows structure decision-making, improve reliability, reduce cost, and increase modularity.
1. Efficiency: Doing More with Less
Large AI models are expensive and slow, especially for multi-step tasks. Workflows break down interactions into structured steps, ensuring that only the necessary components are used.
A lightweight model can classify a request before engaging a more complex AI.
If a simple lookup solves the problem, there’s no need to call an LLM at all.
Responses are refined step by step, reducing wasteful over-processing.
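As a rough illustration, here’s what that routing might look like in Python. The FAQ lookup and the `call_large_model` stub are stand-ins for whatever a real product would use; the pattern is the point.

```python
# Sketch: route cheap requests away from the expensive model.
# `call_large_model` is a placeholder for whatever LLM API a product uses.

FAQ_LOOKUP = {
    "opening hours": "Support is available 24/7 via in-app chat.",
    "account balance": "You can see your live balance on the home screen.",
}

def call_large_model(prompt: str) -> str:
    return f"[expensive LLM response to: {prompt}]"  # stub

def answer(request: str) -> str:
    # Step 1: try a simple lookup first—no model call needed.
    for key, canned in FAQ_LOOKUP.items():
        if key in request.lower():
            return canned
    # Step 2: only fall through to the large model when necessary.
    return call_large_model(request)

print(answer("What are your opening hours?"))         # answered without an LLM
print(answer("Help me plan next quarter's budget"))   # escalated to the LLM
```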
2. Reliability: Reducing Errors, Avoiding Hallucinations
LLMs don’t fact-check—they generate plausible responses, correct or not. Workflows can introduce validation steps, checks, and escalation paths to prevent errors.
Responses can be checked against a database before reaching the user.
Multi-step decision-making ensures AI doesn’t compound low-quality outputs.
If confidence is low, the AI can escalate to a human or fall back to rules-based logic.
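A minimal sketch of what those checks could look like, with illustrative thresholds and stubbed model and database calls:

```python
# Sketch: a validation step plus confidence-based escalation.
# The scoring, threshold, and lookups are illustrative assumptions.

def generate_response(question: str) -> tuple[str, float]:
    # Placeholder for a model call that also returns a confidence estimate.
    return "Your last payment to AcmeCo was £42.00 on 3 March.", 0.62

def validate_against_records(answer: str) -> bool:
    # Placeholder check against a trusted database before the user sees it.
    return "AcmeCo" in answer

def reliable_answer(question: str) -> str:
    answer, confidence = generate_response(question)
    if not validate_against_records(answer):
        return "I couldn't verify that against your records—escalating to support."
    if confidence < 0.7:
        # Low confidence: fall back to a human or a rules-based path.
        return "I'm not certain about this one, so I've flagged it for a specialist."
    return answer

print(reliable_answer("How much did I pay AcmeCo last month?"))
```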
3. Interpretability: Making AI Decisions Transparent
Users (and regulators) need to understand why an AI made a decision—but LLMs are black boxes. Workflows break down decision-making into clear steps that can be explained, reviewed, and improved over time.
Every step is recorded, allowing traceability of decisions.
Instead of a single opaque answer, confidence scores and sources can be shown.
Engineers can adjust workflow components without retraining an entire model.
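For instance, a workflow might carry a trace of every step alongside its answer—something like the sketch below, where the step names and values are invented for illustration.

```python
# Sketch: record every workflow step so a decision can be traced and explained.

def traced_workflow(question: str) -> dict:
    trace = []

    trace.append({"step": "classify", "result": "transaction_query"})
    trace.append({"step": "retrieve", "source": "transactions_db", "rows": 1})
    trace.append({"step": "validate", "passed": True})

    answer = "The £12.99 charge is a subscription renewal from 2 March."
    trace.append({"step": "generate", "confidence": 0.88})

    # The trace travels with the answer: it can power "why this answer?" UI,
    # audit logs, or regulator-facing explanations.
    return {"answer": answer, "trace": trace}

result = traced_workflow("Why was £12.99 charged to my account?")
for step in result["trace"]:
    print(step)
```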
4. Modularity: The Power of Specialisation
Instead of relying on one massive model, workflows let us combine specialised AI components, each optimised for a different task.
A retrieval model finds knowledge, a reasoning model structures insights, and a text-generation model crafts responses.
Components can be updated or swapped without rebuilding everything.
5. Safety: Guardrails and Ethical Constraints
If we’re to create products that have a positive impact on the world, AI must comply with legal, ethical, and regulatory standards. Workflows enforce safeguards to prevent biased, misleading, or harmful outputs.
AI responses can be filtered for bias or misinformation before users see them.
Certain actions can require explicit user approval before execution.
If AI encounters ambiguity or risk, it can pause and escalate to a human.
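Here’s a deliberately simple sketch of an output guardrail—the phrase list and hand-off message are placeholders, but the shape is what matters: a check runs between generation and delivery, and risky answers are paused rather than served.

```python
# Sketch: a simple output guardrail that runs before a response is shown.
# The blocked-phrase list is a stand-in for whatever policy checks a product needs.

BLOCKED_PHRASES = ["guaranteed returns", "you should definitely invest"]

def passes_guardrails(response: str) -> bool:
    return not any(phrase in response.lower() for phrase in BLOCKED_PHRASES)

def deliver(response: str) -> str:
    if not passes_guardrails(response):
        # Pause and hand off rather than serving a risky answer.
        return "I can't advise on that directly, but I can connect you with an advisor."
    return response

print(deliver("Based on recent trends, you should definitely invest in X."))
```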
6. Scalability: Growing Without Breaking
Monolithic AI systems become expensive and brittle as they scale. Workflows distribute tasks efficiently, allowing AI to grow without losing reliability.
Simple tasks go to lightweight models, while complex ones go to larger models.
If one model fails, the system reroutes to backups or alternate workflows.
Instead of overloading one AI, multiple AI agents share responsibility.
How Designers Can Define AI Workflows
The shift from monolithic AI to structured workflows isn’t just a technical challenge—it’s a design challenge. And to solve it, designers can use a familiar approach: work backwards from user needs to define the structure of an AI system.
Instead of asking, "What can the AI do?", we start by asking:
What is the user trying to achieve?
What’s at stake in this interaction?
What kind of response is appropriate for the situation?
What process should the AI follow before providing an answer?
What should the AI be explicitly prevented from doing?
By defining these parameters first, we create structured workflows that shape how AI behaves before it generates a response—ensuring outputs are appropriate, predictable, and reliable.
The Three Steps for Structuring AI Workflows
To move from an amorphous, unpredictable AI system to one that is structured, reliable, and aligned with user needs, we need to break down its behaviour into intentional roles and workflows.
This process helps us start from user needs and organically translate them into structured workflows and well-defined AI behaviours.
Step 1: Identify the Most Likely or Critical Scenarios of Use
Not all AI interactions are equal. Some are low stakes and exploratory (like “What are some ways to improve cash flow?”), while others are high stakes and require precision (for example, “Why was £500 charged to my account?”). In the same way that we first explore user stories or jobs to be done before jumping into defining user journeys and flows, mapping out these scenarios ensures that we’re designing for real user needs.
We start by answering questions like:
What are the key scenarios where users will interact with this AI?
Which scenarios are the highest stakes, requiring structured responses?
Are there edge cases where the AI could cause harm if it oversteps or makes errors?
Example use case from a finance assistant AI:
Understand an unexpected charge: As an account holder, I want to understand why I received this charge so I can determine if it’s legitimate or needs to be disputed.
By identifying these scenarios upfront, we ground our thinking in real situations the product is likely to face, and we can start to define roles tailored to best serve them.
Step 2: Identify the Kinds of AI Roles Needed
Once we understand the scenarios, we designate AI roles that are responsible for handling different types of interactions.
Roles can be broad or highly specific, depending on the complexity of the product and how much specialisation is needed. The key is ensuring each role has clear boundaries so that AI doesn’t mix tasks inappropriately.
Some AI roles may:
Retrieve factual information - e.g. “What’s my account balance?”
Summarise and explain complex topics - e.g. “How do tax deductions work for my business?”
Guide decision-making without making prescriptive choices - e.g. “What are some common approaches to improving cash flow?”
Take actions at the instruction of the user - e.g. “Transfer £500 to Jerry.”
An example role for our finance assistant AI scenario earlier:
Transaction Explainer: Helps customers understand unexpected charges by retrieving relevant transaction data, analysing influencing factors, and generating a user-friendly explanation.
By assigning clear responsibilities and constraints to each AI role, we prevent the system from overstepping into areas where it lacks authority or accuracy.
Step 3: Define the Attributes That Shape Role Behaviour
Once we’ve identified our AI roles, the next step is to shape their behaviour so they respond appropriately to different situations. Without clear guidelines for attributes like tone, confidence, response format, and escalation, AI can feel inconsistent—or worse, unreliable.
Think of it like designing a character for a film or a professional role in a workplace: What is their responsibility? How should they communicate? When should they defer to someone else?
Some example attributes:
How should responses be framed? - e.g. tone, confidence, level of detail.
What tasks, modes, or validation steps should be required? - e.g. retrieving relevant data, analysing for patterns, filtering for offensive content.
What should the AI explicitly avoid doing? - e.g. providing financial advice.
What data or tools can it access to add context or take actions? - e.g. user behaviour, environmental data, or system inputs.
Some examples for our earlier Transaction Explainer:
Purpose: Retrieve transaction details, analyse influencing factors, and explain why the charge occurred.
Tone and framing: Reassuring, factual, and neutral. Avoid alarmist language.
Confidence handling: If the cause of the charge is unclear, surface additional possible explanations rather than stating a definitive reason.
Response format: Summarised explanation first, with an option to “See More Details.”
Escalation and hand-off: If the user indicates suspicion or confusion, suggest speaking with a fraud specialist.
Guardrails and limits: Never confirm fraud (the AI is not qualified to do that confidently)—only present possible explanations and next steps.
Context signals and data sources: Transaction metadata such as merchant details and category, the cardholder’s details, and whether the payment was made online or in person.
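These attributes don’t have to stay in a design document. Captured as a structured role definition, they become something designers and engineers can review, version, and test together. The sketch below is one possible shape—the field names and the prompt-building helper are assumptions, not a standard schema.

```python
# Sketch: the Transaction Explainer's attributes captured as a structured
# role definition—something designers and engineers can review together.
# Field names and values are illustrative, not a standard schema.

TRANSACTION_EXPLAINER = {
    "purpose": "Retrieve transaction details and explain why a charge occurred.",
    "tone": "Reassuring, factual, neutral; avoid alarmist language.",
    "confidence_handling": "If the cause is unclear, offer possible explanations "
                           "rather than a definitive reason.",
    "response_format": "Short summary first, with a 'See more details' option.",
    "escalation": "Suggest a fraud specialist if the user signals suspicion.",
    "guardrails": ["Never confirm fraud", "Never provide financial advice"],
    "data_sources": ["transaction_metadata", "merchant_details", "payment_channel"],
}

def build_system_prompt(role: dict) -> str:
    """One possible use: turn the role definition into model instructions."""
    guardrails = "; ".join(role["guardrails"])
    return (
        f"You are a transaction explainer. {role['purpose']} "
        f"Tone: {role['tone']} Constraints: {guardrails}."
    )

print(build_system_prompt(TRANSACTION_EXPLAINER))
```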
Attributes to Define AI Behaviours
There are many ways to describe behaviour and many attributes to choose from. Different products will require different design considerations. The key is to define structure where it matters, ensuring AI responses align with intent, stakes, and user expectations. Here are some examples that I’ve found useful as prompts to start thinking. This isn’t an exhaustive list—but a starting point. Think of it as a menu for our AI kitchen.
Attributes to Consider
Purpose: The core responsibility of this AI role/workflow. Example: Summarising legal documents without providing legal advice.
Response Mode: Should the AI retrieve, recommend, analyse, generate, or execute? Example: Retrieve current market trends, but never predict future prices.
Tone and Framing: How formal, confident, or hedged should responses be? What personality and tone of voice suits the situation or our brand? Example: “I’m not a doctor, but here’s what the research says…”
Confidence Handling: Should the AI disclose certainty levels? Should it refuse to answer when unsure? Example: Show confidence % for medical diagnoses, but only retrieve certified sources.
Response Format: How should information be structured? Bullet points, paragraphs, charts, interactive choices? Example: Financial overviews should always include a summary chart.
Depth of Detail: Should responses be brief or detailed? Should users be able to control this? Example: “Would you like a quick summary or a deep dive?”
Validation and Oversight: What checks, cross-references, or human approvals should be required? Example: Any AI-generated loan approval must be reviewed by a human underwriter.
Escalation and Hand-off: When should the AI defer to a human? Example: If an AI chatbot detects user frustration, escalate to live support.
Optionality and Control: Should users be given choices or a single recommendation? Can they switch between different input types, or presentation formats? Can they edit or tweak responses? Example: “Here are three possible routes—fastest, cheapest, most scenic.”
Timeliness and Recency: Should the response be real-time or historical? Example: Summarise today’s financial trends, but avoid making future predictions.
Guardrails and Limits: What must the AI never do? What checks should be put in place? Example: Never give explicit investment advice—only summarise available data.
Personality and Tone of Voice: How does the AI reflect brand values and user emotions? Example: A finance AI should sound trustworthy and pragmatic—not overly casual or playful.
Context Signals: What environmental, behavioural, or user data should influence this role’s response? Example: A fraud detection AI should adapt responses based on transaction size, location, and time of day.
Data Sources: What databases, APIs, or knowledge bases does this role rely on? Example: A regulatory compliance AI should only source information from verified financial regulations and legal frameworks.
Actions and Tools: What actions can this AI role take? What tools can it use? Example: A financial budgeting tool can categorise spending but cannot execute transactions without explicit approval.
Designing AI Thought Processes: What Comes Next?
We’re at the beginning of defining what a good AI behaviour design process looks like. While we have well-established methodologies for interface design, service design, and user experience, the practice of structuring AI workflows and shaping these thought processes is still emerging.
But we don’t have to start from scratch.
We can borrow from fields that have spent decades shaping decision-making, guiding behaviour, and structuring complex systems, in the same way we do for traditional UX.
Behavioural Design and Decision Architecture – When AI steps into decision-making, it shouldn’t overwhelm users with information or back them into a corner. Instead, it should gently guide them toward informed choices—helping them act without stripping away autonomy. This idea isn’t new. Fields like behavioural economics have long studied how small, well-placed nudges shape human decisions.
Service Design and Journey Mapping – Great AI isn’t just an interface—it’s an experience. And like any good service, it needs to be structured around how users actually move through their journey. Service designers have long used blueprints to map out frontstage (user-facing) and backstage (system) interactions. This ensures that everything—from human agents to automated systems—works together seamlessly. Similarly, Jobs-to-Be-Done (JTBD) tells us to stop fixating on features and instead focus on what the user is really trying to accomplish.
Human-Computer Interaction and Explainable AI (XAI) – AI workflows need to be designed to show their reasoning and give users a way to interrogate their responses. Fields like Explainable AI (XAI) have already developed techniques for this. SHAP (Shapley Values) and LIME (Local Interpretable Model-agnostic Explanations) help expose the inner logic behind model predictions, while Google’s Model Cards provide clear documentation of what an AI system can and can’t do.
Systems Thinking and Modular AI Design – In systems thinking, modularity is key to designing scalable, adaptable structures. Similarly, the OODA Loop (Observe, Orient, Decide, Act) teaches us that decision-making systems—whether in military strategy or AI—should continuously observe their environment, assess the context, choose the right action, and iterate based on feedback.
Designing AI behaviours isn’t some entirely foreign discipline—it’s an extension of what designers already do best: structuring interactions, shaping decision-making, and ensuring clarity, control, and trust.
We don’t yet have all the answers, but this is the work ahead of us. This is where AI design is going. And designers need to help shape it.
Wrapping up
The way we think about AI shapes the way we build it. AI isn’t magic—it’s a system we design. The way we structure these systems will define their trustworthiness, their safety, and their impact on the world.
If we think of AI as a single monolithic mind, we end up building one massive model that tries to do everything. The result is unpredictable behaviour, overreach, a lack of transparency, and a system that is expensive and hard to scale.
But if we instead picture our AI system as a network of structured roles and workflows, we can build modular AI components that specialise in different tasks. This approach gives us more predictable behaviour, clear guardrails, explainability, and scalability. It also means we can more easily make changes. Instead of one giant model to retrain, we can tweak paths in workflows or swap out AI components.
If we leave AI as a black box, we get chaos. If we design it with structure, we get clarity, control, and trust.
The best AI products won’t be the ones that simply generate answers. They’ll be the ones that ask the right questions, take the right steps, and deliver responses we can rely on.
What is most exciting about this is that if you break it down, the core challenges of AI behaviour design map neatly onto skills designers already bring to the table:
Shaping how people engage with complex systems: We do this every day in UX, service design, and interaction design. AI just extends that into shaping how the system itself thinks and responds.
Helping people make informed choices: Behavioural design has taught us how to guide decision-making without overwhelming or misleading. That’s exactly what AI workflows need.
Structuring information to be clear and useful: Information architecture, journey mapping, content design—these disciplines are all about organising what users see, when, and why. AI responses should be designed with the same care.
Making processes transparent and explainable: Whether it’s a complex checkout flow or an AI decision-making process, users need to understand what’s happening and why in order to build trust.
Ensuring safe, ethical interactions: Good designers already consider unintended consequences and design for inclusion, fairness, and accessibility. AI requires the same rigour.
This means designers aren’t just well-equipped to shape AI behaviours and workflows—we might be the best-equipped to do so.
Because if AI is going to be useful, trustworthy, and human-centred, we don’t just need to design how people access AI.
We need to design AI itself—its workflows, behaviours, and guardrails—so it works with people, for people, in ways that are predictable, appropriate, and aligned with real needs.
That’s the opportunity ahead of us. And it’s one that designers are uniquely prepared to take on.
Signal Path
AI is reshaping the way we design—our products, our tools, our jobs. Signal Path is a weekly exploration of the challenges and opportunities in making AI products intuitive, trustworthy, and a little more human. Written by Andrew Sims, a design leader working in Fintech, it’s for designers, product thinkers, and technologists grappling with AI’s impact on their work and the products they make. I’m not claiming to have all the answers—this is a way to think through ideas, articulate challenges, and learn as I go. Thank you for joining me as I navigate this path.