From Tools to Teammates: Six Mental Models Shaping AI Products
People use mental models to understand products. If you get your model wrong, users get lost.
Software used to be a tool. Now, it’s a presence. A participant. A co-author. A coach. The job of design is no longer to wrap fixed features in usable interfaces—it’s to define the shape of a relationship.
Traditional software is instrumental: it does what you tell it. You click a button, it performs a function. You enter values, it calculates a result. AI upends that. It introduces agency, probability, and contextual interpretation. The product isn’t just responding—it’s interpreting. Suggesting. Steering.
It shifts from being a tool you use to something you work with. We’re witnessing the birth of entirely new product roles—archetypes that reflect the evolving relationship between users and systems:
Editors are becoming co-authors - More than a canvas for text or media; collaborators that suggest, rewrite, and restructure.
Dashboards are becoming coaches - Displaying more than a snapshot of data; highlighting progress, risks, and opportunities; analysing and explaining.
Libraries are becoming sense-makers - Not just holding information, but summarising, clustering, prioritising, and surfacing meaning.
Inboxes are becoming guides - Beyond receiving messages and notifications, they’re setting the pace and urgency, managing your next action, even shaping relationships.
But in transforming their role, these systems can undermine the affordances users rely on. And without clarifying their purpose, they risk alienating users.
This is one of the central challenges of AI-first design. We’re no longer designing feature sets. We’re designing products to function as partners. We’re no longer designing tools. We’re designing working relationships. And like any relationship, they need clarity. Roles. Boundaries. Expectations.
Giving Software a Form
Technology used to arrive in boxes. It had manuals, buttons, and labels. You knew what it could do by reading the back of the packaging or fiddling with it long enough. The form hinted at the function, and the function mostly stayed the same.
Software has evolved to emulate this form—a button in an app isn’t really a button, it’s just pretending to be. The mental model of the physical device is rendered onto our screens, even if it is constantly shifting and morphing as we move between apps and screens and states.
Now, with AI, the box is gone altogether. In its place is something shapeless, foggy. It doesn’t explain itself. It asks you what you want.
At first, this felt magical—like meeting someone who could do almost anything. But over time, it’s also disorienting. The system is capable, but its contours are undefined. And without contours, we struggle to build intuition. How do we know what it’s good at? Where the edges are?
The first time I used Siri, I wasn’t sure how polite to be.
“Hey... uh, what’s the weather like today?”
It worked. But I remember the hesitation. Not because I didn’t know what Siri could do—but because I didn’t know what kind of thing it was.
Was it a tool? A person? A pet robot? Was I issuing a command or making a request?
This is what happens when a system has capability but no identity. It speaks, but it doesn’t say who it is.
And that’s exactly where many AI-powered products are today: smart, capable, and oddly shapeless. They open with a text box and an open-ended invitation: “Ask me anything.” Which often lands more like “Guess what I do.”
These new systems don’t behave like tools. They feel less like instruments and more like… collaborators. Assistants. Apprentices. Partners. And it’s this shift—from tool to teammate—that demands more from designers. We need to take these nebulous, edgeless things and give them a form that says what they are. Because users don’t need to understand how these systems work. They need to understand what kind of things they are. Is this a colleague or a calculator? A sounding board or a command line? Is it passive, or does it take initiative? Will it always follow the rules, or sometimes surprise me?
Humans don’t need perfect explanations—we just need usable ones. We build internal models to understand tools and systems:
A microwave heats evenly but has hotspots.
A GPS reroutes when I go off course.
A calculator always gives me the same answer.
These are mental models. And as AI gets baked into more products, driving workflows, automating decisions, or advising users, we need new ones. New shapes for interfaces that help people feel not only informed, but oriented. Shapes that help users build intuitive expectations for how the system thinks, acts, and responds.
Choosing the Right Mental Model, Not Just the Right Interface
A mental model is how we understand what a system is and what it’s good at. It’s how we decide whether to trust it. Whether to explore, comply, question, or ignore.
If the model is clear, the product feels usable—even if it’s complex. If the model is vague, even the best AI can feel unpredictable, fragile, or worse.
Good metaphors are scaffolding. They help users build accurate, intuitive expectations. They shape:
What users expect
What they believe is possible
How they recover from confusion
What kind of trust they extend
You’re setting a frame for what kind of relationship the user is entering. A good AI mental model should help users answer:
Who’s in charge? (Do I lead, or does it?)
How much initiative does it take?
Does it always follow instructions, or offer opinions?
How much do I trust it—and how often do I need to check its work?
Is it optimising for speed, accuracy, creativity, control, or delegation?
Our job isn’t to explain AI. It’s to pick the right metaphor for it.
Six Mental Models for AI
Not every AI product is the same kind of partner. Some follow your lead. Some take initiative. Some step in only when invited. Others sit beside you, quietly suggesting better ways forward.
Here are six mental models that show up again and again in AI-native products. Each one sets expectations. Each one defines a different kind of working relationship.
1. The Tool
“You tell it what to do. It does it.”
Simple. Reliable. Obedient. This is the mental model we’ve lived with for decades: calculators, checklists, filters, sort buttons. It does exactly what you ask, no more, no less.
Even in an AI product, this model has a place—especially when users want speed, control, or precision. It’s most useful when outcomes are predictable and repeatable.
But as soon as the product starts showing initiative—offering alternatives, suggesting better paths—it starts to strain against this framing. Because now, it’s no longer just doing. It’s deciding.
User-led
Repeatable, reliable
Best for: Calculators, converters, filters, data input
Interface: Fixed controls, clear feedback
UX risk: Underplays the intelligence or initiative the system actually has
2. The Teammate
“You’re in this together.”
You’re still in the lead, but the product helps shape the work. It suggests, refines, reacts. Think creative tools, strategy helpers, content builders.
A teammate asks questions. Offers drafts. Helps get unstuck. It assumes some shared context—like a colleague you’ve worked with before.
This model works best when the user is still making key decisions, but wants help navigating ambiguity, exploring options, or thinking faster. It brings opinions to the table, but knows when to defer.
Collaborative, turn-based
Can propose, ask clarifying questions
Best for: Content creation, planning, problem-solving
Interface: Side-by-side canvas, editable outputs, draft/refine loops, threaded non-linear paths
UX risk: Ambiguity around boundaries or control
3. The Guide
“It shows the way forward, but you decide.”
Helpful, but not pushy. It understands where you are, where you’re trying to go, and what the terrain looks like. This is the ideal model for planning tools, onboarding flows, or anything that offers structured advice.
A Guide surfaces risks. Highlights blind spots. Suggests routes. It doesn’t act on your behalf—but it gives you confidence to act yourself.
It earns trust not through charm, but through credibility and clarity. The more transparent the thinking, the stronger the relationship.
Offers suggestions, advice, next steps
Remembers goals, adapts to user
Best for: Financial tools, learning platforms, onboarding
Interface: Smart prompts, goal-oriented flows, explainable reasoning, task and feedback loops
UX risk: Advice falls flat until it earns trust and proves credibility
4. The Operator
“It runs tasks on your behalf—with oversight.”
Here, the product becomes a quiet executor. You give it parameters—rules, triggers, conditions—and it handles the rest. Think automation flows, scheduled reports, recurring actions.
What matters here is trust through transparency. The operator should never feel like a black box. You need logs. Feedback. Proof of life. You’re not collaborating—you’re delegating. But that only works if you believe it will do exactly as you asked, and nothing else.
User configures rules, AI executes
Shows logs, progress, and results
Best for: Automation, operations, orchestration
Interface: Command panels, dashboards, stateful views, workflow builders, timely updates and confirmations
UX risk: Feels like a black box without transparency, guardrails, and override paths
5. The Companion
“It learns with you, stays in the background, nudges gently.”
This is the ambient layer—the product that nudges, reminds, and softly intervenes when you need support. You don’t open it to get work done. It finds you when it matters.
A Companion is subtle. It notices patterns, anticipates needs, adapts over time. Think wellness tools, productivity nudges, or lightweight AI layers that enrich existing flows.
When done well, this model creates stickiness and emotional resonance. But it has to tread carefully—too present, and it becomes clingy. Too invisible, and it loses relevance.
Ambient, supportive
Offers help before you ask
Best for: Wellness, coaching, productivity, safeguarding
Interface: Notifications, light-touch UI, contextual nudges
UX risk: Can feel vague or overly familiar; highly sensitive to context
6. The Librarian
“Ask anything. It finds and summarises.”
Simple, search-oriented, and reassuring. The Librarian doesn’t guess or suggest—it retrieves, compares, and summarises. This model works well for knowledge work, data tools, or customer support.
A good librarian doesn’t just give you facts—it helps you understand. It clusters. It highlights relevance. It shows its sources.
This can be a familiar model for many users—like a smarter search engine. But the more your product shifts from retrieval to recommendation, the more it needs to evolve into something more opinionated—like a Guide or Teammate.
Retrieval-led, low judgement
May cluster, compare, or contextualise
Best for: Knowledge work, search, exploration
Interface: Enhanced search fields, chunked results, follow-ups, annotations
UX risk: Underwhelming if too passive or shallow
Not Just a Metaphor—A Contract
Each mental model comes with its own contract.
A Tool won’t interrupt you.
A Guide won’t act without your input.
A Companion might.
An Operator will—with your blessing.
A Teammate offers, revises, refines.
A Librarian fetches, never freestyles.
If your product breaks that contract—even once—you don’t just confuse the user. You compromise trust.
That’s why choosing the right mental model isn’t just an interface decision. It’s an alignment decision—between behaviour, interface, expectation, and trust.
And once you choose it, everything else follows.
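One way to make the contract tangible is to write it down as a design artifact the team can argue about, not just a metaphor in a deck. Here is a minimal sketch in TypeScript; the field names are hypothetical, and the values simply restate the promises above.

```typescript
// A hypothetical "contract" per mental model, capturing the behavioural
// promises described above. Field names are illustrative, not a standard.
type Initiative = "never" | "on-request" | "proactive";

interface ModelContract {
  interrupts: boolean;        // may it surface things unprompted?
  actsOnYourBehalf: boolean;  // may it execute without a per-action confirm?
  offersOpinions: boolean;    // does it recommend, or only retrieve/execute?
  initiative: Initiative;
}

const contracts: Record<string, ModelContract> = {
  tool:      { interrupts: false, actsOnYourBehalf: false, offersOpinions: false, initiative: "never" },
  teammate:  { interrupts: false, actsOnYourBehalf: false, offersOpinions: true,  initiative: "on-request" },
  guide:     { interrupts: false, actsOnYourBehalf: false, offersOpinions: true,  initiative: "on-request" },
  operator:  { interrupts: true,  actsOnYourBehalf: true,  offersOpinions: false, initiative: "proactive" },
  companion: { interrupts: true,  actsOnYourBehalf: false, offersOpinions: true,  initiative: "proactive" },
  librarian: { interrupts: false, actsOnYourBehalf: false, offersOpinions: false, initiative: "never" },
};
```

If a Librarian starts editorialising, or a Tool starts interrupting, that isn’t a new feature. It’s a broken contract.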
Reinforcing the Mental Model
Once you choose a model—Tool, Teammate, Guide, Operator, Companion, Librarian—everything else flows from it:
Tone and language: Is it warm and supportive? Expert and economical? Quietly observant?
Initiative: Does it wait patiently, or offer help before being asked?
Memory: What does it remember, and how visibly does it show its recall?
UI patterns: Is it a chat interface? A side-by-side canvas? A background process with a visible trail?
Boundaries: Can it act on your behalf? Or is it always waiting for permission?
When these signals align, the experience feels inevitable. When they don’t, the product becomes uncanny—smart, maybe, but hard to trust.
Mental models aren’t theoretical. They live in your interface. And every pattern you choose either reinforces the relationship—or undermines it.
UI Patterns That Make It Real
Mental models aren’t abstract. They show up in layout. In tone. In the timing of a nudge. In whether the system waits—or acts.
When your product picks a role, that decision pulls everything else into alignment. It shapes interaction architecture. It defines trust boundaries. It governs what kinds of surprises are acceptable.
Here’s what that can look like when the mental model gets expressed in the interface.
The Tool
You lead. It executes. No ambiguity.
Uses intelligence under the hood—classification, extraction, formatting, language rewriting—but exposes it as discrete, deterministic functions. The user is fully in control. The system never makes assumptions.
Interaction patterns:
One-click triggers: “Rephrase,” “Tag,” “Sort,” “Summarise”
Inline controls: Drop-downs, toggles, modals with pre-filled options
Stateless interactions: Each one stands alone
Undo + override: Reinforces control and reversibility
Examples in AI-native products:
Rewriting a sentence to match tone or reading level
Applying an AI-generated label to a transaction
Normalising inconsistent vendor names in a ledger
Summarising a document on-demand (not automatically)
Using natural language to filter a list: “Show invoices over $5,000”
Signals to reinforce:
Instant output, no hesitation
Keep results legible and editable
Avoid “helpful” guesses—just do what was asked
Don’t store or act on context beyond the immediate input
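As a sketch of what stateless, reversible, and assumption-free might look like in code: one input, one output, an explicit undo, and nothing remembered afterwards. The names here are hypothetical, and rewriteTone is a stand-in for whatever model call your product actually makes.

```typescript
// A hypothetical one-click Tool action: stateless, reversible, no hidden context.
interface ToolResult {
  original: string;    // kept so the user can always undo
  output: string;
  undo: () => string;  // restores exactly what was there before
}

// Stand-in for a model call; in a real product this would call your AI service.
async function rewriteTone(text: string, tone: "formal" | "friendly"): Promise<string> {
  return `[${tone}] ${text}`; // placeholder transformation
}

async function runRephrase(text: string, tone: "formal" | "friendly"): Promise<ToolResult> {
  const output = await rewriteTone(text, tone);
  // No memory, no follow-up suggestions: the result is returned and the tool forgets.
  return { original: text, output, undo: () => text };
}
```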
The Teammate
The system co-creates, but the user steers.
Contributes ideas, drafts, or alternatives—but never takes the wheel. It’s best suited for open-ended, creative, or strategic work where exploration matters.
Interaction patterns:
Side-by-side editing panels: AI proposes, user refines
Suggested actions: “Improve tone,” “Add a summary,” “Expand this section”
Regeneration cycles: Quick “Try again” or “Refine” flows
Editable drafts: Always modifiable, never final without confirmation
Examples in AI-native products:
Drafting a business update based on recent activity
Helping founders write customer outreach emails
Suggesting alternate budget strategies, side by side
Generating a project plan, with editable milestones
Signals to reinforce:
Make the system’s voice distinct, but not dominant
Keep actions transparent—no surprises
Show history of suggestions or edits for traceability
Let users opt into more help, not be overwhelmed by it
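A minimal sketch of the draft/refine loop, using hypothetical types: every proposal is recorded, nothing becomes final without an explicit user action, and the trail of suggestions stays visible.

```typescript
// Hypothetical draft/refine loop: the AI proposes, the user stays in control.
type SuggestionStatus = "proposed" | "accepted" | "edited" | "discarded";

interface Suggestion {
  id: number;
  draft: string;
  status: SuggestionStatus;
}

class DraftSession {
  private history: Suggestion[] = [];
  private nextId = 1;

  propose(draft: string): Suggestion {
    const s: Suggestion = { id: this.nextId++, draft, status: "proposed" };
    this.history.push(s);
    return s;
  }

  // Nothing becomes final without an explicit user action.
  accept(id: number): void { this.setStatus(id, "accepted"); }
  discard(id: number): void { this.setStatus(id, "discarded"); }
  edit(id: number, newDraft: string): void {
    const s = this.history.find(x => x.id === id);
    if (s) { s.draft = newDraft; s.status = "edited"; }
  }

  // Full trail of what was suggested and what the user did with it.
  trail(): readonly Suggestion[] { return this.history; }

  private setStatus(id: number, status: SuggestionStatus): void {
    const s = this.history.find(x => x.id === id);
    if (s) s.status = status;
  }
}
```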
The Guide
Opinionated, but deferential. Knows the terrain, but asks before acting.
Ideal for helping users navigate complexity. It knows the user’s goals, current state, and environment—and can offer intelligent next steps or options. It supports planning and decision-making, not just execution.
Interaction patterns:
Goal-based flows: “Help me manage my cash flow”
Scenario builders: Sliders, toggles, branching logic
Insight panels: Annotated recommendations (“We suggest X because Y”)
Comparisons: Multiple paths shown clearly side-by-side
Examples in AI-native products:
Planning a runway extension strategy
Evaluating cost-saving levers and their impact
Generating “what-if” budget scenarios
Coaching and suggestions to improve update emails
Signals to reinforce:
Clarity of reasoning is critical—show the “why,” not just the “what”
Let users simulate or preview decisions before committing
Use tone that conveys expertise, not authority
Never act on behalf of the user without a confirm step
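One way to keep the “why” attached to the “what” is to carry the reasoning as data rather than burying it in copy. A sketch, with hypothetical field names, of a recommendation the Guide hands back for the user to confirm:

```typescript
// Hypothetical shape for a Guide recommendation: options, reasoning, no auto-apply.
interface Option {
  label: string;           // e.g. "Cut discretionary spend 10%"
  projectedImpact: string; // e.g. "Extends runway ~2 months"
  rationale: string;       // the "because Y" shown next to the suggestion
  tradeoffs: string[];
}

interface Recommendation {
  goal: string;            // e.g. "Extend runway to 12 months"
  options: Option[];       // shown side by side for comparison
  suggested?: number;      // index of the option the Guide leans towards
}

// The Guide never applies anything itself; it hands back a choice to confirm.
function chooseOption(rec: Recommendation, index: number): Option {
  const option = rec.options[index];
  if (!option) throw new Error("No such option");
  return option; // the calling UI asks the user to confirm before acting
}
```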
The Operator
Configured once, acts consistently. Delegation with visibility.
Executes repetitive or rules-based tasks. It shines when the user wants to offload work but still demands control and traceability.
Interaction patterns:
Workflow builders: “If X, then do Y”
Task logs: “What was done, by whom, when”
Approval flows: Triggered by thresholds or flags
Safe defaults + override controls: Guardrails, always an off-switch
Examples in AI-native products:
Reconciling recurring transactions, with audit trail
Auto-tagging expenses over a threshold for review
Triggering alerts when spend patterns change
Executing scheduled payments or reports with validation
Signals to reinforce:
Show system state clearly: What’s in progress, queued, complete
Every action should be reviewable after the fact
Keep tone functional and neutral—this is infrastructure, not conversation
Offer override options without making the user feel exposed
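A sketch of the “if X, then do Y” shape, with hypothetical names: each rule declares its trigger, condition, and action, carries its own off-switch, and every run writes an entry the user can review later.

```typescript
// Hypothetical Operator rule: configured once, every run is logged and reviewable.
interface Rule {
  id: string;
  trigger: "new_expense" | "scheduled" | "threshold_breached";
  condition: (amount: number) => boolean; // e.g. amount > 5000
  action: "flag_for_review" | "auto_tag" | "send_alert";
  enabled: boolean; // the always-available off switch
}

interface LogEntry {
  ruleId: string;
  action: string;
  at: Date;
  detail: string;
  reviewed: boolean; // every action is reviewable after the fact
}

const auditLog: LogEntry[] = [];

function runRule(rule: Rule, amount: number): void {
  if (!rule.enabled || !rule.condition(amount)) return;
  auditLog.push({
    ruleId: rule.id,
    action: rule.action,
    at: new Date(),
    detail: `Amount ${amount} matched rule ${rule.id}`,
    reviewed: false,
  });
}
```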
The Companion
Ambient, adaptive, emotionally intelligent.
Stays present in the background, learns over time, and intervenes gently when it spots something worth surfacing.
Interaction patterns:
Subtle UI presence: A sidebar, badge, or tray—not a full takeover
Contextual nudges: “Looks like you missed a repayment reminder”
Micro-suggestions: In-context cues or highlights
Passive tracking: Adapts based on time, frequency, or task completion
Examples in AI-native products:
Weekly digests, tailored to what’s changed
Suggesting a new reward or offer based on past redemptions
Reminding a user of an unfinished task at the right time of day
Flagging a passage of text for tone of voice in an email
Signals to reinforce:
Don’t demand interaction—invite it
Let the user dismiss or snooze suggestions without consequence
Be encouraging, not corrective. No scolding. No pressure.
The goal is earned trust over time, not instant engagement
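A sketch of a nudge as data, with dismiss and snooze treated as first-class outcomes rather than failure states. The names are hypothetical.

```typescript
// Hypothetical Companion nudge: easy to ignore, never punished for being ignored.
interface Nudge {
  message: string;            // e.g. "Looks like you missed a repayment reminder"
  relevantUntil: Date;        // stale nudges quietly expire
  priority: "low" | "medium"; // a Companion rarely shouts
}

type NudgeResponse = "acted" | "dismissed" | "snoozed";

function respondToNudge(nudge: Nudge, response: NudgeResponse): Date | null {
  // Stale nudges simply expire; snoozing reschedules; dismissing carries no consequence.
  if (new Date() > nudge.relevantUntil) return null;
  if (response === "snoozed") {
    return new Date(Date.now() + 24 * 60 * 60 * 1000); // try again tomorrow
  }
  return null;
}
```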
The Librarian
Retrieve, organise, explain—never opine.
Efficient, unbiased, focused on clarity. It helps users explore, find, or compare information—but never editorialises.
Interaction patterns:
Smart search with autocomplete and filters
Chunked answers with expandable context
Source referencing: Annotations and notes, “This came from X, last updated Y”
Follow-up prompts: “Would you like to compare this to last year?”
Examples in AI-native products:
Searching past spend by vendor or category
Summarising support documentation or legal clauses
Highlighting policy changes across documents
Extracting key points from customer feedback transcripts
Signals to reinforce:
Avoid tone shifts—it should feel neutral and consistent
Let users dig deeper at their own pace
Clarify the system’s scope—what it knows, and what it doesn’t
If unsure, say so. Confidence is contextual, not assumed
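A sketch of an answer that carries its sources and its confidence with it, so the interface can show where a summary came from and admit what it couldn’t find. Field names are hypothetical.

```typescript
// Hypothetical Librarian answer: every claim points back to a source.
interface Source {
  title: string;
  location: string;     // document, URL, or record id
  lastUpdated: string;  // "This came from X, last updated Y"
}

interface AnswerChunk {
  summary: string;
  sources: Source[];
  confidence: "high" | "partial" | "not_found"; // if unsure, say so
  followUps: string[];  // e.g. "Would you like to compare this to last year?"
}

function renderChunk(chunk: AnswerChunk): string {
  if (chunk.confidence === "not_found") {
    return "I couldn't find this in the sources I have access to.";
  }
  const cites = chunk.sources.map(s => `${s.title} (updated ${s.lastUpdated})`).join("; ");
  return `${chunk.summary}\nSources: ${cites}`;
}
```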
Blended Models, Blended Patterns
Most real-world AI products don’t commit to a single mental model—and that’s okay. They flex.
A financial platform might be:
A Teammate when helping you write your investor update
An Operator when reconciling expenses at the end of the month
A Guide when helping you explore cost-cutting strategies
And a Companion when nudging you to follow up on a flagged transaction
The key is not to pretend it’s always the same. The key is to signal the shift clearly:
Change the layout
Shift the tone
Adjust the tempo
Use distinct UI components and interaction patterns for each mode
Don’t let users fall into a new mode by accident—invite them in, like switching from camera to video. One mental model at a time, clearly marked. Predictable transitions. No secret doors.
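One way to keep those transitions predictable is to make the active mode an explicit piece of state the interface switches on, rather than something inferred mid-flow. A small sketch, assuming a hypothetical ProductMode union:

```typescript
// Hypothetical explicit mode switch: the user is invited in, never ambushed.
type ProductMode = "teammate" | "operator" | "guide" | "companion";

interface ModeChange {
  from: ProductMode;
  to: ProductMode;
  announced: boolean; // the UI visibly marks the switch (layout, tone, tempo)
}

function switchMode(current: ProductMode, next: ProductMode): ModeChange {
  // Every transition is deliberate and signalled; there are no secret doors.
  return { from: current, to: next, announced: true };
}
```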
Wrapping Up
When we design AI products, we’re not just choosing what the system can do.
We’re choosing how it shows up.
Is it a tool you command?
A teammate who drafts?
A guide who knows the road ahead?
A background operator you trust to run the show?
That choice—the mental model—is foundational. It’s what helps users know what to expect, how to interact, when to trust, and when to take the wheel.
Get the model wrong, and the product feels confusing or overconfident.
Get it right, and the product doesn’t just make sense—it feels inevitable.
So we don’t start with features.
We don’t start with capabilities.
We start with a question:
What kind of relationship are we designing?
And once we know that—everything else starts to fall into place.
When you choose the right mental model:
You don’t need to explain every feature—users already know how to think about it.
You don’t have to over-engineer trust—trust grows naturally when behaviour matches expectations.
You don’t have to make AI feel human—just helpful, reliable, and appropriately present.
When that relationship is clear—when the product knows its role and plays it well—users don’t just understand it. They believe in it.
Signal Path
AI is reshaping the way we design—our products, our tools, our jobs. Signal Path is a weekly exploration of the challenges and opportunities in making AI products intuitive, trustworthy, and a little more human. Written by Andrew Sims, a design leader working in Fintech, it’s for designers, product thinkers, and technologists grappling with AI’s impact on their work and the products they make. I’m not claiming to have all the answers—this is a way to think through ideas, articulate challenges, and learn as I go. Thank you for joining me as I navigate this path.