Designing Trust into Agent Interfaces

Rebuilding a conversational AI product to make agent actions visible, reversible, and human-controlled.

Client: Cortex
Type: Product Design
The Challenge

Cortex's AI assistant could take actions on behalf of users—sending emails, scheduling meetings, updating records. But users didn't trust it. They'd check everything twice, defeating the purpose of automation.

The Solution

We introduced an 'action timeline' that showed what the agent did, why, and how to undo it. Instead of asking permission before every action, we made reversibility effortless and history always accessible.

Rules of Thumb

  1. Trust comes from control, not from explanation. Let users undo quickly rather than approve slowly.

  2. Show the agent's reasoning only when users ask for it. Forced transparency becomes noise.

  3. Make the boundary between agent and human action visually obvious.

The Insight

Users don't distrust AI because they don't understand it. They distrust it because they can't predict or correct it. Design for recovery, not for preemptive permission.
