The Challenge
Cortex's AI assistant could take actions on behalf of users—sending emails, scheduling meetings, updating records. But users didn't trust it. They'd check everything twice, defeating the purpose of automation.


The Solution
We introduced an 'action timeline' that showed what the agent did, why, and how to undo it. Instead of asking permission before every action, we made reversibility effortless and history always accessible.
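
To make this concrete, here is a minimal TypeScript sketch of what a timeline entry might look like. The `TimelineEntry` type, its fields, and the `undoEntry` helper are all hypothetical names chosen for illustration, not Cortex's actual implementation.

```typescript
// A minimal sketch of an action timeline entry, assuming the agent records
// each action with enough context to explain it on demand and reverse it in
// one step. All names here (TimelineEntry, undoEntry) are hypothetical.

type Actor = "agent" | "human";

interface TimelineEntry {
  id: string;
  actor: Actor;                // who performed the action
  summary: string;             // what happened, e.g. "Rescheduled standup to 10:00"
  reasoning: string;           // why the agent did it, surfaced only on request
  occurredAt: Date;
  undo?: () => Promise<void>;  // absent once the action can no longer be reversed
}

// Undo is a first-class operation on the timeline, not a buried setting.
async function undoEntry(entry: TimelineEntry): Promise<boolean> {
  if (!entry.undo) return false; // expired or inherently irreversible
  await entry.undo();
  return true;
}
```

The key design choice is the optional `undo` callback: the UI can show an undo affordance for exactly as long as reversal is possible, which is what makes checking the agent's work cheap instead of mandatory.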

Rules of Thumb
1. Trust comes from control, not from explanation. Let users undo quickly rather than approve slowly.
2. Show the agent's reasoning only when users ask for it. Forced transparency becomes noise.
3. Make the boundary between agent and human action visually obvious (see the sketch after this list).
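
Here is a sketch of how rules 2 and 3 might surface in the UI, reusing the hypothetical `TimelineEntry` type from the earlier sketch: reasoning stays collapsed until the user asks, and every row carries an explicit actor label.

```typescript
// Hypothetical rendering helper: the actor badge makes agent actions
// visually distinct (rule 3), and reasoning is appended only when the
// user explicitly expands it (rule 2).
function renderEntry(entry: TimelineEntry, showReasoning = false): string {
  const badge = entry.actor === "agent" ? "[AGENT]" : "[YOU]";
  const line = `${badge} ${entry.summary}${entry.undo ? " (undoable)" : ""}`;
  return showReasoning ? `${line}\n  why: ${entry.reasoning}` : line;
}
```
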
The Insight
Users don't distrust AI because they don't understand it. They distrust it because they can't predict or correct it. Design for recovery, not for preemptive permission.
Want frameworks you can apply to your own GTM challenges?
Get the GTM Files Pack

