A three-week engagement that found why legal teams were ignoring the AI — and fixed it.
Read time · 6 minutes · Published 2026 · FutureProof
Engagement
3 wks
End-to-end: research, analysis, redesign, and handoff.
User Interviews
07
legal ops, sales, and procurement professionals interviewed in depth.
Competitors Mapped
05+
contract platforms benchmarked on AI transparency and trust signals.
Outcome
01
persona, messaging framework, and redesigned interface delivered.
§02 · The Challenge
A powerful AI layer nobody was using.
DocJuris had built one of the most sophisticated AI-assisted contract management platforms on the market. The product was technically years ahead of the competition. But legal ops teams were working around the AI recommendations rather than with them — doing manually what the system was designed to do for them.
The team needed to understand why. And fixing it required a different kind of research — one built around how people interact with AI systems, not just software.
What the research uncovered
Three failure modes that only surface when you ask the right questions.
01
Users were overriding AI recommendations without understanding why the system flagged them — not because the AI was wrong, but because it gave no context for its confidence level.
02
There was no feedback loop between user behaviour and AI signals in the interface. When users disagreed with the system, the system didn't know — and kept recommending the same things.
03
The messaging led with features, not with the anxiety that actually drives legal ops purchase decisions: compliance risk, missed clauses, and personal liability for bad contracts.
“Legal professionals are trained to be sceptical of automation. The question isn't ‘can they find the AI feature’; it's ‘do they trust it enough to act on it when it matters most?’”
FutureProof research thesis
§03 · The Approach
Three moves built around AI trust, not just usability.
01
Move · User Interviews
Ask about trust, not task completion.
We conducted 7 in-depth interviews with legal ops, sales, and procurement professionals — structured specifically around how users interact with AI-assisted tools. The questions weren't "can you complete this task" but "when do you trust the system's suggestion, and when do you override it — and what does that feel like?" That framing produced qualitatively different answers.
AI trust mapping · Override behaviour · Risk & anxiety signals
02
Move · Competitive Benchmarking
Map the market on axes nobody was using.
We benchmarked 5+ competing contract management platforms, not on features but on two dimensions most audits ignore: how they communicate AI involvement to users, and how they handle moments of uncertainty or disagreement between the user and the system. The mapping revealed clear whitespace: DocJuris could own the trust-and-transparency corner of the market.
AI transparency audit · Trust signal comparison · Positioning whitespace
03
Move · Redesign & Messaging
Surface the AI at the moment it matters.
We translated the research findings into a redesigned editing experience and a messaging framework for go-to-market teams. The redesign surfaces AI recommendations in context, with confidence signals and plain-language explanations, at the exact moment in the workflow where users face contract risk. The "Sarah from Legal Ops" persona became the anchor for both the interface and the messaging.
Contextual AI signals · Sarah persona · GTM messaging framework
§04 · AI Research Methodology
Why standard UX research fails for AI products.
Standard UX research asks: can users find what they need? That's the wrong question when AI is involved — and asking it produces the wrong insights.
When users interact with an AI-assisted tool, they're not just navigating an interface. They're constantly making micro-decisions about whether to trust the system's judgement over their own. The failure modes are invisible to standard usability research because they don't show up in task completion rates or click paths.
We restructured the entire interview guide around AI-specific dynamics: trust formation, override behaviour, confidence signals, and the moments where anxiety overrides efficiency. Those questions produced findings that a standard usability audit simply cannot surface.
"If your product has an AI layer, your research methodology needs one too. The insights live in the trust gap — not the UI."
§05 · Impact
A platform people actually trust. And a story that explains why.
Research output
Sarah.
A fully developed Legal Ops persona — goals, anxieties, override triggers, and purchase criteria — built from real interview data, not assumptions.
Design output
Trust.
A redesigned interface that surfaces AI recommendations in context, with confidence signals designed to reduce unnecessary overrides and support adoption.
What the engagement delivered
01
A comprehensive strategy document mapping the competitive landscape and DocJuris's defensible position — built around AI transparency rather than feature parity.
02
The "Sarah from Legal Ops" ideal customer persona with clear goals, anxieties, and decision criteria — built from 7 real interviews, not demographic assumptions.
03
A messaging framework that ties product capabilities directly to the moments legal ops professionals feel most at risk in the contract review process.
04
A redesigned editing interface that surfaces AI recommendations in context — showing the why behind each flag, not just the flag itself.
§06 · Work With FutureProof
Built an AI feature your users don't trust?
Most product teams research their AI features the same way they research everything else. That approach misses the thing that matters most: whether users trust the system enough to act on what it tells them.
We start with a $500 AI Readiness Audit. Five days. You'll come out with a clear view of where your trust gap lives and what it would take to close it.