Case Study · AI Product Literacy
Preamble AI.
How AI-native UX research helped an AI policy startup find its story — and ship a product investors would fund.
Client
Preamble AI — AI policy & compliance marketplace for enterprise
Duration
30 days · 3 researchers
§01 · At a Glance
A thirty-day engagement that moved the product from pitch deck to investor term sheet.
Read time · 6 minutes
Published 2026 · FutureProof
Engagement
30 days, end-to-end. From kick-off to final prototype handoff.
Research Depth
10+ enterprise interviews with CIOs, security leaders & compliance officers.
Competitive Scope
15+ rival platforms mapped across safety, governance & AI compliance.
Outcome
3 investor-ready prototypes shipped. Round raised. Product launched.
§02 · The Challenge
A great product nobody could explain.
Preamble had built something genuinely important: a marketplace that lets companies define, deploy, and audit the ethical guardrails around their AI systems. The product worked. The technology was ahead of the market.
The problem was the story. When investors or enterprise buyers sat down with Preamble, they saw a compliance tool — useful, but not exciting. The team knew the platform was more than that, but every attempt to articulate it landed flat. The marketplace looked like every other SaaS product, and the pitch read like legal documentation.
The problem beneath the problem
Traditional UX research wasn't equipped to surface the real issue.
01
Enterprise buyers don't evaluate AI products the way they evaluate SaaS products. The purchase decision isn't about features — it's about whether they can trust the AI system enough to put it near sensitive business decisions.
02
The platform's primary value proposition — accountability, not just capability — was buried inside the product, invisible to anyone who hadn't already bought in.
03
The prototype shown to investors didn't tell the trust story. It showed what the product could do. It didn't show why a buyer could rely on it.
Users don't trust AI products that can't explain their guardrails. Preamble's job wasn't to show investors a marketplace — it was to show them a system of accountability.
FutureProof thesis
§03 · The Approach
Three moves that changed the engagement.
30-day sprint · 3 researchers, moving fast · no decks hiding the work
01
Move · Buyer Research
Re-interview the buyers — as AI buyers, not SaaS buyers.
We ran 10+ structured interviews with CIOs, security leaders, and compliance officers actively deploying AI in the enterprise. The prompts were built to surface how trust gets formed (and broken) during AI procurement — not whether the UI was easy to click.
Trust-formation maps · JTBD for AI procurement · Risk-decision criteria
02
Move · Positioning
Reframe the product from capability to accountability.
We mapped 15+ rival platforms on two axes the market wasn't using yet: visible guardrails and enterprise audit depth. That gave Preamble an uncontested corner — accountability as category — and a new sentence the founder could say out loud without flinching. Three key differentiators emerged: observability, compliance-first design, and developer experience.
Competitive axes · Messaging hierarchy · Investor narrative arc
03
Move · Prototype
Ship three prototypes. Let the trust story sharpen itself.
Instead of one polished artefact, we shipped three iterative prototypes in sequence — each one tested against real enterprise users before the next was built. The final version surfaces guardrails, audit trails, and policy provenance in the first ten seconds of the experience, not in a settings panel.
Paper prototypes v1–v3 · Visible-guardrail UI · Investor demo flow
§04 · AI Product Literacy
What a traditional UX researcher would have missed.
Most UX research treats the product as a neutral tool. The questions are about usability: can users find what they need? Is the flow clear? Are the labels legible?
AI products break that model. When users interact with an AI system, they're not just navigating an interface — they're deciding whether to extend trust to something they can't fully see or control. The questions that matter aren't "can you find the button" but "do you believe the system will behave the way it claims to?"
That insight reshaped everything: the interview guide, the competitive framework, the prototype structure, and the positioning. It only becomes available if you already understand how AI systems create (and destroy) user trust — which is not a general UX skill.
"This is what FutureProof brings to AI engagements. Not just research skills — AI product literacy."
§05 · Impact
The product shipped. Preamble raised their round.
Primary outcome
Shipped.
Platform delivered and live. Not a prototype. Not a pilot. A shipped product.
Business outcome
Funded.
Preamble raised their round post-delivery. The trust story we built is now the investor story.
What the engagement delivered
01
A defensible category position — moved Preamble from "AI safety tool" to "enterprise AI trust infrastructure." A more ownable, higher-value corner of the market.
02
Three named differentiators (observability, compliance-first design, developer experience) that gave the sales team something concrete to lead with in enterprise deals.
03
A buyer-driven feature roadmap built from actual CIO and security leader feedback — reducing the risk of building things that don't drive revenue.
04
A go-to-market playbook: messaging frameworks, sales narratives, and talking points ready for founder pitch meetings and enterprise deal conversations.
05
Three investor-ready prototypes, each tested against real users, each sharpening the trust story before the final version was locked.
§06 · Work With FutureProof
Building an AI product? You need someone who understands both sides.
If you're building an AI product and struggling to tell the story — to investors, to users, or to your own team — that's usually a research and positioning problem, not a design problem.
We start with a $500 AI Readiness Audit. Five days. You'll come out with a clear view of where your story breaks down and what it would take to fix it.