Beta version — active testing in progress

Research operating system for autonomous product teams

UserTold.ai turns real user interviews into evidence your agent can ship against.

Launch interviews inside your product, capture screen and voice, extract reviewable signals, and route evidence-backed work into GitHub, Linear, MCP, or the CLI.

Start Interviewing
$1 per interview · BYOK inference · GitHub, Linear, MCP, CLI

Live interview session

One conversation becomes one shipping decision

Completed and extracted

Transcript moment

"I tried this flow three times and still cannot find where to change billing settings."

/settings/billing · 00:42

Extracted signal

Struggling moment with strong confidence and source evidence.

Type: Struggling moment
Confidence: 0.91

Task payload

{
  "title": "Clarify billing settings entry point",
  "source": "sig_billing_findability",
  "push": "linear",
  "measure": true
}
After release: Signal rate drops from 42% to 9%

Same path. Same study. Clear proof that the fix worked.

How the loop works

From interview capture to measured product change

The point is not to collect transcripts. The point is to create a repeatable operating loop: capture reality, review evidence, route work, then measure whether the shipped fix changed the user outcome.

01

Design the study

Define what to learn, which participants to recruit, and how the interviewer should adapt across talk, speak, and observe modes.

02

Run the interview

Embed the interviewer inside your product so sessions capture the real screen, voice, transcript, and workflow context.

03

Extract reviewable evidence

Convert raw sessions into signals with source quotes, confidence, page paths, and enough context for a human or agent to verify.

04

Route into work

Push evidence-backed tasks into GitHub or Linear, or let your agent read the same loop directly through MCP and the CLI.

05

Measure the fix

Re-run the same pain point, compare signal rates, and prove the shipped change improved the experience instead of just sounding right.
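The measurement step above can be sketched as a small calculation: take the share of sessions that raised a given signal in the pre-fix run and compare it to the post-fix run. The rates below mirror the billing example on this page (42% dropping to 9%); the session counts and helper functions are illustrative assumptions, not the UserTold.ai API.

```python
# Minimal sketch of the "measure the fix" step: compare the fraction of
# sessions that produced a given signal before and after shipping.

def signal_rate(flagged_sessions: int, total_sessions: int) -> float:
    """Fraction of sessions in a study run that raised the signal."""
    if total_sessions == 0:
        raise ValueError("study run has no sessions")
    return flagged_sessions / total_sessions

def rate_drop(pre: float, post: float) -> float:
    """Absolute drop in signal rate between pre-fix and post-fix runs."""
    return pre - post

# Hypothetical session counts that reproduce the page's 42% -> 9% example.
pre_fix = signal_rate(flagged_sessions=10, total_sessions=24)
post_fix = signal_rate(flagged_sessions=2, total_sessions=22)
print(f"billing findability: {pre_fix:.0%} -> {post_fix:.0%} "
      f"(drop of {rate_drop(pre_fix, post_fix):.0%})")
```

Because both runs use the same study and the same page path, the comparison is apples to apples: a lower post-fix rate is direct evidence the change worked.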

$ usertold auth whoami --json
$ usertold init --org <your_personal_org_handle> --name "My Product" --format json --yes

$ usertold overview --format json
{
  "sessions": 24,
  "signals": 61,
  "top_problem": "Billing settings findability",
  "recommended_action": "Create task from clustered evidence",
  "measurement_ready": true
}
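An agent consuming that `usertold overview --format json` output might branch on it like this. The field names (`measurement_ready`, `recommended_action`, `top_problem`) are copied from the example payload above; the helper function itself is an illustrative sketch, not part of the UserTold.ai CLI.

```python
import json

def next_step(overview_json: str) -> str:
    """Pick the next action from an overview payload (field names
    taken from the example output on this page)."""
    overview = json.loads(overview_json)
    if overview.get("measurement_ready"):
        return f'{overview["recommended_action"]} for "{overview["top_problem"]}"'
    return "keep collecting sessions"

payload = """{
  "sessions": 24,
  "signals": 61,
  "top_problem": "Billing settings findability",
  "recommended_action": "Create task from clustered evidence",
  "measurement_ready": true
}"""
print(next_step(payload))
```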

Concrete proof

Review the evidence before your agent writes the task

UserTold.ai is built so the quote, the source, the confidence, and the resulting work item stay connected. That makes the output useful to both humans and agents instead of collapsing into vague summaries.

Evidence packet

“I expected billing to live under account settings, not workspace settings.”

Session replay, transcript context, and the exact page path all stay attached to the signal so reviewers can validate the interpretation.

Session replay · Page path · Behavioral context · Task link

Issue routing

Push the evidence into GitHub or Linear with enough structure for an agent to understand why the task exists.

Verification

Re-run the same study after shipping and compare signal frequency instead of relying on narrative updates.

Operator visibility

Studies, sessions, and signals live in one workspace so the whole research loop stays reviewable.

Choose the surface

Use UserTold.ai from the product, the dashboard, or the agent loop

The same research system can live inside your product for real interviews, in the dashboard for review, and inside agent workflows for routing and orchestration.

Embed path

Launch an in-product interviewer with the widget and REST API so real users participate in context.

  • Widget embed
  • Screen + voice capture
  • Study + screener control

Agent path

Let your coding or ops agent design studies, trigger sessions, read signals, and create tasks without touching the browser.

  • MCP tools
  • CLI --json output
  • GitHub + Linear routing
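For the agent path, a thin wrapper around the CLI is enough to get structured data into an agent loop. This sketch shells out to `usertold overview --format json` (the command shown earlier on this page) and parses the result; the wrapper function and its fallback behavior are assumptions, not part of the CLI.

```python
import json
import shutil
import subprocess
from typing import Optional

def read_overview() -> Optional[dict]:
    """Run `usertold overview --format json` and return the parsed
    payload, or None if the CLI is not installed on this machine."""
    if shutil.which("usertold") is None:
        return None  # CLI not on PATH; nothing to read
    result = subprocess.run(
        ["usertold", "overview", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    overview = read_overview()
    if overview is None:
        print("usertold CLI not found; install it to run the agent loop")
    else:
        print(f'{overview["signals"]} signals across '
              f'{overview["sessions"]} sessions')
```

The same data is also reachable through MCP tools, so an agent can choose whichever transport fits its runtime.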

Measurement path

Keep the loop closed after shipping by comparing pre-fix and post-fix signal rates inside the same workspace.

  • Signal health tracking
  • Replayable evidence
  • Impact verification

You own the data. Keys stay isolated per project. Security, privacy, and operating terms remain first-class.

Pricing and operating terms

Simple pricing for autonomous systems

Platform pricing stays predictable. Model pricing stays on your own provider account. No markup, no hidden research package, and no separation between interview capture and evidence workflow.

Platform

$1 / interview

Prepaid credit packs starting at $10. Interview orchestration, extraction, routing, dashboard review, and measurement are included.

Inference

BYOK

Bring your own OpenAI key. Your provider account handles inference cost directly, so you keep billing visibility and control.

Common questions

How does pricing work?

Prepaid credits at $1 per interview, starting with a $10 purchase. Inference costs stay on your provider account through BYOK.

Start the loop

Your agent deserves evidence, not guesses.

Set up a project, embed the interviewer, review the extracted signals, and route a real pain point into work in under an hour.