Research operating system for autonomous product teams
Launch interviews inside your product, capture screen and voice, extract reviewable signals, and route evidence-backed work into GitHub, Linear, MCP, or the CLI.
Live interview session
Transcript moment
"I tried this flow three times and still cannot find where to change billing settings."
Extracted signal
Struggling moment with strong confidence and source evidence.
Task payload
{
"title": "Clarify billing settings entry point",
"source": "sig_billing_findability",
"push": "linear",
"measure": true
}

How the loop works
The point is not to collect transcripts. The point is to create a repeatable operating loop: capture reality, review evidence, route work, then measure whether the shipped fix changed the user outcome.
Design the study
Define what to learn, which participants to recruit, and how the interviewer should adapt across talk, speak, and observe modes.
Capture in context
Embed the interviewer inside your product so sessions capture the real screen, voice, transcript, and workflow context.
Extract reviewable signals
Convert raw sessions into signals with source quotes, confidence, page paths, and enough context for a human or agent to verify.
Route evidence-backed work
Push evidence-backed tasks into GitHub or Linear, or let your agent read the same loop directly through MCP and the CLI.
Measure the outcome
Re-run the same pain point, compare signal rates, and prove the shipped change improved the experience instead of just sounding right.
$ usertold init --org <your_personal_org_handle> --name "My Product" --format json --yes
$ usertold overview --format json
{
"sessions": 24,
"signals": 61,
"top_problem": "Billing settings findability",
"recommended_action": "Create task from clustered evidence",
"measurement_ready": true
}

Concrete proof
UserTold.ai is built so the quote, the source, the confidence, and the resulting work item stay connected. That makes the output useful to both humans and agents instead of collapsing into vague summaries.
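As an illustration of that connection, a signal record could look something like the following; the field names and shape are illustrative assumptions, not the exact schema:

{
  "id": "sig_billing_findability",
  "quote": "I tried this flow three times and still cannot find where to change billing settings.",
  "page_path": "/settings/workspace/billing",
  "confidence": "high",
  "routed_task": "Clarify billing settings entry point"
}

Because the quote, path, and confidence travel together, the task created from this signal stays auditable back to its source.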
Evidence packet
"I expected billing to live under account settings, not workspace settings."
Session replay, transcript context, and the exact page path all stay attached to the signal so reviewers can validate the interpretation.
Issue routing
Push the evidence into GitHub or Linear with enough structure for an agent to understand why the task exists.
Verification
Re-run the same study after shipping and compare signal frequency instead of relying on narrative updates; a sketch of that comparison follows this list.
Operator visibility
Studies, sessions, and signals live in one workspace so the whole research loop stays reviewable.
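As referenced under Verification, here is a sketch of what a pre-fix versus post-fix comparison could look like from the CLI. The signals subcommand, its flags, the output fields, and every number are placeholders and assumptions, not documented CLI surface:

# hypothetical subcommand and flags; the real CLI surface may differ
$ usertold signals --study billing_findability --compare pre-fix --format json
{
  "signal": "billing_findability",
  "pre_fix_rate": 0.58,
  "post_fix_rate": 0.12,
  "sessions_compared": 24
}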
Choose the surface
The same research system can live inside your product for real interviews, in the dashboard for review, and inside agent workflows for routing and orchestration.
Launch an in-product interviewer with the widget and REST API so real users participate in context.
Let your coding or ops agent design studies, trigger sessions, read signals, and create tasks without touching the browser; a configuration sketch follows this list.
Keep the loop closed after shipping by comparing pre-fix and post-fix signal rates inside the same workspace.
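A minimal sketch of that agent wiring, using the common mcpServers client convention and assuming a stdio entry point like usertold mcp exists (the subcommand is an assumption, not documented):

{
  "mcpServers": {
    "usertold": {
      "command": "usertold",
      "args": ["mcp"]
    }
  }
}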
Pricing and operating terms
Platform pricing stays predictable. Model pricing stays on your own provider account. No markup, no hidden research package, and no separation between interview capture and evidence workflow.
$1 / interview
Prepaid credit packs starting at $10. Interview orchestration, extraction, routing, dashboard review, and measurement are included.
BYOK
Bring your own OpenAI key. Your provider account handles inference cost directly, so you keep billing visibility and control.
Prepaid credits at $1 per interview, starting with a $10 purchase. Inference costs stay on your provider account through BYOK.
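A minimal BYOK sketch, assuming the CLI reads the standard OPENAI_API_KEY environment variable; the actual key-supply mechanism may differ:

# assumption: the CLI picks up the standard OpenAI environment variable
$ export OPENAI_API_KEY=sk-...
$ usertold overview --format json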
Start the loop
Set up a project, embed the interviewer, review the extracted signals, and route a real pain point into work in under an hour.
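One possible first-hour sequence using the documented commands; the commented steps happen in the widget embed and dashboard rather than the CLI:

$ usertold init --org <your_personal_org_handle> --name "My Product" --format json --yes
# embed the in-product interviewer via the widget and REST API
# run a session, then review the extracted signals in the dashboard
$ usertold overview --format json
# route the top clustered problem into GitHub or Linear as an evidence-backed task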