Your agent doesn't understand humans yet.
UserTold.ai is the research layer — a structured pipeline that turns real user behavior into evidence your agent can act on. We run the interviews, extract the signals, and deliver structured JSON your agent reads through MCP or the CLI.
Pain point evidence card
"I tried this flow three times and still cannot find where to change billing settings."
- Signal type: Struggling moment
- Confidence: 0.91
- Action: Linked to issue #42
- Result: Re-tested after release
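Delivered over MCP or the CLI, a card like the one above arrives as structured JSON. A minimal sketch of how an agent might read one — the field names here are illustrative assumptions, not UserTold.ai's documented schema:

```python
import json

# Hypothetical evidence-card payload; field names are invented for
# illustration, not UserTold.ai's documented schema.
card_json = """
{
  "quote": "I tried this flow three times and still cannot find where to change billing settings.",
  "signal_type": "struggling_moment",
  "confidence": 0.91,
  "action": {"tracker": "github", "issue": 42},
  "result": "retested_after_release"
}
"""

card = json.loads(card_json)

# An agent might gate action on confidence before filing work.
if card["confidence"] >= 0.8:
    print(f"High-confidence {card['signal_type']}: issue #{card['action']['issue']}")
```

The point is that every field an agent needs to decide — signal type, confidence, linked issue — is machine-readable, not buried in a transcript.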
The loop your agent runs
You design
Define your research protocol—what to learn, how to interview, which modes (talk, speak, observe).
We interview
UserTold.ai runs the interview: captures screen + voice, STS model speaks with real users, records everything.
We analyze
Auto-transcribe, extract pain point evidence cards, provide source materials for verification.
You act
Evidence arrives via MCP tool calls or CLI JSON output — pushed to GitHub Issues or Linear. Your agent reads it and decides what to build.
We measure
Re-interview on the same pain point. Compare signal rates. Prove your fix worked.
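The measure step comes down to comparing how often the same signal appears before and after a fix. A minimal sketch, with invented session counts:

```python
def signal_rate(sessions_with_signal: int, total_sessions: int) -> float:
    """Fraction of interview sessions that surfaced a given pain point."""
    return sessions_with_signal / total_sessions if total_sessions else 0.0

# Invented numbers: 10 interviews before the fix, 10 after.
before = signal_rate(7, 10)  # 7 of 10 users hit the billing-settings wall
after = signal_rate(1, 10)   # 1 of 10 still struggles after the release

improved = after < before
print(f"rate before={before:.0%}, after={after:.0%}, improved={improved}")
```

A falling signal rate on re-interview is the evidence that the fix landed; a flat one sends the issue back into the loop.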
UserTold.ai CLI — brings human feedback into the agent's decision loop

```
USAGE
  $ usertold <group> <command> [options]

COMMANDS
  init      Interactive project setup wizard
  project   Manage projects (create, list, status, snippet)
  study     Manage studies (create, list, update, export, import)
  session   Manage sessions (list, get, reprocess, transcript)
  signal    Extract and manage signals from sessions
  task      Create and manage evidence-backed issues
  screener  Manage screeners (create, activate, configure)
  config    Configure per-project settings (BYOK keys)
  setup     Provider setup (GitHub)
  overview  Project dashboard overview

OPTIONS
  --format json  Machine-readable output for agents
  --yes          Non-interactive mode (no prompts)
```
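An agent driving this CLI would append the two documented flags to every call and parse stdout as JSON. A sketch of that wrapper — the flags come from the help text above, while the specific subcommand an agent invokes is an assumption:

```python
import json
import subprocess

def usertold(*args: str) -> list[str]:
    """Compose a non-interactive usertold invocation.

    --format json and --yes are the documented agent flags; which
    subcommand to run is up to the caller.
    """
    return ["usertold", *args, "--format", "json", "--yes"]

def run(argv: list[str]) -> dict:
    """Run the CLI and parse its JSON stdout (requires usertold installed)."""
    out = subprocess.run(argv, capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

# e.g. an agent listing sessions without any interactive prompts:
argv = usertold("session", "list")
print(argv)
```

Keeping command composition separate from execution makes the pipeline easy to dry-run and log before the agent acts on results.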
Who deploys this
Agents for Founders
Run 10 interviews before your next sprint. Know which pain point to fix, backed by signal rates, not gut feel.
Autonomous Product Loops
Your agent calls projects.signal_health, runs interviews overnight, creates GitHub issues by morning.
Engineering Agents
Tasks from evidence, not debate. When your agent asks "what should I build?", it has an answer backed by real sessions.
Built for agents. Easy to integrate.
MCP Server
Model Context Protocol — the agent-native interface. Your agent calls tools directly: design studies, trigger sessions, read signals, push tasks. No browser required.
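On the wire, an MCP tool call is a JSON-RPC 2.0 request with the `tools/call` method (per the Model Context Protocol specification). The tool name `projects.signal_health` appears in the copy above; the arguments shown here are invented for illustration:

```python
import json

# Shape of an MCP "tools/call" request (JSON-RPC 2.0, per the Model
# Context Protocol spec). The argument names are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "projects.signal_health",
        "arguments": {"project_id": "proj_123"},  # hypothetical argument
    },
}

print(json.dumps(request, indent=2))
```

Any MCP-capable agent framework can emit this shape, which is why no browser or bespoke integration is needed.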
CLI (Non-Interactive)
Scriptable setup and orchestration, with --format json and --yes flags for autonomous pipelines.
REST API
Full API access with JWT auth for embed and programmatic control.
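JWT auth in practice means sending a bearer token on every request. A sketch in plain stdlib Python — the base URL and route are assumptions, not documented endpoints:

```python
import urllib.request

API_BASE = "https://api.usertold.ai"  # hypothetical base URL
JWT = "eyJ...signed-token"            # token obtained out of band

def authed_request(path: str) -> urllib.request.Request:
    """Build a GET request carrying the JWT as a bearer token."""
    return urllib.request.Request(
        f"{API_BASE}{path}",
        headers={"Authorization": f"Bearer {JWT}", "Accept": "application/json"},
    )

req = authed_request("/v1/signals")  # hypothetical route
print(req.get_header("Authorization"))
```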
Simple pricing for autonomous systems
Platform
$1 / interview
Prepaid credit packs starting at $10 (10 credits). Interview orchestration, signal extraction, issue creation, impact measurement, and dashboard. All included.
Inference
BYOK
Bring your own OpenAI key. Your keys, your bill, no markup. Inference costs go directly to your provider account.
Trust and control
- You own your data and can delete sessions.
- Keys and provider settings are isolated per project.
- Security, privacy, and terms pages are always linked and current.
FAQ
How does pricing work?
Prepaid credits at $1 per interview ($10 minimum purchase = 10 credits). Inference costs go to your API provider account via BYOK. Your keys, your bill, no markup.
Your agent deserves better evidence.
Set up a project, embed the screener, run interviews, and push pain points to your tracker in under an hour.