# Methodology
UserTold.ai captures behavioral signals from real sessions — what users actually did, said, and struggled with — and structures them into machine-readable evidence your agent can reason over.
## Signal Anatomy
A signal is a structured observation extracted from a session. Each signal links back to a transcript timestamp and session recording.
| Type | Description |
|---|---|
| struggling_moment | User hits friction, fails a task, or expresses confusion. |
| desired_outcome | User states what they actually want to accomplish. |
| workaround | User invents a substitute behavior to work around a gap. |
| context | Background information about the user's environment or habits. |
| other | Notable behavioral or emotional signal that doesn't fit the above types. |
Example quotes:
- struggling_moment: "I tried this three times and still can't find billing settings."
- desired_outcome: "I just want to export this to CSV without all these extra steps."
- workaround: "I usually copy it into a spreadsheet and filter it there."
- context: "We run this process every Monday morning before the team standup."
- other: "They paused for 30 seconds and then refreshed the page."
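The five types above form a closed set, so malformed signals are easy to reject. A minimal sketch of that check (a hypothetical helper for illustration, not part of any UserTold.ai SDK):

```python
# Hypothetical validator -- illustrates the closed set of signal types
# documented above; not part of any UserTold.ai SDK.
SIGNAL_TYPES = {
    "struggling_moment",
    "desired_outcome",
    "workaround",
    "context",
    "other",
}

def validate_signal_type(signal: dict) -> bool:
    """Return True if the signal carries one of the five known types."""
    return signal.get("type") in SIGNAL_TYPES

print(validate_signal_type({"type": "workaround"}))   # True
print(validate_signal_type({"type": "complaint"}))    # False
```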
### Signal JSON

```json
{
  "id": "sig_abc123",
  "type": "struggling_moment",
  "quote": "I tried this flow three times...",
  "confidence": 0.91,
  "session_id": "ses_xyz789",
  "timestamp_ms": 142300
}
```
Every signal is typed JSON with a confidence score. Your agent reads these via `signals.list` (MCP) or `usertold signal list --format json` (CLI).
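As a sketch of what an agent does with that output, the snippet below filters a signal list for high-confidence friction. The field names follow the example signal above; the 0.8 threshold, the second record, and the assumption that the CLI emits a JSON array are mine:

```python
import json

# Example payload shaped like the Signal JSON above; in practice this would
# come from `usertold signal list --format json` (assumed here to emit a
# JSON array of signal objects).
raw = """[
  {"id": "sig_abc123", "type": "struggling_moment",
   "quote": "I tried this flow three times...",
   "confidence": 0.91, "session_id": "ses_xyz789", "timestamp_ms": 142300},
  {"id": "sig_def456", "type": "context",
   "quote": "We run this every Monday.",
   "confidence": 0.60, "session_id": "ses_xyz789", "timestamp_ms": 98000}
]"""

signals = json.loads(raw)

# Keep only high-confidence friction signals (hypothetical threshold).
friction = [s for s in signals
            if s["type"] == "struggling_moment" and s["confidence"] >= 0.8]

for s in friction:
    print(s["id"], s["quote"])
```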
## Study Modes
Studies define the interview structure. Each study is a sequence of segments, and each segment uses one of three modes:
- Talk — Conversational interview. The AI asks questions and follows up. Best for JTBD research and understanding context.
- Observe — Screen + voice recording while the user completes a task. The conductor monitors for stuck moments and intervenes if needed.
- Speak — AI delivers a scripted message or question out loud. Used for transitions, prompts, and guided interventions.
## Study Templates
Three built-in templates get you started:
- JTBD — Five talk segments. Generic JTBD interview structure: hiring context, struggling moment, desired outcome, alternatives considered, decision. Works as-is.
- Usability — Speak intro + observe task + talk debrief. Requires customization: replace [your task] with the actual task description.
- Exploratory — Talk context + observe demo + talk probing. Best for discovery research. Requires replacing [this task] with your focus area.
Create from templates via CLI:
```shell
usertold study create <projectId> --type jtbd --format json
```
## The Evidence Chain
Signals flow through a structured pipeline:
Signal → Task → GitHub Issue → Signal Rate Delta
- Signal: A structured observation from a session. Linked to transcript timestamp and session recording.
- Task: A cluster of related signals grouped by theme. Has title, description, signal links.
- GitHub Issue: Created by `tasks.push`. Body includes signal quotes, confidence scores, and session links.
- Signal Rate Delta: `tasks.measure` compares signal rates before and after deployment. Proof your fix worked.
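To picture the final step: a signal rate delta compares how often a signal fires per session before and after a deploy. The sketch below uses a hypothetical definition (signals per session) and made-up counts; the metric `tasks.measure` actually computes may differ:

```python
def signal_rate(signal_count: int, session_count: int) -> float:
    """Signals per session -- hypothetical definition of 'signal rate'."""
    return signal_count / session_count

# Made-up counts for illustration:
# before the fix: 18 struggling_moment signals across 40 sessions
# after the fix:   4 struggling_moment signals across 50 sessions
before = signal_rate(18, 40)   # 0.45
after = signal_rate(4, 50)     # 0.08
delta = after - before         # negative delta = less friction after deploy
print(f"delta: {delta:+.2f}")
```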
## Signals vs. Surveys
| Factor | Surveys | Signals |
|---|---|---|
| Data quality | Self-reported, recall bias | Behavioral, in-context, verbatim |
| Actionability | "Improve UX" | Specific friction at specific URL |
| Agent-readability | Unstructured free text | Typed JSON with confidence scores |
| Closure loop | No | Signal rate delta after deployment |
## See also
- Core Concepts — the data model behind signals and tasks
- Study Design Guide — apply methodology in practice