Agentic Loops

UserTold.ai is a structured research pipeline. This page documents the full technical flow — from session setup through signal extraction, task creation, and impact measurement.

1. Interview Session

Session modes

Each study defines a sequence of segments, and each segment has a mode:

  • talk — AI-conducted conversational interview. The conductor asks questions, listens, and follows up based on participant responses.
  • speak — AI delivers a scripted message or prompt out loud (TTS). Used for introductions, transitions, and guided interventions.
  • observe — Screen recording + voice capture while the participant completes a task. The conductor monitors for stuck moments and intervenes if needed.

How a session runs

  1. Participant opens the widget (embedded on your product or via a screener link).
  2. Widget connects to the session endpoint. Conductor Durable Object initializes.
  3. Each segment runs in sequence: mode switches, prompts fire, screen/audio records.
  4. For observe segments, the stuck detection loop evaluates participant state on a timer — combining deterministic checks (timeout, URL change, goal completion) with an LLM call when needed.
  5. Session ends when all segments complete or participant exits.
  6. Session record is written with status completed.
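The deterministic half of the stuck-detection loop (step 4) can be sketched as a pure function. The field names and the 30-second timeout below are illustrative assumptions, not the actual implementation:

```typescript
// Hypothetical sketch of the deterministic checks run on each timer tick
// during an observe segment. Only an ambiguous state escalates to an LLM call.
interface ObserveState {
  msSinceLastEvent: number; // time since last click, keypress, or navigation
  urlChanged: boolean;      // participant navigated since the last tick
  goalReached: boolean;     // the task's success condition was met
}

type StuckVerdict = "not_stuck" | "maybe_stuck" | "done";

function deterministicStuckCheck(state: ObserveState, timeoutMs = 30_000): StuckVerdict {
  if (state.goalReached) return "done";            // task complete, stop observing
  if (state.urlChanged) return "not_stuck";        // participant is still moving
  if (state.msSinceLastEvent < timeoutMs) return "not_stuck";
  return "maybe_stuck";                            // idle past the timeout: ask the LLM
}
```

Running the cheap checks first keeps the loop inexpensive; the LLM is only consulted for the ambiguous "idle but maybe thinking" case.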

STS model

Voice sessions use OpenAI Realtime (GPT-4o Realtime) for speech-to-speech. The system prompt is constructed per segment from the study script. The model converses with the participant, audio is captured, and the transcript is assembled from message events.
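A minimal sketch of per-segment prompt assembly, assuming a segment carries its mode and script. The role instructions and field names here are invented for illustration; the real prompt template is internal to the conductor:

```typescript
// Illustrative per-segment system-prompt construction for the STS model.
interface Segment {
  mode: "talk" | "speak" | "observe";
  script: string; // question list or scripted message for this segment
}

function buildSystemPrompt(studyTitle: string, segment: Segment): string {
  // Hypothetical role instructions per mode.
  const roles: Record<Segment["mode"], string> = {
    talk: "Conduct a conversational interview. Ask, listen, and follow up.",
    speak: "Read the scripted message aloud verbatim, then stop.",
    observe: "Stay silent unless the participant is stuck; then intervene briefly.",
  };
  return [
    `Study: ${studyTitle}`,
    `Role: ${roles[segment.mode]}`,
    `Script:\n${segment.script}`,
  ].join("\n\n");
}
```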

Recording

Audio chunks stream from the browser during the session. Screen recordings are captured and merged server-side after completion. Both are stored in R2 with signed URL access.


2. Signal Extraction

After a session completes, a queue message (signals.extract) triggers the extraction pipeline.

Extraction steps

  1. Transcription — Audio is sent to OpenAI Whisper, which returns the transcript as timed segments.
  2. Enriched timeline — Transcript segments are merged with session events (mode changes, navigation, conductor interventions) into a unified enriched timeline.
  3. Signal extraction — Claude analyzes the transcript against the study protocol. For each signal found, it outputs:
    • type — one of struggling_moment, desired_outcome, workaround, context, other
    • quote — verbatim text from the transcript
    • confidence — float 0–1
    • timestamp_ms — position in the session recording
    • url — page URL at the time (for observe segments)
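Step 2 (the enriched timeline) can be sketched as a merge of two timestamped streams. The field names below are assumptions, not the actual schema:

```typescript
// Merge timed transcript segments with session events into one stream
// ordered by timestamp, giving the extractor a unified view of the session.
interface TimelineEntry {
  at_ms: number;
  kind: "utterance" | "event";
  text: string; // transcript text or event description
}

function buildEnrichedTimeline(
  transcript: { start_ms: number; text: string }[],
  events: { at_ms: number; description: string }[],
): TimelineEntry[] {
  const entries: TimelineEntry[] = [
    ...transcript.map((t) => ({ at_ms: t.start_ms, kind: "utterance" as const, text: t.text })),
    ...events.map((e) => ({ at_ms: e.at_ms, kind: "event" as const, text: e.description })),
  ];
  return entries.sort((a, b) => a.at_ms - b.at_ms); // time-ordered, unified view
}
```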

Signal types

  • struggling_moment — User hits friction, fails a task, or expresses confusion
  • desired_outcome — User states what they want to accomplish
  • workaround — User invents a substitute behavior to work around a gap
  • context — Background about the user's environment or habits
  • other — Notable behavioral or emotional signal that doesn't fit the above

Signal JSON shape

{
  "id": "sig_abc123",
  "type": "struggling_moment",
  "quote": "I tried this flow three times and still cannot find where to change billing settings.",
  "confidence": 0.91,
  "session_id": "ses_xyz789",
  "timestamp_ms": 142300,
  "url": "/checkout/step-3"
}
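The shape above can be expressed as a TypeScript type with a minimal validity check. This is a sketch; the real schema may enforce more:

```typescript
// Type mirroring the signal JSON payload shown above.
type SignalType = "struggling_moment" | "desired_outcome" | "workaround" | "context" | "other";

interface Signal {
  id: string;
  type: SignalType;
  quote: string;        // verbatim text from the transcript
  confidence: number;   // float 0-1
  session_id: string;
  timestamp_ms: number; // position in the session recording
  url?: string;         // only present for observe segments
}

// Minimal sanity check on the numeric fields and the quote.
function isValidSignal(s: Signal): boolean {
  return s.confidence >= 0 && s.confidence <= 1 && s.timestamp_ms >= 0 && s.quote.length > 0;
}
```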

3. Task Creation

Tasks are evidence-backed work items created from signal clusters.

From signals to tasks

usertold task create-from-signals <projectRef> --title "Improve onboarding" --signals sig_1,sig_2 --json

Or via MCP:

tasks.create_from_signals

The task creation process:

  1. Groups related signals by theme (semantic clustering).
  2. Generates a title and description grounded in the signal quotes.
  3. Links back to each contributing signal (and its session).
  4. Stores the task with status: pending.
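The resulting task record might look like this sketch; the helper and its fields are illustrative, and the clustering and LLM title generation from steps 1 and 2 are elided:

```typescript
// Evidence-backed task record as produced by step 4, with links back to
// the contributing signals (step 3). Field names are assumptions.
interface Task {
  id: string;
  title: string;
  description: string;
  signal_ids: string[]; // links back to contributing signals
  status: "pending";
}

function createTask(id: string, title: string, description: string, signalIds: string[]): Task {
  if (signalIds.length === 0) throw new Error("a task must cite at least one signal");
  return {
    id,
    title,
    description,
    signal_ids: [...new Set(signalIds)], // dedupe while preserving order
    status: "pending",
  };
}
```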

Task → Tracker Issue

usertold task push <projectRef> <taskId> --json

Or via MCP:

tasks.push

The push process:

  1. Authenticates with the configured provider.
  2. Creates an issue with a structured body: title, description, signal quotes, confidence scores, session recording links.
  3. Stores the issue URL back on the task record.
  4. Returns the issue URL in the JSON response.
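Step 2's structured issue body could be assembled like this sketch; the markdown layout is an assumption, not the exact format UserTold emits:

```typescript
// Build an issue body from the task description plus the evidence behind it:
// signal quotes, confidence scores, and session recording links.
function formatIssueBody(
  description: string,
  signals: { quote: string; confidence: number; recordingUrl: string }[],
): string {
  const evidence = signals
    .map((s) => `> "${s.quote}"\n> confidence ${s.confidence.toFixed(2)} · recording: ${s.recordingUrl}`)
    .join("\n\n");
  return `${description}\n\n### Evidence\n\n${evidence}`;
}
```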

4. Impact Measurement

After your agent ships a fix, run follow-up interviews on the same topic.

How measurement works

  1. Create a follow-up study targeting the same pain area.
  2. Run new sessions. New signals are extracted.
  3. Call tasks.measure (MCP) or usertold task measure <projectRef> <taskId>.
  4. The system computes:
    • Baseline signal rate — struggling_moment rate from sessions before the fix
    • Current signal rate — struggling_moment rate from sessions after the fix
    • Delta — change in signal rate (current minus baseline)

If delta is negative (signal rate dropped), the fix worked. If positive or flat, the problem persists.
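The computation in step 4 reduces to a few lines. This sketch assumes the rate is struggling_moment signals per session:

```typescript
// struggling_moment rate: signals observed per session.
function signalRate(strugglingSignals: number, sessions: number): number {
  return sessions === 0 ? 0 : strugglingSignals / sessions;
}

// delta = current minus baseline, so delta < 0 means the rate dropped
// after the fix and the fix worked.
function measureImpact(
  baseline: { signals: number; sessions: number },
  current: { signals: number; sessions: number },
) {
  const before = signalRate(baseline.signals, baseline.sessions);
  const after = signalRate(current.signals, current.sessions);
  return { before, after, delta: after - before };
}
```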


5. Full Data Model

Key entities and their relationships:

Project → has many Studies, Sessions, Signals, Tasks

Study → defines the interview protocol (segments, modes, script). Has a template (jtbd, usability, exploration).

Session → one participant interview. Status: pending, in_progress, completed, failed. Linked to a Study.

Signal → extracted observation from a Session. Linked to Session, timestamped, typed.

Task → evidence-backed work item. Linked to one or more Signals. Can be pushed to GitHub Issues or Linear.

ScreenerLink → public entry point that routes participants into a Study.
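One way to render these relationships as TypeScript interfaces. The field names are illustrative, not the actual schema:

```typescript
// Hypothetical typed view of the data model above. Relationships are
// expressed as foreign-key-style id fields.
interface Project { id: string }

interface Study {
  id: string;
  project_id: string;
  template: "jtbd" | "usability" | "exploration";
}

interface Session {
  id: string;
  study_id: string;
  status: "pending" | "in_progress" | "completed" | "failed";
}

interface Signal {
  id: string;
  session_id: string;
  type: string;
  timestamp_ms: number;
}

interface Task {
  id: string;
  signal_ids: string[]; // one or more contributing signals
  issue_url?: string;   // set after a push to GitHub Issues or Linear
}

interface ScreenerLink {
  id: string;
  study_id: string; // routes participants into this study
}
```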


CLI Quick Reference

# Setup
usertold init --org acme --json --yes

# Studies
usertold study list <projectRef> --json
usertold study create <projectRef> --title "JTBD study" --type jtbd --activate --json

# Sessions
usertold session list <projectRef> --json
usertold session reprocess <projectRef> <sessionId>

# Signals
usertold signal list <projectRef> --json

# Tasks
usertold task create-from-signals <projectRef> --title "Improve onboarding" --signals sig_1,sig_2 --json
usertold task push <projectRef> <taskId> --json

MCP Tool Reference

Base endpoint: POST https://mcp.usertold.ai/mcp

Tool domains: studies.*, sessions.*, signals.*, tasks.*, projects.*

Key tools:

  • projects.signal_health — Get signal distribution and planning readiness
  • studies.create — Create a study idempotently from a script
  • signals.list — List signals with filters (type, session, confidence threshold)
  • tasks.create_from_signals — Generate tasks from signal clusters
  • tasks.push — Push task to GitHub Issues or Linear
  • tasks.measure — Compare signal rates before/after deployment
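MCP tools are invoked as JSON-RPC 2.0 `tools/call` requests against the base endpoint. This sketch omits authentication, and the argument names are assumptions:

```typescript
// Build the JSON-RPC 2.0 request body for an MCP tool call.
function mcpRequestBody(tool: string, args: Record<string, unknown>): string {
  return JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "tools/call", // standard MCP tool-invocation method
    params: { name: tool, arguments: args },
  });
}

// POST it to the MCP endpoint (auth headers omitted).
async function callMcpTool(tool: string, args: Record<string, unknown>) {
  const res = await fetch("https://mcp.usertold.ai/mcp", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: mcpRequestBody(tool, args),
  });
  return res.json();
}
```

For example, `callMcpTool("signals.list", { projectRef: "proj_123", type: "struggling_moment" })` would list struggling moments for a project, assuming those argument names match the tool's schema.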
