Core Concepts

UserTold.ai is an agentic user research platform. It conducts AI interviews, extracts pain point evidence, prioritizes work, pushes issues to your tracker — then re-interviews to measure if the pain dropped. Evidence-driven development, closed loop.

The Loop

Most research ends with a report. UserTold.ai closes the loop:

  1. Interview — AI talks to your users where they already are (embedded in your product)
  2. Extract — Pain points are automatically pulled from every conversation as structured evidence
  3. Prioritize — Pain points cluster into issues, ranked by frequency and intensity
  4. Act — Issues appear in GitHub Issues or Linear with user quotes as evidence
  5. Measure — Re-interview to see if the pain rate drops after you ship your fix

Each piece feeds the next. No manual transcription, no lost insights, no guessing if your fix worked.


Projects

A project maps to one product (or product area) you want to research. Everything lives under a project: sessions, signals, tasks, screeners, and studies.

Each project has:

  • A public key (ut_pub_...) for embedding the widget
  • A secret key (ut_sec_...) for server-side API access
  • An optional tracker integration (GitHub or Linear) for pushing tasks into the delivery loop
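The two keys split client and server roles. A minimal sketch of that split, assuming a hypothetical API base URL and endpoint path (these are illustrative, not documented UserTold.ai API details):

```typescript
// Hypothetical sketch: the base URL, endpoint path, and auth header are
// assumptions, not the documented UserTold.ai API.
const UT_API = "https://api.usertold.ai/v1"; // assumed base URL

// Keys are distinguished by prefix: public keys embed in the widget,
// secret keys stay server-side.
function keyKind(key: string): "public" | "secret" | "unknown" {
  if (key.startsWith("ut_pub_")) return "public";
  if (key.startsWith("ut_sec_")) return "secret";
  return "unknown";
}

// Server-side call authenticated with the secret key (ut_sec_...).
// Never ship the secret key to the browser; the widget uses the public key.
async function listSessions(secretKey: string): Promise<unknown> {
  if (keyKind(secretKey) !== "secret") throw new Error("need a ut_sec_ key");
  const res = await fetch(`${UT_API}/sessions`, {
    headers: { Authorization: `Bearer ${secretKey}` },
  });
  if (!res.ok) throw new Error(`UserTold API error: ${res.status}`);
  return res.json();
}
```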

Screeners

A screener is a qualification funnel. Before someone enters an interview, the screener asks a few questions to make sure they're the right participant.

Screeners support:

  • Multiple question types (text, choice, number, rating)
  • Qualification rules (automatically qualify or disqualify based on answers)
  • Capacity limits (stop after N qualified participants)
  • Consent collection
  • Custom branding (color, welcome message, thank-you message)

When a participant qualifies, a session is automatically created and the interview begins.
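The pieces above (question types, qualification rules, capacity, consent) can be sketched as a small config object. The field names here are illustrative, not the actual UserTold.ai screener schema:

```typescript
// Illustrative screener definition; field names are assumptions,
// not the real UserTold.ai schema.
type Question =
  | { id: string; type: "text" | "number" }
  | { id: string; type: "choice"; options: string[] }
  | { id: string; type: "rating"; max: number };

interface Screener {
  questions: Question[];
  // Qualification rule: a participant qualifies when this predicate holds.
  qualify: (answers: Record<string, string | number>) => boolean;
  capacity: number;        // stop after N qualified participants
  consentRequired: boolean;
}

const checkoutScreener: Screener = {
  questions: [
    { id: "role", type: "choice", options: ["shopper", "admin"] },
    { id: "purchases_per_month", type: "number" },
  ],
  qualify: (a) => a.role === "shopper" && Number(a.purchases_per_month) >= 1,
  capacity: 20,
  consentRequired: true,
};
```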

Studies

A study is the interview script — what the AI should explore with participants. Studies define:

  • Goals — what you want to learn (e.g., "Understand why users abandon checkout")
  • Segments — phases of the interview, each with a different interaction style
  • Type — the research methodology (JTBD, usability, exploration, or custom)
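Putting goals, segments, and type together, a study definition might look like the following sketch (the field names are assumptions, not the real schema):

```typescript
// Illustrative study definition; names are assumptions, not the real schema.
type StudyType = "jtbd" | "usability" | "exploration" | "custom";
type Mode = "talk" | "speak" | "observe"; // conductor modes, see below

interface Segment { title: string; mode: Mode; prompts: string[] }

interface Study {
  goal: string;        // what you want to learn
  type: StudyType;     // research methodology
  segments: Segment[]; // interview phases, each with its own style
}

const checkoutStudy: Study = {
  goal: "Understand why users abandon checkout",
  type: "jtbd",
  segments: [
    { title: "Warm-up", mode: "talk", prompts: ["Walk me through your last purchase."] },
    { title: "Task", mode: "observe", prompts: ["Try to complete a checkout now."] },
  ],
};
```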

Conductor Modes

Each segment runs in one of three modes. The conductor can also escalate between modes mid-segment when it detects the participant needs more (or less) guidance:

  • Talk (talk) — Full-duplex voice conversation via OpenAI Realtime (WebRTC). The AI asks questions, listens, and follows up in real time. Best for deep discovery and probing.
  • Speak (speak) — The AI delivers spoken guidance via TTS playback while the participant's mic feeds always-on STT transcription. Best for directed tasks and delivering instructions.
  • Observe (observe) — Silent. A text instruction card is shown; the mic still streams to STT so the conductor hears think-aloud. Best for usability testing.


Sessions

A session is one interview with one participant. It captures:

  • Voice recording (transcribed automatically)
  • Screen recording (optional)
  • User interactions (clicks, navigation)
  • Chat messages
  • The full transcript

Sessions move through states: pending → active → completed. After completion, the processing pipeline kicks in automatically.
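The lifecycle can be sketched as a tiny state machine. The transition table below is inferred from this page, not taken from an official API:

```typescript
// Session lifecycle sketch, inferred from the docs (not an official API).
type SessionState = "pending" | "active" | "completed";

const next: Record<SessionState, SessionState | null> = {
  pending: "active",      // participant qualified, interview starts
  active: "completed",    // interview ends
  completed: null,        // terminal: processing pipeline runs after this
};

function advance(s: SessionState): SessionState {
  const n = next[s];
  if (n === null) throw new Error("session already completed");
  return n;
}
```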

Signals

A signal is a meaningful insight extracted from a session. The AI analyzes each completed interview and pulls out:

  Signal Type          What It Means
  Struggling Moment    The user hit friction or confusion
  Desired Outcome      What the user actually wants to achieve
  Hiring Criteria      Why they chose your product (or a competitor)
  Firing Moment        What would make them stop using your product
  Workaround           A hack they use because the product doesn't solve it
  Emotional Response   A strong positive or negative reaction

Each signal is a self-contained evidence card:

  • A direct quote from the participant (the anchor — everything else is context for this)
  • Where it happened — page URL, page title, visible UI element
  • What the user was doing — their goal at that moment and the preceding actions
  • What happened after — did they recover, give up, or find a workaround?
  • A confidence score (how certain the AI is)
  • An intensity score (how strongly expressed)

Signals describe the user's experience — never solutions or implementation direction. They stay in the user's world. Tasks and solutions come later.

Signals are the raw evidence. They feed into tasks.
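As a sketch, an evidence card could be modeled like this. Field names and value ranges are illustrative assumptions, not the platform's data model:

```typescript
// Hypothetical shape of a signal evidence card; fields are illustrative.
type SignalType =
  | "struggling_moment" | "desired_outcome" | "hiring_criteria"
  | "firing_moment" | "workaround" | "emotional_response";

interface Signal {
  type: SignalType;
  quote: string;      // the anchor: a direct participant quote
  pageUrl: string;    // where it happened
  userGoal: string;   // what the user was doing at that moment
  outcome: "recovered" | "gave_up" | "workaround"; // what happened after
  confidence: number; // 0..1, how certain the AI is
  intensity: number;  // 0..1, how strongly expressed
}

const example: Signal = {
  type: "struggling_moment",
  quote: "I couldn't find where to apply my discount code.",
  pageUrl: "/checkout/payment",
  userGoal: "Apply a promo code before paying",
  outcome: "gave_up",
  confidence: 0.9,
  intensity: 0.7,
};
```

Note that the card describes only the user's experience; there is no "proposed fix" field, by design.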

Tasks

A task is a prioritized work item backed by user evidence. Tasks are created automatically by clustering related signals.

Priority is calculated from:

  • Frequency — how many sessions mention this issue
  • Recency — how recently it was mentioned
  • Intensity — how strongly participants expressed it
  • Breadth — how many different user segments are affected
  • Signal type — firing moments weigh more than workarounds
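One plausible way to combine these factors is a weighted score. The formula and weights below are illustrative assumptions, not UserTold.ai's actual ranking model:

```typescript
// Illustrative weighted priority score; the formula and weights are
// assumptions, not UserTold.ai's actual ranking model.
interface TaskEvidence {
  frequency: number;       // sessions mentioning the issue
  recencyDays: number;     // days since the most recent mention
  intensity: number;       // mean expressed intensity, 0..1
  breadth: number;         // distinct user segments affected
  hasFiringMoment: boolean;
}

function priorityScore(t: TaskEvidence): number {
  const recency = 1 / (1 + t.recencyDays / 30);     // decays over ~a month
  const typeWeight = t.hasFiringMoment ? 1.5 : 1.0; // firing moments weigh more
  return (t.frequency + t.breadth) * t.intensity * recency * typeWeight;
}
```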

Tasks move through: backlog → ready → in_progress → done.

Implementations

When a task is ready, UserTold.ai can generate a spec with:

  • User quotes as proof of the problem
  • Acceptance criteria derived from signal types
  • A measurement plan

The spec is pushed to your issue tracker (GitHub Issues or Linear) as a native issue. When you deploy your fix, UserTold.ai can re-interview to measure whether the pain drops — closing the loop.
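The measurement step boils down to comparing pain rates before and after the fix. This counting sketch is an illustration, not the platform's actual metric:

```typescript
// Illustrative pain-rate comparison; not the platform's actual metric.
// A session "hits" the issue if it produced a matching signal.
function painRate(sessionsWithSignal: number, totalSessions: number): number {
  if (totalSessions === 0) return 0;
  return sessionsWithSignal / totalSessions;
}

function painDropped(before: number, after: number): boolean {
  return after < before;
}

// e.g. 12 of 20 pre-fix sessions hit the issue, 3 of 20 post-fix sessions
const pre = painRate(12, 20);  // 0.6
const post = painRate(3, 20);  // 0.15
```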


How It All Connects

Project
├── Screener (qualification)
│   └── qualifies → Session
├── Study (interview script)
│   └── guides → Session
├── Session (one interview)
│   ├── Signals (extracted insights)
│   │   └── linked to → Task
│   └── Recording, transcript, events
├── Task (prioritized work item)
│   └── Delivery tracker issue (GitHub or Linear)
└── Settings (tracker integration, API keys)

See also

  • Quickstart — zero to first interview in 10 minutes
  • Studies — configure interview scripts