Quickstart

Three steps to your first AI interview.

Fast path

1. Authenticate

npm install -g usertold
usertold auth login

2. Init a project and study

usertold init --name "My Product" --format json --yes

3. Embed the widget and activate

Copy the snippet from the screener's Embed Code section into your product, then activate the screener and study from the dashboard or CLI.

That's it — interviews run automatically when users visit your product.


Get from zero to your first AI interview in under 10 minutes.

Prerequisites

You'll need a UserTold.ai account; the steps below assume you're signed in.

1. Create a Project

After signing in, create a project from the dashboard. A project maps to one product or product area you want to research.

2. Configure API Keys (Optional)

UserTold.ai runs interviews and extracts signals using AI inference. You have two options:

Option A: Bring Your Own Keys (BYOK)

  • Add your OpenAI API key in Project Settings
  • Costs billed directly to your OpenAI account
  • Full control, transparent pricing

Option B: Managed Inference

  • Leave API key blank
  • UserTold.ai provides and manages inference
  • Costs billed monthly as part of your service

Your keys are encrypted at rest and never shared. See our Security page for details.

3. Create a Study

A study defines what the AI should explore. Go to Studies and create one.

Pick a type:

  • JTBD — Jobs-to-be-Done discovery (why users hire/fire your product)
  • Usability — Watch users complete tasks, intervene when they struggle
  • Exploration — Open-ended discovery conversations
  • Custom — Define your own script

Add goals (what you want to learn) and segments (phases of the interview). See Studies for configuration details.

4. Create a Screener

A screener qualifies participants before they enter an interview. Go to Screeners and create one.

Add qualification questions — for example:

  • "How often do you use [product]?" (single choice)
  • "What's your role?" (single choice)

Set rules to auto-qualify or disqualify based on answers. Link your study to the screener so qualified participants enter the right interview.
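As a mental model, a screener is just questions plus qualification rules. The JSON below is a hypothetical sketch of that shape (the field names are assumptions, not UserTold.ai's actual schema); configure real rules in the dashboard.

```shell
# Hypothetical shape only: field names are assumptions, not the real schema.
cat > screener-rules.json <<'EOF'
{
  "questions": [
    {
      "id": "usage_frequency",
      "text": "How often do you use the product?",
      "type": "single_choice",
      "options": ["Daily", "Weekly", "Rarely"]
    }
  ],
  "rules": [
    { "if": { "usage_frequency": ["Daily", "Weekly"] }, "then": "qualify" },
    { "if": { "usage_frequency": ["Rarely"] }, "then": "disqualify" }
  ]
}
EOF
```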

5. Embed the Widget

Add the widget to your product. Open your screener's detail page and copy the embed snippet from the Embed Code section:

<script
  src="https://app.usertold.ai/v1/widget.js"
  data-project-key="ut_pub_YOUR_KEY"
  data-screener-id="your-screener-handle"
></script>

Your project key is in Project Settings under SDK Keys. The screener handle is on the screener detail page — it's the human-readable slug used in URLs and embed code.

The widget appears as a floating button. When users click it, they see the screener, and if qualified, start the interview — all within your product. See Widget Integration for customization options.
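If you template the snippet per environment, a small shell sketch like this fills in your values (the key and handle below are the placeholders from the example above, not real credentials):

```shell
# Substitute your own values into the embed snippet; placeholders are illustrative.
PROJECT_KEY="ut_pub_YOUR_KEY"        # Project Settings -> SDK Keys
SCREENER_ID="your-screener-handle"   # shown on the screener detail page
cat > widget-snippet.html <<EOF
<script
  src="https://app.usertold.ai/v1/widget.js"
  data-project-key="${PROJECT_KEY}"
  data-screener-id="${SCREENER_ID}"
></script>
EOF
```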

6. Run Interviews

Activate your screener and study. Participants who visit your product will see the widget, answer the screener, and if qualified, start an AI-powered interview.

The AI adapts how it engages — from natural conversation to silent observation — based on your study script.

7. Review Results

After each interview completes, UserTold.ai automatically:

  1. Transcribes the recording
  2. Extracts signals (struggling moments, desired outcomes, workarounds, etc.)
  3. Creates and prioritizes tasks
  4. Links signals to tasks as evidence

Where to look:

  • Sessions — transcripts and recordings
  • Signals — extracted insights
  • Tasks — prioritized work items
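If you prefer to script this review, here's a hedged sketch: it assumes a command like `usertold signals list --format json` exists (the subcommand and flag are guesses; check the CLI help for the real interface), and the sample data is made up.

```shell
# Assumed interface: `usertold signals list --format json > signals.json`
# (subcommand and flags are guesses; check `usertold --help` for the real ones).
# The sample below stands in for real output.
cat > signals.json <<'EOF'
[{"type":"struggling_moment","summary":"Search filters reset on back navigation"},
 {"type":"desired_outcome","summary":"Export results without leaving the page"},
 {"type":"struggling_moment","summary":"Login loop after session expiry"}]
EOF
grep -c '"type":"struggling_moment"' signals.json   # prints 2 for this sample
```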

8. Push to Your Tracker & Measure

When a task is ready, UserTold.ai can push it to your issue tracker:

  1. Go to Project Settings and connect your delivery tracker
  2. Go to the task detail page
  3. Click Implement — creates an issue on your configured tracker

After you ship the fix, re-run interviews. UserTold.ai measures whether the pain rate drops — evidence that your fix actually worked.
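The pain-rate comparison itself is simple arithmetic; UserTold.ai computes it for you, but as an illustration with made-up numbers:

```shell
# Made-up numbers, for illustration only: share of interviews that hit the
# struggling moment before vs. after the fix.
before_pain=12; before_total=30
after_pain=3;   after_total=28
awk -v bp="$before_pain" -v bt="$before_total" \
    -v ap="$after_pain"  -v at="$after_total" \
    'BEGIN { printf "pain rate: %.0f%% -> %.0f%%\n", 100*bp/bt, 100*ap/at }'
# prints: pain rate: 40% -> 11%
```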


For Agents: Non-Interactive Setup

All of the above can be automated via CLI with JSON output — designed for autonomous systems.

export USERTOLD_API_KEY="your_api_key"
export ANTHROPIC_API_KEY="your_anthropic_key"
export OPENAI_API_KEY="your_openai_key"

usertold init \
  --name "My Product" \
  --study-title "User Research Study" \
  --format json \
  --yes

This creates a project, study, and screener, and generates an embed snippet, all in a single non-interactive command.
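The JSON output is what makes this scriptable. The shape below is an assumption for illustration; inspect the real output of `usertold init --format json` for the actual schema.

```shell
# Assumed output shape, for illustration; check the real JSON for the schema.
# usertold init --name "My Product" --format json --yes > init.json
cat > init.json <<'EOF'
{"project":{"key":"ut_pub_example"},"screener":{"handle":"my-product-screener"}}
EOF
PROJECT_KEY=$(sed -n 's/.*"key": *"\([^"]*\)".*/\1/p' init.json)
echo "$PROJECT_KEY"   # prints: ut_pub_example
```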

MCP Integration

For Claude, Cursor, and other agents with MCP support:

Use a single MCP JSON-RPC endpoint with project-aware tool calls and resource reads:

  • POST /mcp
  • Auth: connect via OAuth (Authorization Code + PKCE). Do not manually provision bearer headers in normal Claude/Cursor setup.
  • Discovery: /.well-known/openid-configuration and /.well-known/oauth-protected-resource
  • Dynamic client registration: not supported (clients must be pre-registered/allowlisted)

Examples:

  • initialize to discover available tools/prompts/resources
  • tools/list and tools/call for actions like tasks.create_from_signals or studies.review_script
  • resources/list and resources/read for project-scoped context (project://... URIs)
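For a concrete sense of the wire format, these are JSON-RPC 2.0 bodies in the shape MCP clients POST to /mcp. The `protocolVersion` and client name are illustrative; the tool name comes from the example above.

```shell
# Sketch of MCP JSON-RPC request bodies; values are illustrative.
cat > initialize.json <<'EOF'
{"jsonrpc":"2.0","id":1,"method":"initialize",
 "params":{"protocolVersion":"2025-03-26","capabilities":{},
           "clientInfo":{"name":"example-client","version":"0.1"}}}
EOF
cat > tools-call.json <<'EOF'
{"jsonrpc":"2.0","id":2,"method":"tools/call",
 "params":{"name":"studies.review_script","arguments":{}}}
EOF
# Shown only to illustrate the wire format; MCP clients handle OAuth for you,
# per the note above. A raw client would POST like:
# curl -s https://app.usertold.ai/mcp \
#   -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
#   -d @initialize.json
```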

MCP is optional — if you already use the dashboard or REST API, continue using those. MCP is meant to complement those flows for teams that want tighter autonomous loops.

See the API Reference for full endpoint documentation.


Next Steps