Studies
A study defines how the AI conducts interviews with your participants. It includes research goals, an interview script with segments, and configuration for how the AI should engage.
Creating a Study
Go to Studies → Create Study. Pick a type:
- JTBD — Jobs-to-be-Done discovery. Explores why users hire/fire your product, what they were doing before, and what outcomes they care about.
- Usability — Task-based observation. Gives participants tasks, watches them, and probes when they struggle.
- Exploration — Open-ended conversation. Discovers unknowns without a rigid structure.
- Custom — Build your own methodology.
Script Schema
Every study script must have version: 2, a goals array, and a segments array.
| Field | Required | Type | Notes |
|---|---|---|---|
| version | yes | 2 | Must be exactly 2 |
| goals | yes | array | Array of { id, description } objects |
| segments | yes | array | Segment objects (see below) |
| segments[].id | yes | string | Unique within the script |
| segments[].mode | yes | string | One of talk, speak, or observe |
| segments[].title | yes | string | Display label |
| segments[].speak_text | for speak | string | Spoken text delivered by the AI |
| segments[].talk | recommended for talk | object | { system_prompt?, goals? } |
| segments[].instruction | for observe* | string | Task instruction shown to the participant |
| segments[].conductor_context | for observe* | string | AI-only context for stuck detection |

*Observe segments must include at least one of instruction or conductor_context.
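The rules above can be sketched as a small validator. This is an illustrative Python helper, not part of the product; it only encodes the requirements listed in the table:

```python
def validate_script(script: dict) -> list[str]:
    """Return a list of schema violations (empty list means the script is valid)."""
    errors = []
    if script.get("version") != 2:
        errors.append("version must be exactly 2")
    if not isinstance(script.get("goals"), list):
        errors.append("goals must be an array of { id, description } objects")
    segments = script.get("segments")
    if not isinstance(segments, list):
        return errors + ["segments must be an array"]
    seen_ids = set()
    for i, seg in enumerate(segments):
        sid = seg.get("id")
        if not sid or sid in seen_ids:
            errors.append(f"segments[{i}]: id must be present and unique")
        seen_ids.add(sid)
        mode = seg.get("mode")
        if mode not in ("talk", "speak", "observe"):
            errors.append(f"segments[{i}]: mode must be talk, speak, or observe")
        if "title" not in seg:
            errors.append(f"segments[{i}]: title is required")
        # speak segments must carry the text the AI will deliver
        if mode == "speak" and "speak_text" not in seg:
            errors.append(f"segments[{i}]: speak_text is required for speak segments")
        # observe segments need at least one of instruction / conductor_context
        if mode == "observe" and not ("instruction" in seg or "conductor_context" in seg):
            errors.append(f"segments[{i}]: observe needs instruction or conductor_context")
    return errors
```

Running this against a script before saving it catches the most common mistakes (wrong version, duplicate segment ids, a speak segment with no speak_text).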
Study Script
The script defines the structure of the interview. It has goals (what you want to learn) and segments (phases of the conversation).
Goals
Goals tell the AI what insights to pursue:
```json
{
  "goals": [
    { "id": "g1", "description": "Understand the trigger that made them look for a solution" },
    { "id": "g2", "description": "Identify workarounds they use for scheduling" },
    { "id": "g3", "description": "Learn what would make them switch from their current tool" }
  ]
}
```
Good goals are specific and outcome-oriented. Avoid vague goals like "understand the user experience."
Segments
Segments define the phases of the interview and how the AI should engage in each one:
```json
{
  "version": 2,
  "goals": [
    { "id": "g1", "description": "Understand friction in checkout" }
  ],
  "segments": [
    {
      "id": "intro",
      "mode": "speak",
      "title": "Introduction",
      "speak_text": "Thanks for joining. I'd like you to complete a purchase — just act naturally."
    },
    {
      "id": "observe-checkout",
      "mode": "observe",
      "title": "Complete a purchase",
      "instruction": "Please complete a purchase of any item.",
      "conductor_context": "Participant is completing checkout. Watch for hesitation on payment or shipping steps."
    },
    {
      "id": "debrief",
      "mode": "talk",
      "title": "Discuss what happened",
      "talk": {
        "system_prompt": "Ask about moments of hesitation or confusion during checkout.",
        "goals": ["g1"]
      }
    }
  ]
}
```
Segment Modes
Each segment has a mode that controls how the AI interacts:
| Mode | Behavior | Best For |
|---|---|---|
| talk | Natural voice conversation. The AI asks questions, listens, and follows up. | Discovery, rapport building, debrief |
| speak | The AI gives instructions or prompts, then waits for a response. | Directed questions, task setup |
| observe | The AI watches silently while the participant uses your product, stepping in only when it detects something interesting. | Usability testing, task completion |
The AI can transition between modes within a session — for example, starting in observe mode while the participant tries a task, then switching to talk mode when it notices confusion.
Example Scripts
JTBD Interview
```json
{
  "version": 2,
  "goals": [
    { "id": "g1", "description": "Understand the trigger event" },
    { "id": "g2", "description": "Map the solution evaluation process" },
    { "id": "g3", "description": "Identify hiring and firing criteria" }
  ],
  "segments": [
    {
      "id": "rapport",
      "mode": "talk",
      "title": "Warm-up",
      "talk": { "system_prompt": "Build rapport. Ask about their role and recent context.", "goals": [] }
    },
    {
      "id": "trigger",
      "mode": "talk",
      "title": "What triggered the search",
      "talk": { "system_prompt": "Uncover the specific event that caused them to look for a solution.", "goals": ["g1"] }
    },
    {
      "id": "evaluation",
      "mode": "talk",
      "title": "How they evaluated options",
      "talk": { "system_prompt": "Explore what alternatives they considered and why.", "goals": ["g2"] }
    },
    {
      "id": "decision",
      "mode": "talk",
      "title": "What made them decide",
      "talk": { "system_prompt": "Identify the criteria that drove the final choice.", "goals": ["g3"] }
    },
    {
      "id": "wrapup",
      "mode": "talk",
      "title": "Wrap-up",
      "talk": { "system_prompt": "Confirm key insights and thank them.", "goals": [] }
    }
  ]
}
```
Usability Test
```json
{
  "version": 2,
  "goals": [
    { "id": "g1", "description": "Identify friction points in checkout" },
    { "id": "g2", "description": "Measure task completion confidence" }
  ],
  "segments": [
    {
      "id": "intro",
      "mode": "speak",
      "title": "Explain the task",
      "speak_text": "Thanks for joining. I'd like you to complete a purchase on this site. Just act naturally — there are no wrong answers."
    },
    {
      "id": "task",
      "mode": "observe",
      "title": "Complete checkout",
      "instruction": "Please add an item to your cart and complete the checkout process.",
      "conductor_context": "Participant is completing checkout. Normal friction: choosing shipping options, entering payment. Flag if they abandon the cart or visibly hesitate for >20s."
    },
    {
      "id": "debrief",
      "mode": "talk",
      "title": "Discuss experience",
      "talk": { "system_prompt": "Ask about moments of hesitation or confusion during checkout.", "goals": ["g1", "g2"] }
    }
  ]
}
```
Mixed Methodology
```json
{
  "version": 2,
  "goals": [
    { "id": "g1", "description": "Understand daily workflow" },
    { "id": "g2", "description": "Spot friction in key tasks" }
  ],
  "segments": [
    {
      "id": "context",
      "mode": "talk",
      "title": "Background & context",
      "talk": { "system_prompt": "Ask about their role, tools they use daily, and a recent example of the target workflow.", "goals": ["g1"] }
    },
    {
      "id": "demo",
      "mode": "observe",
      "title": "Show me how you do X",
      "instruction": "Please walk me through how you normally handle [this task] in your day-to-day work.",
      "conductor_context": "Participant is demonstrating their current workflow. Watch for tool-switching, workarounds, or moments of visible friction."
    },
    {
      "id": "probe",
      "mode": "talk",
      "title": "Why did you do it that way?",
      "talk": { "system_prompt": "Probe the specific choices made during the demo. Why that tool? Why that sequence?", "goals": ["g2"] }
    },
    {
      "id": "task",
      "mode": "observe",
      "title": "Try the new flow",
      "instruction": "Now try the same task using [the new feature].",
      "conductor_context": "Participant is trying the new feature. Compare to their existing workflow from the demo."
    },
    {
      "id": "compare",
      "mode": "talk",
      "title": "Compare old vs new",
      "talk": { "system_prompt": "Ask how the new flow compares to their current approach. What would they keep or change?", "goals": ["g1", "g2"] }
    }
  ]
}
```
Linking a Screener
A study can be linked to a screener so that qualified participants automatically enter the right interview. Set the screener_id when creating the study, or link it from the screener settings.
When linked:
- Qualified screener respondents are auto-assigned to the study
- The widget loads the study script after qualification
- Session tracking connects the screener response to the interview
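If you manage studies as JSON, linking is a matter of setting the screener_id field on the study. The fragment below is purely illustrative — the surrounding field names and the id value are hypothetical, shown only to situate screener_id:

```json
{
  "name": "Checkout friction study",
  "screener_id": "scr_123",
  "script": { "version": 2, "goals": [], "segments": [] }
}
```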
Settings
Additional study configuration:
| Setting | Description |
|---|---|
| allowed_selectors | CSS selectors for elements the widget can interact with during observation |
| allowed_origins | URL origins where the widget is allowed to run |
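For example, these settings might look like the following — the selector and origin values are purely illustrative:

```json
{
  "allowed_selectors": ["#checkout-form", ".add-to-cart", "nav a"],
  "allowed_origins": ["https://shop.example.com"]
}
```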
Tips
- Start simple. A 3-segment script (intro → core → wrapup) works well for most research.
- Use observation for usability. Don't ask users to describe their experience — watch them have it.
- Keep goals specific. "Understand why users churn" is better than "understand the user."
- Mix modes for depth. Observe first, then talk about what you saw. The combination produces the richest signals.
- Test your script. Run a session yourself before recruiting participants.
See also
- Study Design Guide — proven patterns for writing effective scripts
- Core Concepts — understand the data model