Security
UserTold.ai is designed with security at every layer. Here's how we protect your data.
Quick Review Checklist
- Verify BYOK keys are encrypted and scoped per project
- Confirm data deletion paths for sessions and projects
- Review tenant isolation and role-based access controls
- Check webhook signature verification and replay protections
Infrastructure
- Cloudflare Workers — the platform runs on Cloudflare's edge network with built-in DDoS protection, TLS termination, and global distribution
- Cloudflare D1 — database storage with encryption at rest
- No long-lived servers — serverless architecture means no persistent attack surface
Authentication
- Google OAuth — we delegate authentication to Google. We never store passwords.
- JWT tokens — short-lived, signed with RS256. Tokens are bound to specific audiences and issuers.
- HTTP-only cookies — session tokens are stored in HTTP-only, Secure, SameSite=Lax cookies
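As an illustration of what audience- and issuer-bound RS256 verification involves, here is a minimal sketch using Node's standard crypto module. The claim names follow RFC 7519; the specific audience/issuer values and key handling are assumptions, not UserTold's actual implementation.

```typescript
import { createSign, createVerify } from "node:crypto";

const b64url = (b: Buffer) => b.toString("base64url");

// Hypothetical helper: mint a short-lived RS256 JWT (header.payload.signature).
function signJwt(claims: Record<string, unknown>, privateKey: string): string {
  const header = b64url(Buffer.from(JSON.stringify({ alg: "RS256", typ: "JWT" })));
  const payload = b64url(Buffer.from(JSON.stringify(claims)));
  const signer = createSign("RSA-SHA256");
  signer.update(`${header}.${payload}`);
  return `${header}.${payload}.${b64url(signer.sign(privateKey))}`;
}

// Verification rejects a bad signature, a wrong audience or issuer, and an expired token.
function verifyJwt(
  token: string,
  publicKey: string,
  expected: { aud: string; iss: string },
): Record<string, unknown> {
  const [header, payload, sig] = token.split(".");
  const verifier = createVerify("RSA-SHA256");
  verifier.update(`${header}.${payload}`);
  if (!verifier.verify(publicKey, Buffer.from(sig, "base64url"))) {
    throw new Error("invalid signature");
  }
  const claims = JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
  if (claims.aud !== expected.aud) throw new Error("audience mismatch");
  if (claims.iss !== expected.iss) throw new Error("issuer mismatch");
  if (typeof claims.exp !== "number" || claims.exp * 1000 < Date.now()) {
    throw new Error("token expired");
  }
  return claims;
}
```

Binding tokens to an audience and issuer means a token minted for one service cannot be replayed against another, even if both trust the same signing key.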
API Key Security (BYOK)
Your Bring Your Own Key credentials are:
- Encrypted at rest — stored encrypted in our database
- Never logged — API keys are excluded from all logging
- Used in transit only — decrypted only when making API calls to your provider
- Deletable — remove your keys at any time from Project Settings
- Scoped — each key is scoped to a single project
We never share your API keys with third parties. AI inference calls go directly to the provider.
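A minimal sketch of what encrypted-at-rest, decrypted-only-in-transit key handling can look like, using AES-256-GCM from Node's standard library. The storage format and master-key handling below are assumptions for illustration, not UserTold's actual scheme.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Hypothetical storage format: base64(iv).base64(authTag).base64(ciphertext).
// GCM's auth tag means any tampering with the stored record fails decryption.
function encryptKey(plaintext: string, masterKey: Buffer): string {
  const iv = randomBytes(12); // fresh nonce per record
  const cipher = createCipheriv("aes-256-gcm", masterKey, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return [iv, cipher.getAuthTag(), ct].map((b) => b.toString("base64")).join(".");
}

// Decrypted only at the moment an API call to the provider is made.
function decryptKey(stored: string, masterKey: Buffer): string {
  const [iv, tag, ct] = stored.split(".").map((s) => Buffer.from(s, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", masterKey, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```

Deleting a key from Project Settings would then amount to deleting the stored ciphertext record; without it, the plaintext is unrecoverable.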
Data Protection
- TLS everywhere — all data in transit is encrypted with TLS 1.3
- Encryption at rest — database and object storage are encrypted
- Data isolation — project data is isolated by tenant. Cross-project access requires explicit membership.
- No general model training — by default, your data is not used to train internal or third-party AI models for generalized purposes
- Operational use only — interview and signal data is used to run, verify, and improve the Service, not to train generalized AI models
AI Model Use Policy
- By default, UserTold does not use Customer Data (recordings, transcripts, notes, signals, or task history) to train internal or third-party AI models.
- We do use de-identified aggregate platform telemetry to refine processing algorithms, ranking logic, and prompts.
- If your organization needs custom terms for AI model training use or other AI-processing terms, contact support@usertold.ai.
Access Control
- Project-scoped permissions — users can only access projects they are members of
- Role-based access — owner, admin, and member roles with different permission levels
- SDK key separation — public keys (ut_pub_...) have limited permissions (screener access, session creation) and cannot access other projects' data
- Rate limiting — all endpoints are rate-limited to prevent abuse
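The model above can be sketched as a role hierarchy plus a per-prefix scope table. The scope names and the exact role ranking below are illustrative assumptions; only the ut_pub_ prefix and the owner/admin/member roles come from the description above.

```typescript
// Illustrative role ranking: owner > admin > member.
const ROLE_RANK = { member: 0, admin: 1, owner: 2 } as const;
type Role = keyof typeof ROLE_RANK;

function hasAtLeast(role: Role, required: Role): boolean {
  return ROLE_RANK[role] >= ROLE_RANK[required];
}

// Hypothetical scope table: public SDK keys (ut_pub_...) get only
// screener access and session creation, never cross-project reads.
const KEY_SCOPES: Array<{ prefix: string; scopes: string[] }> = [
  { prefix: "ut_pub_", scopes: ["screener:read", "session:create"] },
];

// Unknown prefixes get an empty scope list, i.e. deny by default.
function keyScopes(apiKey: string): string[] {
  const entry = KEY_SCOPES.find((e) => apiKey.startsWith(e.prefix));
  return entry ? entry.scopes : [];
}
```

Deny-by-default matters here: a key that matches no known prefix is granted nothing, rather than falling through to some implicit permission.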
Webhook Security
- HMAC verification — GitHub and Polar webhooks are verified using HMAC-SHA256 signatures
- Replay protection — webhook payloads include timestamps to prevent replay attacks
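A sketch of the two checks combined, using Node's standard library. The sha256=&lt;hex&gt; signature format matches GitHub's X-Hub-Signature-256 convention; the separate timestamp parameter and the 5-minute window are assumptions for the replay check.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

function verifyWebhook(
  payload: string,
  signature: string, // e.g. "sha256=ab12..." from the request headers
  secret: string,
  timestampSec: number, // sender-supplied timestamp covered by the payload
  maxSkewSec = 300,
): boolean {
  // Replay protection: reject payloads whose timestamp is outside the window.
  if (Math.abs(Date.now() / 1000 - timestampSec) > maxSkewSec) return false;

  const expected = "sha256=" + createHmac("sha256", secret).update(payload).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // timingSafeEqual requires equal lengths; constant-time compare avoids timing leaks.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Note the order: the cheap timestamp check runs first, and the signature comparison is constant-time so an attacker cannot learn the correct signature byte by byte.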
Widget Security
- Shadow DOM isolation — the widget renders inside a Shadow DOM, preventing CSS and JavaScript conflicts with host pages
- Origin validation — the widget validates allowed origins before loading
- Interview permissions — the widget requests microphone and screen sharing access for interview capture
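Origin validation can be sketched as an allow-list check against configured project domains. The matching rule below (exact host or any subdomain) is an assumption; the actual policy is whatever the project configures.

```typescript
// Hypothetical allow-list check, run before the widget bootstraps.
// Matching on the parsed hostname (not a substring of the raw origin)
// prevents tricks like "evil-example.com" matching "example.com".
function isAllowedOrigin(origin: string, allowedDomains: string[]): boolean {
  let host: string;
  try {
    host = new URL(origin).hostname;
  } catch {
    return false; // malformed origin: deny
  }
  return allowedDomains.some((d) => host === d || host.endsWith("." + d));
}
```

Parsing the origin first is the important part; naive substring checks against the raw string are a classic source of allow-list bypasses.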
Incident Response
If you discover a security vulnerability, please report it to support@usertold.ai. We take all reports seriously and will respond within 48 hours.
Compliance
- GDPR — we support data portability, right to erasure, and provide a Data Processing Agreement on request
- Data residency — data is processed on Cloudflare's global network. Contact us if you have specific residency requirements.
- Enterprise data terms — contact us for enterprise terms covering data processing scope, including any ML training exceptions.
Questions
Security questions? Email us at support@usertold.ai.