Welcome! This brief survey asks about your experimentation practices and confidence in results. It should take about 5–7 minutes. Please answer based on your recent experience.
How often are experiments part of your work?
- Regularly (monthly or more)
- Occasionally (quarterly)
- Rarely (yearly or less)
- Never
Which platforms or approaches do you use for experimentation? Select all that apply.
- In-house experimentation framework
- Feature flag platform (e.g., LaunchDarkly, Flagsmith)
- Third-party A/B tool (e.g., Optimizely, VWO, AB Tasty)
- SQL/notebooks only (no dedicated tool)
- Dashboarding tool (e.g., internal BI)
- None of the above
In the last 3 months, how many experiments did you meaningfully contribute to?
Setting up a standard A/B test is…
Analyzing a completed experiment is…
How are guardrails handled on your team today?
Which controls are enforced today? Select all that apply.
- Pre-launch checklist
- Blocking deployment on missing instrumentation
- Automated SRM alerting
- Sequential testing / alpha spending
- Max exposure or blast-radius limits
- Quality gates for key metrics
- Post-experiment QA template
- None of the above
What primary decision rule do you use to call results?
- Fixed p-value threshold (e.g., 0.05)
- Bayesian decision rule
- Business threshold / minimum detectable effect
- Case-by-case judgement
- No standard / not sure
Attention check: To confirm you’re paying attention, please select “Occasionally.”
- Never
- Rarely
- Occasionally
- Often
I trust our experiment results to inform product decisions.
What most undermines trust in results today? Please share specifics.
Max 600 chars
How confident do you typically feel acting on an experiment’s outcome?
Rank the biggest blockers to reliable experimentation (top = biggest).
Allocate 100 points to show where effort typically goes for a single experiment.
What one change would most improve our experimentation workflow?
Max 600 chars
Why don’t you currently run experiments? Select all that apply.
- Not enough traffic to test
- Missing instrumentation / metrics
- Tooling is hard to use
- Unclear process or approvals
- Lack of statistical support
- Feature timelines too tight
- We prioritize other methods (e.g., user research)
What would help you start running experiments confidently?
Max 600 chars
What is your primary role?
- Product manager
- Engineer
- Data scientist / analyst
- Designer / UX
- Growth / marketing
- Other
Which team are you primarily part of?
- Core product
- Platform / infrastructure
- Growth / monetization
- Data / analytics
- Other / cross-functional
How many years have you been involved in running or analyzing experiments?
- Less than 1 year
- 1–2 years
- 3–5 years
- 6–9 years
- 10+ years
Where are you primarily located?
- Americas
- EMEA
- APAC
- Prefer not to say
Approximately how many employees are in your company?
- 1–49
- 50–249
- 250–999
- 1,000–4,999
- 5,000+
- Prefer not to say
Anything else you’d like us to know about your experimentation experience?
Max 600 chars
AI Interview: 2 Follow-up Questions on Experimentation UX & Trust
All set—thanks for sharing your perspective.