Experimentation & A/B Testing Maturity Assessment

Assesses experimentation program maturity across culture, process, tooling, governance, and outcomes. Designed for product, growth, and data teams to benchmark capabilities and identify improvement priorities.

Sample questions

A preview of what’s in the template. Every question is editable before you launch.

32 questions · ~10 min
Q01
Long Text

Welcome to the Experimentation & A/B Testing Maturity Assessment. This survey evaluates how your team and organization approach experimentation, covering process, tooling, governance, and outcomes. Your responses will help benchmark maturity and identify areas for improvement.

  • Participation is voluntary and you may stop at any time.

  • There are no right or wrong answers; we are interested in your honest perspective.

  • All responses are confidential and will be reported only in aggregate.

  • Estimated completion time: 10–12 minutes.

Please proceed to begin.

Q02
Multiple Choice

Which function best describes your primary role?

Q03
Long Text

Over the past 6 months, how would you rate the overall rigor of your team's experiment hypotheses?

Q04
Multiple Choice

Which experimentation tools or platforms does your team currently use? (Select all that apply)

Q05
Long Text

When deciding whether to ship a winning variant, rank these factors by importance to your team (most important first).

Q06
Long Text

Overall, how would you rate the maturity of experimentation in your organization today?

Q07
Long Text

What are the biggest blockers or challenges to effective experimentation in your organization right now?

Q08
Long Text

What is your seniority level?

Q09
Long Text

Thank you for completing the Experimentation Maturity Assessment! Your responses will be analyzed in aggregate to produce benchmarking insights. If you opted in, results will be shared with participants once the analysis is complete. If you have any questions, please contact the research team at the email provided in your invitation.

Q10
Long Text

Approximately how many people on your team are directly involved in experimentation?

Q11
Long Text

To what extent does your team document a clear hypothesis for every experiment before launch?

Q12
Multiple Choice

How are experiment datasets integrated with your analytics and data warehouse?

Q13
Multiple Choice

Which risk controls does your team typically apply to experiments? (Select all that apply)

Q14
Long Text

Typically, how many business days elapse between a test ending and a final decision being made?

Q15
AI Interview

Based on your survey responses, we'd like to explore your experimentation challenges and aspirations in a bit more depth.

Q16
Long Text

Approximately how many employees are in your company?

Q17
Multiple Choice

In the last 90 days, approximately how many experiments did your team launch?

Q18
Long Text

To what extent does your team have a clear prioritization framework for deciding which experiments to run?

Q19
Multiple Choice

Do you have a defined and versioned metrics catalog for experiments?

Q20
Multiple Choice

Is there an experimentation council or governance body at your organization?

Q21
Multiple Choice

In the last 6 months, approximately what share of completed experiments led to a production rollout?

Q22
Long Text

Which industry best describes your organization?

Q23
Multiple Choice

What are the primary objectives your experiments target? (Select up to 5)

Q24
Long Text

To what extent are experiment designs and analysis plans peer-reviewed before launch?

Q25
Multiple Choice

How does your team typically determine sample size and test duration?

Q26
Multiple Choice

Where are experiment plans and results typically documented? (Select all that apply)

Q27
Long Text

Where are you primarily based?

Q28
Long Text

To what extent are learnings from experiments shared broadly and used to inform future decisions across teams?

Q29
Long Text

How many years have you worked with experimentation or A/B testing?

Q30
Multiple Choice

Which test or study types does your team run regularly? (Select all that apply)

Q31
Long Text

What is the typical runtime for a single experiment, from launch to decision?

Q32
Long Text

Rank the following phases by where your team spends the most effort in a typical experiment (most effort first).

What’s included

  • AI follow-ups

    Adaptive probes on open-ended answers that pull out detail a static form would miss.

  • Attention checks

    Built-in safeguards against rushed answers and low-quality respondents.

  • AI-drafted copy

    Wording, ordering, and branching written by the AI — tuned to your research goal.

  • Auto report

    Themes, quotes, and a plain-English summary write themselves once responses come in.

Ready to launch?

Open this template in the editor. Every part is yours to change before the first respondent sees it.