
AI Bug Bounty: Scope, Fairness & Incentive Evaluation

An internal stakeholder survey evaluating scope clarity, decision fairness, and incentive effectiveness in your AI bug bounty program over the past 6 months, to guide program improvements.

Sample questions

A preview of what’s in the template. Every question is editable before you launch.

25 questions · ~4 min
Q01
Long Text

Welcome! This brief survey collects your feedback on fairness, scope clarity, and incentive design in our AI bug bounty program over the past 6 months. Your responses are confidential and will be reported in aggregate only. There are no right or wrong answers—we value your honest perspective. The survey takes approximately 4 minutes. Participation is voluntary, and you may stop at any time.

Q02
Multiple Choice

In what ways are you involved with the AI bug bounty program? (Select all that apply.)

Q03
Long Text

How clear are the program's overall objectives to you today?

Q04
Long Text

Thinking about the last 6 months, how fair have our bounty decisions been overall?

Q05
Multiple Choice

Which incentives most motivate high-quality submissions? (Select up to 3.)

Q06
Multiple Choice

Which changes would most improve fairness and clarity in the program? (Select up to 3.)

Q07
Long Text

How likely are you to recommend our AI bug bounty program to an external security researcher?

Q08
Long Text

How long have you worked at this company?

Q09
Long Text

Thank you for completing this survey. Your feedback will directly inform improvements to scope clarity, evaluation fairness, and incentive design in the AI bug bounty program.

Q10
Long Text

How long have you been involved with or aware of the AI bug bounty program?

Q11
Long Text

How clear is the definition of which AI/ML assets and models are in scope?

Q12
Long Text

How consistent have severity classifications been across similar submissions?

Q13
Long Text

Please rank the following success metrics from most to least important for evaluating the program.

Q14
AI Interview

We'd like to explore your feedback in more depth. An AI moderator will ask you a couple of follow-up questions based on your earlier responses about the bug bounty program.

Q15
Long Text

Which region do you primarily work in?

Q16
Long Text

How clear is the definition of which vulnerability types and AI-specific attack vectors qualify?

Q17
Long Text

How fair have payout amounts been relative to the effort and impact of submissions?

Q18
Long Text

Based on your responses in this survey, please share any additional thoughts or suggestions for improving the AI bug bounty program.

Q19
Long Text

Approximately how many bounty reports have you personally reviewed in the last 6 months?

Q20
Long Text

How clear are the severity classification criteria and corresponding payout tiers?

Q21
Long Text

How timely and clearly communicated has the triage process been?

Q22
Long Text

How clear are the out-of-scope exclusions and rules of engagement?

Q23
Long Text

How fairly have duplicate or disputed reports been handled?

Q24
Long Text

If any scope wording feels ambiguous or incomplete, please share specific examples or phrases you would improve.

Q25
Long Text

How adequate are current payout amounts relative to the effort and impact of submissions?

What’s included

  • AI follow-ups

    Adaptive probes on open-ended answers that pull out detail a static form would miss.

  • Attention checks

    Built-in safeguards against rushed answers and low-quality respondents.

  • AI-drafted copy

    Wording, ordering, and branching written by the AI — tuned to your research goal.

  • Auto report

    Themes, quotes, and a plain-English summary write themselves once responses come in.

Ready to launch?

Open this template in the editor. Every part is yours to change before the first respondent sees it.