Welcome! This short survey (about 5–7 minutes) asks about your labeling work in the last 30 days. Please answer based on your experience; there are no right or wrong answers.
In the past 30 days, which tasks have you performed? Select all that apply.
- Labeling/annotation
- Reviewing/QA
How long have you worked on this labeling program?
Overall, how clear were task instructions in the last 30 days?
Briefly describe one unclear or conflicting instruction you encountered.
Maximum 600 characters.
In the last 30 days, how often did instructions change mid-project?
Which bias topics are covered in your current guidelines? Select all that apply.
- Demographic bias (e.g., gender, race, age)
- Domain or jargon bias
- Geographic/vernacular variation
- Label leakage or proxy signals
- Harmful stereotypes and toxicity
- Context/translation bias
In the last 30 days, how often did you encounter biased inputs or labels?
Share one recent example of potential bias and how you handled it.
Maximum 600 characters.
When bias is suspected, how clear is the escalation path?
How clear are the acceptance criteria used for reviewing work?
Which review approach is used most often?
- Blind double review with adjudication
- Spot checks (fixed percentage)
- Heuristic-triggered review (rules-based)
- Peer review within team
- Self-review before submit
- Not sure
Please rate the review feedback you received in the last 30 days.
Attention check: Please select Neutral for this item.
Approximately what percent (%) of your items were returned for rework in the last 30 days?
Rank the top causes of rework you observed (most to least).
Distribute 100 points to show how your typical weekly time on this program is spent.
Which tooling issues most slowed quality or speed recently? Select all that apply.
- Slow loading or lag
- Limited shortcuts or templates
- Poor diff/compare views
- Unclear error messages
- Hard to flag bias or edge cases
- Limited audit trail/metadata
What single change would most improve clarity, fairness, or QA?
Maximum 100 characters.
What is your primary working region? (optional)
What is your primary working language? (optional)
What is your total experience in data labeling/annotation?
What is your employment type on this program?
Anything else you’d like us to know about clarity, bias, or QA?
Maximum 600 characters.
AI Interview: two follow-up questions on labeling operations
Thank you for your time—your feedback helps improve clarity, fairness, and quality.