Welcome! This brief survey asks about your recent experiences with content filters in developer tools. Please answer based on the last 30 days and omit any sensitive data.
In the last 30 days, have you used any developer tools that enforce content moderation or safety filters?
- Yes, in the last 30 days
- No, not in the last 30 days
Which developer tools with content filters have you used in the last 30 days? Select all that apply.
- AI code assistants (e.g., coding copilots)
- Code hosting/PR checks (e.g., repo content policies)
- Package registries with policy checks (e.g., npm, PyPI)
- Documentation portals or knowledge bases
- Q&A forums or developer communities
- CI/CD or security policy gates
How often did you encounter false positives from these filters in the last 30 days?
Thinking of your recent false positives, how disruptive were they?
Briefly describe your most recent false positive in the last 30 days (omit sensitive data).
Maximum 600 characters
Approximately how many minutes did it take to resolve your most recent false positive?
After the false positive, what did you do? Select all that apply.
- Submitted an appeal or requested a review
- Reworded or reformatted content
- Used a different tool or channel
- Waited and retried later
- Asked a teammate/admin with different access
- Abandoned the task
Rank the top 3 effects you experienced from false positives (drag to reorder; most impactful first).
Why haven’t you used developer tools with content filters in the last 30 days? Select all that apply.
- None of my current tools apply content filters
- I avoid tools that include filters
- Company policy restricts such tools
- I’m unsure which tools include filters
If your tools introduced content filters, how disruptive do you expect false positives would be?
What informs your expectations? Select all that apply.
- Teammates’ experiences
- Industry news or reports
- Past experiences in other tools
- Social media or forums
- Vendor documentation or release notes
Which trade-off do you prefer for content filters in developer tools?
In your view, what most often causes false positives? Select all that apply.
- Ambiguous or broad policy definitions
- Overly sensitive detection models
- Missing contextual signals (e.g., file type, repo trust)
- Poor or unrepresentative training examples
- Misclassifying code vs. natural language
- Locale or language issues
- Unclear UI messaging or guidance
Allocate 100 points across areas that would most reduce the impact of false positives.
Attention check: To confirm you are paying attention, please select “I am paying attention.”
- I am paying attention
- I prefer not to say
- None of the above
What is your primary role?
- Backend developer
- Frontend developer
- Full-stack developer
- DevOps/SRE
- ML/AI engineer
- Security engineer
- Engineering manager
- QA/Testing
- Other
How many years of professional software development experience do you have?
What is your organization size?
- 1 (just me)
- 2–10
- 11–50
- 51–200
- 201–1,000
- 1,001–5,000
- 5,001+
Where are you primarily located?
- North America
- Europe
- Latin America
- Asia
- Africa
- Oceania
- Prefer not to say
Which languages do you use most often? Select all that apply.
- JavaScript/TypeScript
- Python
- Java/Kotlin
- C/C++
- C#
- Go
- Ruby
- Rust
- Swift/Objective-C
- PHP
- SQL
- Other
Anything else we should know about false positives or content filter design?
Maximum 600 characters
AI Interview: 2 Follow-up Questions on developer content filters
Thank you for your time—your feedback helps improve developer tools.