Welcome! This survey asks about your work context, your awareness of prompt injection risks, and the safeguards you use. It should take about 6–8 minutes.
Which area best describes your primary role?
- Back-end engineering
- Front-end engineering
- Full-stack
- Machine learning / data science
- DevOps / SRE
- Security engineering
- QA / Test automation
Do you currently build or maintain features that integrate LLMs?
- Yes — I currently build/maintain LLM-integrated features
- Not yet — planning to integrate in the next 6 months
- No — not working with LLMs
Which threat vectors do you consider when designing LLM features? Select all that apply.
- Prompt injection (malicious instructions)
- Indirect prompt injection via external content sources
- Jailbreaks/model override of system policies
- Data exfiltration or leakage via prompts/responses
- Tool misuse or over-permissioned actions
- Training data poisoning
- System prompt leakage or version exposure
For each of the following mitigations, indicate your current adoption status.
How confident are you in your understanding of prompt injection risks?
In the last 6 months, have you encountered suspected prompt injection or jailbreak activity in your systems?
- Yes — confirmed incident
- Possibly — suspicious behavior, not confirmed
- No
Briefly describe what happened and how it was handled. Please omit sensitive data.
Max 600 chars
Attention check: to confirm you are reading carefully, please select “Not sure.”
Overall, how aware are you of prompt injection as a security threat?
Rank the following by your current level of concern (top = highest concern).
Where have you learned about prompt injection and its mitigations? Select all that apply.
How often do you evaluate your systems for prompt injection or jailbreak risks?
- Weekly
- Every 2–3 weeks
- Monthly
- Quarterly
- Less often or never
Which methods do you use to test for prompt injection weaknesses? Select all that apply.
- Adversarial red teaming by engineers
- Automated evaluation suites/checklists
- Unit or integration tests for prompts
- Canary/honeytoken detection
- Shadow deployment with monitoring/alerts
- External penetration testing
- We do not currently test for this
What is your default response policy when LLM input may be unsafe?
- Allow but sanitize/validate content
- Block and ask for clarification
- Escalate to human review
- Varies by context
- Not sure
Which types of tools or platforms do you use for prompt injection mitigation? Select all that apply.
Allocate 100 points across the following metrics according to how much you prioritize each when mitigating prompt injection. The total must equal 100.
What is your biggest obstacle to managing prompt injection risk today?
Max 100 chars
Total years of professional software experience
- Less than 1 year
- 1–3 years
- 4–6 years
- 7–10 years
- More than 10 years
Organization size (employees)
- 1–10
- 11–50
- 51–200
- 201–1,000
- 1,001–5,000
- 5,001+
Where are you primarily located?
- Africa
- Asia
- Europe
- Latin America
- Middle East
- North America
- Oceania
Primary industry
- Technology/software
- Finance
- Healthcare
- Retail/e-commerce
- Manufacturing
- Education
- Government/nonprofit
- Other
Any additional feedback or context you’d like to share? Optional.
Max 600 chars
AI Interview: 2 Follow-up Questions on Prompt Injection Practices
Thanks for participating! Your responses help improve secure LLM development.