
AI Model Card Usability & Developer Trust Survey

Measures how ML/AI practitioners engage with model cards and evaluate documented limitations, and how documentation quality shapes trust and adoption decisions across deployment contexts.

What's Included

AI-Powered Questions

Intelligent follow-up questions based on responses

Automated Analysis

Real-time sentiment and insight detection

Smart Distribution

Target the right audience automatically

Detailed Reports

Comprehensive insights and recommendations

Template Overview

28 Questions

AI-Powered

Smart Analysis

Ready-to-Use

Launch in Minutes

This professionally designed survey template helps you gather valuable insights with intelligent question flow and automated analysis.

Sample Survey Items

Q1
Chat Message
Welcome! This survey explores your experience with model cards and how model limitations influence your workflow. Your participation is voluntary — you may stop at any time. There are no right or wrong answers; we are interested in your honest opinions. All responses are anonymous and will be reported in aggregate only. Estimated time: 8–10 minutes.
Q2
Multiple Choice
In the past 6 months, how have you worked with ML models? Select all that apply.
  • Implemented or fine-tuned models in code
  • Consumed prebuilt APIs/SDKs
  • Evaluated model performance for a project
  • Selected vendors or models for deployment
  • Wrote or maintained documentation
  • None of the above
Q3
Multiple Choice
How familiar are you with model cards?
  • Very familiar — I regularly read and apply them
  • Somewhat familiar — I've read a few
  • I've heard of model cards but I'm not sure what they include
  • Not familiar — I've never heard of them
Q4
Multiple Choice
Based on what you currently know, which information would you expect to find in a model card? Select all that apply.
  • Intended use and out-of-scope uses
  • Training data sources and collection methods
  • Evaluation metrics and methodology
  • Performance across subgroups or conditions
  • Known limitations and failure modes
  • Safety/ethics considerations
  • Versioning and change history
  • Licensing and usage terms
  • Contact/support information
  • Deployment requirements and constraints
  • I don't know / not sure
  • Other (please specify)
Q5
Chat Message
Quick primer: A model card is a concise report that outlines a model's intended and out-of-scope uses, data provenance, evaluation methods and results (often across subgroups), known limitations and failure modes, and relevant safety/ethical notes. It helps you judge fit and risks before integrating a model. Please keep this definition in mind for the remaining questions.
Q6
Opinion Scale
If a model card is available, how likely are you to read it before using the model?
Range: 1–7
Min: Very unlikely · Mid: Neutral · Max: Very likely
Q7
Multiple Choice
From the model cards you've reviewed, which elements were commonly included? Select all that apply.
  • Intended use and out-of-scope uses
  • Training data sources and collection methods
  • Evaluation metrics and methodology
  • Performance across subgroups or conditions
  • Known limitations and failure modes
  • Safety/ethics considerations
  • Versioning and change history
  • Licensing and usage terms
  • Contact/support information
  • Deployment requirements and constraints
Q8
Multiple Choice
In the past 3 months, how often did you consult model documentation when integrating models?
  • Every integration
  • Most integrations
  • Sometimes
  • Rarely
  • Never
  • Not applicable — I haven't integrated models recently
Q9
Opinion Scale
In model cards you've used, how easy was it to locate limitations and failure modes?
Range: 1–7
Min: Very difficult · Mid: Neutral · Max: Very easy
Q10
Opinion Scale
In general, how easy do you think it would be to find a model's limitations in a typical model card?
Range: 1–7
Min: Very difficult · Mid: Neutral · Max: Very easy
Q11
Opinion Scale
How confident are you in using a model card to judge a model's suitability for a safety-critical deployment (e.g., healthcare, autonomous systems)?
Range: 1–7
Min: Not at all confident · Mid: Neutral · Max: Extremely confident
Q12
Opinion Scale
How confident are you in using a model card to judge a model's suitability for a fairness-sensitive application (e.g., hiring, credit scoring)?
Range: 1–7
Min: Not at all confident · Mid: Neutral · Max: Extremely confident
Q13
Opinion Scale
How confident are you in using a model card to judge a model's suitability for a latency-sensitive production system (e.g., real-time inference)?
Range: 1–7
Min: Not at all confident · Mid: Neutral · Max: Extremely confident
Q14
Multiple Choice
Have you ever discovered a model limitation that was not documented in its model card?
  • Yes
  • No
  • Not sure
Q15
Long Text
Please briefly describe the undocumented limitation and how you identified it.
Q16
Ranking
Rank the following limitation factors from most to least important when selecting a model.
Drag to order (top = most important)
  1. Accuracy on out-of-distribution data
  2. Biased outcomes for specific subgroups
  3. Privacy or data leakage risk
  4. Robustness to adversarial or prompt attacks
  5. Interpretability/traceability gaps
Q17
Multiple Choice
When limitations are unclear or missing, what do you typically do? Select all that apply.
  • Run targeted tests or benchmarks
  • Search issues/forums or community reports
  • Contact provider or open a ticket
  • Read source paper or repository docs
  • Switch to a different model
  • Proceed with extra monitoring/guardrails
  • Defer or block the integration
  • Other (please specify)
Q18
Multiple Choice
Which format would make model limitations most actionable for you?
  • One-page summary with key facts
  • Table with metrics by subgroup
  • Risk checklist with mitigations
  • Traffic-light risk labeling
  • Interactive examples and failure cases
  • Link to detailed paper/appendix
  • Other (please specify)
Q19
Opinion Scale
Overall, how much do you trust model cards to accurately represent a model's capabilities and limitations?
Range: 1–7
Min: Do not trust at all · Mid: Neutral · Max: Trust completely
Q20
Long Text
Based on your responses in this survey, do you have any suggestions to make model cards clearer or more actionable?
Q21
AI Interview
We'd like to explore your experiences with model cards a bit further. Our AI moderator will ask a couple of follow-up questions based on your earlier responses.
Length: 2 · Mode: Fast
Reference questions: 6
Q22
Dropdown
What is your primary role?
  • Backend/Full-stack Engineer
  • ML/AI Engineer
  • Data Scientist/Analyst
  • Researcher
  • Product Manager
  • SRE/DevOps
  • Security/Privacy Engineer
  • Technical Writer
  • Student
  • Other
Q23
Dropdown
How many years have you worked professionally with ML/AI (in any capacity)?
  • 0–1
  • 2–4
  • 5–7
  • 8–10
  • 11+
  • Prefer not to say
Q24
Dropdown
What is your organization's approximate size (total employees)?
  • 1–10
  • 11–50
  • 51–200
  • 201–1,000
  • 1,001–5,000
  • 5,001+
  • Prefer not to say
Q25
Dropdown
Which region are you primarily based in?
  • Africa
  • Asia
  • Europe
  • Latin America & Caribbean
  • Middle East
  • North America
  • Oceania
  • Prefer not to say
Q26
Multiple Choice
Which programming languages do you primarily use when working with ML models? Select all that apply.
  • Python
  • JavaScript/TypeScript
  • Java
  • C/C++
  • Go
  • Rust
  • R
  • Swift/Kotlin
  • Other
  • Prefer not to say
Q27
Dropdown
Which industry best describes your work context?
  • Technology
  • Finance
  • Healthcare
  • Retail/E-commerce
  • Media/Entertainment
  • Education
  • Government/Nonprofit
  • Other
  • Prefer not to say
Q28
Chat Message
Thank you for your time! Your feedback will help improve how model cards communicate limitations and support better integration decisions.

Frequently Asked Questions

What is QuestionPunk?
QuestionPunk is an AI-powered survey and research platform that turns traditional surveys into adaptive conversations. Describe your research goal and get a complete survey draft, conduct AI-moderated interviews with dynamic follow-ups, detect low-quality responses, and produce insights automatically. It's fast, flexible, and scalable across qualitative and quantitative research.
How do I create my first survey?
Sign up, then choose how to build: describe your research goal and let AI generate a survey, pick a template, or start from scratch. Add question types, set logic, preview, and share.
Can the AI generate a survey from a prompt?
Yes. Describe your research goal in plain language and QuestionPunk drafts a complete survey with appropriate question types, ordering, and AI follow-up logic. You can then customize before publishing.
What question types are available?
QuestionPunk supports a wide range of question types: opinion scale, rating, multiple choice, dropdown, ranking, matrix, constant sum, AI interview (text and audio), long text, short text, email, phone, date, address, website, numeric, audio/video recording, contact form, chat message, conversation reset, button, page breaks, and more.
How do AI interviews work?
AI interviews conduct adaptive conversations with respondents. The AI asks follow-up questions based on what the respondent says, probing for clarity and depth. You control the personality, tone, model (Haiku, Sonnet, or Opus), and question mode (fixed count, AI decides when to stop, or time-based).
Can I test my survey before launching?
Yes. Use synthetic testing to create AI personas and run them through your survey. This helps catch issues with question flow, logic, and wording before real respondents see it.
How many languages are supported?
QuestionPunk supports 142+ languages. Add languages from the survey editor, auto-translate questions, and share language-specific links. AI interviews also adapt to the respondent's language automatically.
How can I share my survey?
Share via a direct link (with optional custom slug), embed on your website (iframe or script), distribute through Prolific for research panels, or generate a QR code for physical distribution.
Can I export survey results?
Yes. Export as CSV (flat or wide layout), Excel (XLSX), or export the survey structure as PDF/Word. Filter by suspicious level, response type, language, or date range before exporting.
Does QuestionPunk detect fraudulent responses?
Yes. Every response is automatically classified with a suspicious level (low/medium/high) based on attention checks, response timing, and behavioral signals. You can filter flagged responses in the Responses tab.
What are the pricing plans?
Basic (Free): 20 responses/month. Business ($50/month or $500/year): 5,000 responses/month with priority support. Enterprise (Custom): unlimited responses, remove branding, custom domain, and dedicated support.
How long does support take to reply?
We reply within 24 hours, often much sooner. Include key details in your message to help us assist you faster.

Ready to Get Started?

Launch your survey in minutes with this pre-built template