AI Model Card Usability & Developer Trust Survey

Measures how ML/AI practitioners engage with model cards and evaluate documented limitations, and how documentation quality shapes trust and adoption decisions across deployment contexts.

Sample questions

A preview of what’s in the template. Every question is editable before you launch.

28 questions · ~8–10 min
Q01
Long Text

Welcome! This survey explores your experience with model cards and how model limitations influence your workflow. Your participation is voluntary — you may stop at any time. There are no right or wrong answers; we are interested in your honest opinions. All responses are anonymous and will be reported in aggregate only. Estimated time: 8–10 minutes.

Q02
Multiple Choice

In the past 6 months, how have you worked with ML models? Select all that apply.

Q03
Multiple Choice

How familiar are you with model cards?

Q04
Multiple Choice

Based on what you currently know, which information would you expect to find in a model card? Select all that apply.

Q05
Long Text

Quick primer: A model card is a concise report that outlines a model's intended and out-of-scope uses, data provenance, evaluation methods and results (often across subgroups), known limitations and failure modes, and relevant safety/ethical notes. It helps you judge fit and risks before integrating a model. Please keep this definition in mind for the remaining questions.

Q06
Multiple Choice

In the past 3 months, how often did you consult model documentation when integrating models?

Q07
Long Text

If a model card is available, how likely are you to read it before using the model?

Q08
Multiple Choice

From the model cards you've reviewed, which elements were commonly included? Select all that apply.

Q09
Long Text

In model cards you've used, how easy was it to locate limitations and failure modes?

Q10
Long Text

In general, how easy do you think it would be to find a model's limitations in a typical model card?

Q11
Multiple Choice

Have you ever discovered a model limitation that was not documented in its model card?

Q12
Long Text

If yes, please briefly describe the undocumented limitation and how you identified it.

Q13
Multiple Choice

When limitations are unclear or missing, what do you typically do? Select all that apply.

Q14
Multiple Choice

Which format would make model limitations most actionable for you?

Q15
Long Text

Rank the following limitation factors from most to least important when selecting a model.

Q16
Long Text

How confident are you in using a model card to judge a model's suitability for a safety-critical deployment (e.g., healthcare, autonomous systems)?

Q17
Long Text

How confident are you in using a model card to judge a model's suitability for a fairness-sensitive application (e.g., hiring, credit scoring)?

Q18
Long Text

How confident are you in using a model card to judge a model's suitability for a latency-sensitive production system (e.g., real-time inference)?

Q19
Long Text

Overall, how much do you trust model cards to accurately represent a model's capabilities and limitations?

Q20
AI Interview

We'd like to explore your experiences with model cards a bit further. Our AI moderator will ask a couple of follow-up questions based on your earlier responses.

Q21
Long Text

Based on your responses in this survey, do you have any suggestions to make model cards clearer or more actionable?

Q22
Long Text

What is your primary role?

Q23
Long Text

How many years have you worked professionally with ML/AI (in any capacity)?

Q24
Long Text

What is your organization's approximate size (total employees)?

Q25
Long Text

Which industry best describes your work context?

Q26
Long Text

Which region are you primarily based in?

Q27
Multiple Choice

Which programming languages do you primarily use when working with ML models? Select all that apply.

Q28
Long Text

Thank you for your time! Your feedback will help improve how model cards communicate limitations and support better integration decisions.

What’s included

  • AI follow-ups

    Adaptive probes on open-ended answers that pull out detail a static form would miss.

  • Attention checks

    Built-in safeguards against rushed answers and low-quality respondents.

  • AI-drafted copy

    Wording, ordering, and branching written by the AI — tuned to your research goal.

  • Auto report

    Themes, quotes, and a plain-English summary write themselves once responses come in.

Ready to launch?

Open this template in the editor. Every part is yours to change before the first respondent sees it.