A Quick Guide to Better Surveys

Surveys are common currency in data-driven improvement. The allure of surveys is easy to understand. Thanks to mobile technology, surveys are easier to design and deploy now than ever before. They offer the veneer of scientific objectivity. Results can be auto-generated in colorful graphs and pie charts. However, the apparent ease of survey design does not necessarily mean that surveys are always the best option for gathering data, nor does it guarantee that they will be well designed to elicit the information you need. Especially in smaller groups, it can be more helpful and efficient just to talk to people.

That said, when well designed, surveys can provide a “map” or “satellite” view of individuals’ perceptions or attitudes on an issue of importance (for more on levels of data, see Safir, 2017; Safir & Dugan, 2021). Perhaps more interestingly, surveys can show you where attitudes vary – and knowing who the outliers are can motivate meaningful “street level” follow-up.

Importantly, the benefits of a survey are only as good as its design. And many, many, many surveys put into the world are very, very, very poorly designed. In this short overview, I offer guidance on how to avoid some of the most egregious mistakes.[1]

  1. Allow for neutrality.
    The tendency to force respondents to “pick a side” – notably, by offering an even number of response choices – misrepresents how people actually think about issues. Many times, people feel neutral or somewhere in the middle about an issue, and this should be reflected in the data. As a result, offer respondents response scales with an odd number of choices – five or seven.
     
  2. Avoid agree-disagree statements.
    Survey questions that pose a statement and then ask respondents to indicate their level of agreement or disagreement are popular, but they tend to yield bad data. It is preferable to write response anchors that reflect the underlying concept of interest. For example, a question asking about how much people enjoy surveys should emphasize the concept of enjoyment: enjoy not at all, enjoy a little bit, enjoy somewhat, enjoy very much, enjoy tremendously. Although it is more work to write response anchors tailored to each question, this extra step is critical for improving the quality of your results.
     
  3. Ask only one thing in each question.
    So-called “double-barreled” questions include multiple concepts in a single question, and they are maddeningly common. For example, the question, “How important is it to you that you get good grades and please your parents?” is actually asking two things: how important it is to get good grades and how important it is to please one’s parents. Such questions cause confusion because respondents may feel differently about each concept and thus cannot answer accurately. As a result, it is impossible for the survey designer to know precisely which aspect of the question people are responding to, and the data is not trustworthy. Before finalizing your survey, scan questions for conjunctions and edit them out.
     
  4. Allow for non-responses.
    Tempting as it is, do not require participants to respond to a question unless it is absolutely necessary. Such a requirement can create the feeling that you, as the survey designer, are prying for information respondents would rather not disclose, and it may prompt people to answer inaccurately or to exit the survey altogether. Even with required questions – for example, demographic questions needed to measure variation among subgroups – include a response option that lets participants indicate their preference for privacy (e.g., “prefer not to say”).
     
  5. Add labels for each answer choice.
    Avoid response anchors that only include labels for the extreme answers (e.g., a five-point scale where one extreme is “almost never” and the other extreme is “almost always”). The vagueness of this format leaves the middle of the scale open to interpretation and makes the results harder to analyze. Instead, include a description for each response anchor (e.g., almost never, a few times, sometimes, often, almost always). (For readers who build surveys programmatically, a short sketch after this list pulls these tips together in code.)
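
For readers who assemble surveys in code (for example, when generating items for an online form), here is a minimal sketch of how the five tips above might be encoded. It is illustrative only: the SurveyItem structure and the looks_double_barreled helper are hypothetical names of my own, not part of any survey platform’s API.

```python
from dataclasses import dataclass, field
import re


@dataclass
class SurveyItem:
    """One survey question with a fully labeled, odd-length response scale."""
    prompt: str
    # Tips 1, 2, and 5: an odd number of anchors, each labeled with the
    # concept of interest (here, enjoyment) rather than agree/disagree.
    anchors: list = field(default_factory=lambda: [
        "enjoy not at all",
        "enjoy a little bit",
        "enjoy somewhat",
        "enjoy very much",
        "enjoy tremendously",
    ])
    # Tip 4: offer an opt-out instead of forcing a response.
    opt_out_label: str = "prefer not to say"
    required: bool = False

    def response_options(self) -> list:
        """Return the labeled anchors plus the privacy opt-out."""
        return list(self.anchors) + [self.opt_out_label]


def looks_double_barreled(prompt: str) -> bool:
    """Tip 3: flag prompts that may join two concepts with a conjunction.

    This is a crude heuristic; a human still needs to review flagged items.
    """
    return bool(re.search(r"\b(and|or)\b", prompt, flags=re.IGNORECASE))


item = SurveyItem(prompt="How much do you enjoy taking surveys?")
assert len(item.anchors) % 2 == 1  # odd number of choices (Tip 1)
assert not looks_double_barreled(item.prompt)
assert looks_double_barreled(
    "How important is it to you that you get good grades and please your parents?"
)
print(item.response_options())
```

The conjunction check is deliberately crude: it will also flag legitimate uses of “and” or “or,” so treat it as a prompt for human review rather than an automatic filter.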

These five tips represent some of the most common and most problematic design flaws, and addressing them will make most surveys better (or at least, less bad). For large-scale surveys, research surveys, or surveys used in high-stakes decision making, I strongly recommend a more thorough design process aimed at improving survey scales’ validity, such as the one outlined in Gehlbach and Brinkworth (2011).


This blog post is also available as a downloadable PDF.


[1] This guide is adapted, in part, from Hunter Gehlbach’s (2015) helpful article, “Seven survey sins.” Most of what I know about bad and better survey design, I learned from Hunter. For a more comprehensive treatment of survey design best practices, I strongly recommend reading or skimming Stephanie Stantcheva’s (2022) guide.