Once you’ve gotten a handle on the first two components of your survey framework, you can begin writing your survey questions. While this part may seem straightforward, asking the wrong questions is a great way to derail an otherwise excellent survey plan. Just as building a survey involves much more than emailing a list of questions to your subscribers, writing those questions involves more than beginning a sentence with an interrogative word like who, what, or why and ending it with a question mark.
That’s why in this chapter we’re going to dissect the art of writing survey questions—what makes a good survey question? Which questions get you the most useful feedback without asking too much of the user? And how do you write them?
Before we dive into the specifics, let’s talk a bit more about the power of a well-written question. If you think back to the general education classes you most likely took your freshman year of college, you can probably remember the anxiety you felt upon taking the first exam with a new professor. Having no basis for comparison, you didn’t know if you’d get off easy with mostly multiple choice questions or encounter several dreaded discussion, or worse, essay questions.
The right question gets the right response. Knowing which questions to ask can make or break your survey!
All survey questions have the following characteristics in common. In most cases, there’s no wrong or right way to write your questions—it all depends on the purpose of your survey.
Survey questions can be…
Open-ended questions (also known as Essay or Descriptive Questions) require users to answer in their own words—typically in a text box with a set character limit—meaning they can speak freely about their opinions, experiences, suggestions, and anything else they want to talk about.
They’re good for…
Their drawbacks include…
The alternative to open-ended questions is closed-ended questions (various sub-types, including Multiple-Choice, True/False, Agree/Disagree, and so on). These questions are more likely to be answered because they ask a bit less of users.
They’re good for…
Their drawbacks include…
Because each question type has its limitations, many surveys combine both open-ended and closed-ended questions. Especially with longer surveys, it’s smart to limit the number of open-ended questions asked at any one time so you don’t overwhelm respondents while still collecting the feedback you need.
Furthermore, some people choose to deploy an initial survey that asks open-ended questions to a small subset of respondents. Once initial feedback has been gathered, they then write multiple-choice questions based on the most common answers received during the open-ended period. This new closed-ended survey is then deployed to the user base as a whole.
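The tallying step above can be sketched in a few lines of Python. This is only an illustration of the idea, not a prescribed method; the lowercase normalization and the catch-all “Other” bucket are assumptions of this sketch.

```python
from collections import Counter

def top_choices(open_ended_answers, n=4):
    """Tally free-text answers and return the n most common as
    candidate options for a closed-ended follow-up survey."""
    # Normalize lightly so trivial variations collapse together.
    normalized = [a.strip().lower() for a in open_ended_answers if a.strip()]
    counts = Counter(normalized)
    # Keep the n most frequent answers; everything else falls under "Other".
    return [answer for answer, _ in counts.most_common(n)] + ["Other"]

responses = [
    "Slow checkout", "slow checkout", "Shipping costs",
    "Slow checkout ", "No PayPal option", "shipping costs",
]
print(top_choices(responses, n=3))
```

In practice you’d want smarter grouping (stemming, clustering of similar phrasings), but even a crude frequency count quickly surfaces the handful of answers worth turning into multiple-choice options.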
Some survey solutions (such as Qualaroo and SurveyMonkey) also offer branching logic, meaning surveys can begin with a multiple-choice question that, depending on the response, directs users who require extra space to an open-ended form.
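A minimal sketch of what branching logic might look like under the hood, assuming a hypothetical dictionary-based survey definition (this is not Qualaroo’s or SurveyMonkey’s actual API): each closed-ended answer maps to the id of the next question to show.

```python
# Hypothetical survey definition: "Poor" routes to an open-ended follow-up.
SURVEY = {
    "q1": {
        "text": "How would you rate our support service overall?",
        "type": "multiple_choice",
        "choices": {"Great": "end", "Okay": "end", "Poor": "q2"},
    },
    "q2": {
        "text": "Sorry to hear that! What went wrong?",
        "type": "open_ended",
        "next": "end",
    },
}

def next_question(current_id, answer):
    """Return the id of the next question, given the user's answer."""
    q = SURVEY[current_id]
    if q["type"] == "multiple_choice":
        return q["choices"][answer]
    return q["next"]
```

The design choice here is that routing lives with the question definition, so satisfied users skip the open-ended form entirely while unhappy users get the space to explain.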
One way you’ve most likely encountered this mix of open- and closed-ended questions in the real world is customer satisfaction cards. For example, after your stay at a hotel you might be asked to fill out a card of multiple choice questions about your experience with the hotel’s service, cleanliness, and comfort on the front, allowing the hotel to quickly gauge your satisfaction with these key concerns.
But just in case there’s something else you want to mention—from the mildew in the ice machine to the best biscuits you’ve ever had at the brunch buffet—there’s also a space for you to express yourself more fully on the back of the card.
Unlike with open- and closed-ended questions, the following characteristics definitely have right and wrong options, and not paying attention to which category your questions fall into can make or break your survey.
Direct or Indirect
Direct questions will ensure your questions are easy to understand and your responses relevant and useful, while indirect questions will leave users confused and render your feedback off-topic and useless.
Direct questions are to the point. They are concise and easy to read, meaning users are less likely to skim or skip over them. This doesn’t necessarily mean they’re short (although a very long question should be double and triple checked for directness), but that the question wastes no time in stating its objective.
For example: Based on today’s visit, how would you rate our support service overall?
An indirect way of asking this same question: Today, many companies base their customer satisfaction on the company’s capability to identify problems and provide support. With that being taken into consideration, how would you rate (Company name)’s support service overall?
Which question would you prefer to be asked? The first is easy to read and understand, while the second discusses superfluous background information before getting to the point—if respondents are even still reading by then. The latter creates cognitive overhead, making it more likely that the user will tune out, randomly select an answer, or potentially skip the question altogether.
Unbiased or Biased
It’s impossible not to have your own hypotheses, opinions, and assumptions, but they have no place in your surveys.
Bringing subjectivity into your survey can unconsciously persuade your respondents to answer how you want them to answer. And while being proven right is awesome, getting feedback that improves your site and results in increased user satisfaction and higher profits is even better.
This is another reason why having multiple internal reviews is so important to building valuable surveys. Different departments and stakeholders can identify potential bias in questions that may skew feedback.
The key here is to use neutral language and avoid leading respondents to a particular answer.
For example: How would you describe the checkout process for your purchase with us today?
A biased way of asking this same question: Here at (Company’s Name), we have taken crucial steps toward offering you what we believe to be the easiest checkout process on the market. How would you describe our system?
Your choice to use the phrases “crucial steps” and “easiest checkout process on the market” makes the second example far from neutral. This is sometimes referred to as an Anchor Question: the customer is pulled into your logic in the first part of the question and, because of this, answers accordingly. (Source)
Biased questions invalidate your survey results. Unbiased questions provide actionable feedback.
If your user can't possibly say no to the question, then the question is BIASED
Clear or Unclear
Your survey is not the time to show off your vocabulary—impressive though it may be. A misunderstood question can mean an irrelevant and useless response. For the sake of your feedback’s validity, the diction, phrases, and lingo in your questions should speak to the lowest knowledge and skill level you expect among your survey takers.
If you’re surveying in the United States, keep in mind that the average American reads at a fifth-grade level. (Source)
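If you want a rough sanity check on a question’s reading level, the Flesch-Kincaid grade formula is easy to approximate. The syllable counter below is a deliberately crude vowel-group heuristic (a sketch, not a validated readability tool), so treat the result as a ballpark figure only.

```python
import re

def syllables(word):
    # Crude approximation: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Approximate Flesch-Kincaid grade level of a question's wording."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syls = sum(syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syls / len(words)) - 15.59

direct = "Based on today's visit, how would you rate our support service overall?"
print(round(fk_grade(direct), 1))
```

Dedicated readability libraries do this more accurately, but even the rough number is useful for flagging a question that drifts well above your audience’s reading level.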
Here are some common clarity errors to avoid:
For example: Over the past six months, how would you rate our response times?
A double-barreled (thus unclear) version of this same question: Over the past six months, how would you rate our customer service and response times?
While both examples are direct and unbiased, the second question lacks clarity. This is tricky because the two services are closely related, but what if your response time is excellent but the resulting customer service is atrocious? Or it takes you a little too long to respond, but the responses are genuinely helpful?
Lumping them together not only gives you inaccurate data, but it could persuade you to make major changes to your business that simply aren’t necessary. For example, if you got low marks based on this double-barreled question, you might spend time and money retraining your customer service panel, while in actuality your response times just needed to be adjusted. You won’t know what you need to fix unless you separate the two. (Source)
Personal or Invasive
It’s great for your users to feel like they’re more to you than just dollar signs, and personalization—using their first names, sending them coupon codes on their birthdays, asking the right questions to the right people, and so on—is an excellent way to foster this. It’s natural to want to know as much about users as possible—including age, location, race, sex, occupation, and so on—for this reason as well as for demographic analysis and targeted marketing efforts.
But how much is too much? It’s important to realize that at some point users will inevitably become uncomfortable. To avoid this, only ask for “need-to-know” information—that which is directly relevant to the survey goal. If you simply must ask for “nice-to-know” information, at the very least make those questions optional or easy to opt out of answering by providing a “Decline to State” option. (Source)
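One way to implement this split is a per-question required flag plus an explicit “Decline to State” choice. A minimal sketch, with field names that are purely illustrative and not any survey vendor’s schema:

```python
# "Need-to-know" questions are required; "nice-to-know" ones are optional
# and always include a "Decline to state" escape hatch.
QUESTIONS = [
    {"text": "How satisfied were you with checkout today?",
     "required": True,   # need-to-know: tied directly to the survey goal
     "choices": ["Very satisfied", "Satisfied", "Unsatisfied"]},
    {"text": "What is your age range?",
     "required": False,  # nice-to-know: demographic only
     "choices": ["18-24", "25-34", "35-44", "45+", "Decline to state"]},
]

def validate(question, answer):
    """Accept a blank answer only when the question is optional."""
    if not answer:
        return not question["required"]
    return answer in question["choices"]
```

Keeping the required/optional distinction in the question definition itself makes it hard to accidentally force users through an invasive demographic question.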
If personal information is mandatory, expect some users to jump ship! Keep all hands on deck by making sensitive information optional.
In addition to the aforementioned characteristics, keep the following tips in mind when writing your survey questions…
Now you know the characteristics that will make or break your survey questions, as well as the feedback that results from them. Not only that, but you now also understand the final piece to the three-part user survey framework—what to ask. Don’t lose sight of these three key elements as we move forward in our discussion of user surveys.