Whether you are developing a new product or have been selling the same one for years, you need user feedback. Feedback is foundational to understanding and optimizing user experience, and a strong user experience can impact your relationship with customers, your long-term viability as a company, and your bottom line.
While collecting user feedback may not be new, the channels available for collecting it have certainly expanded. User research is no longer relegated to field observation or even research labs. Like so much else in business, the collection of feedback has expanded to digital channels.
Regardless of what kind of digital product you are looking for input about, mastering the art of feedback collection can help your team meet business goals and, ultimately, lead your colleagues to appreciate your input as a team member.
"User feedback in customer-centric companies is the fuel that drives every internal working part. Every process and every business decision is powered by a deep understanding of who the user is, what it is they're needing to do and ensuring they can do it successfully."
That being said, there are many types of feedback to collect and even more tools you can use to go about collecting it. In this guide, we’ll lay out the importance of collecting feedback, and give you guiding questions to help you determine your feedback strategy. Answer these questions to tackle your collection of user feedback and make a real impact on your user experience and bottom line.
Feedback is a crucial part of user experience research; without it, things can easily run amok. Here are just a few of the ways collecting feedback can benefit you:
This is the most important benefit of collecting feedback. Although you can probably think of shiny, new things to add to your product or service all the time, you are not your user. To build something that is actually valuable for your audience, it is essential that you understand them. Understanding your users at this stage of product development means having a sense of their needs, fears, limitations, motivations and more. All of this information should help you piece together an image of how they usually deal with the problem you want to solve for them. Of course, this requires speaking with your users and keeping what they actually value in mind.
"High-quality user research should inspire great designs. It gives us confidence that we're building (and have built) the right things at the right times and in the right way. It sets us on the right path while saving our engineering organization time and money."
Given how time-consuming and expensive developing a product or feature can be, the real benefit of user feedback is that it saves you time and money down the line, since development becomes data-driven. Moreover, getting your product right makes for happy customers, which can significantly add to their lifetime value to your business.
No long-standing product has ever stayed exactly the same over the years. Real innovation comes from constant iteration. Whether that’s perfecting your packaging or finding the best way to describe your product, user insights are essential to that process. You could argue that user feedback is more important today than it has ever been, given the speed of technological advancement. Every innovation that makes waves changes how we operate in our day-to-day lives: how we process information, how we manipulate objects, how we use software, and what we expect from product experiences.
UX itself has had to adapt to the new pace of technology, with principles like Lean UX built on agile methodology and the assumption that a product will be continuously iterated on. At this stage you want to expand your understanding of your users beyond their motivations and needs to their behavior. Seek to understand how they use your product. What’s easy for them and what’s not? This information will help you improve user flows and potentially even inspire delight!
One of the challenges of being in UX is figuring out how to make your case to the rest of your team. Advocating for the changes you’d like to see in a product becomes much easier when you have real data to back it up. When user insights are collected the right way, your case is made for you. It’s harder for other departments to disregard your input when it is based on actual data and feedback.
"When collected intentionally, rigorously and consistently, user feedback helps marketing teams deliver the right messages to activate the right audiences; helps product teams prioritize and build the right features and products; and helps sales teams build strategies that convert newly inspired users into loyal, paying customers."
One of the questions we often get is “who should I collect feedback from?” Believe it or not, this question can completely trip up teams who are otherwise on track to gather useful user insights. Teams can get paralyzed and say to themselves, “but I don’t have access to the ideal user group to gather feedback from.” The real secret is that you don’t have to have the perfect or even the typical user to gather meaningful insights.
“If finding the ideal users means you’re going to do less testing, I recommend a different approach: ‘recruit loosely and grade on a curve.’ In other words, try to find users who reflect your audience, but don’t get hung up about it. Instead, loosen up your requirements and then make allowances for the differences between your participants and your audience.”
In fact, Krug even goes on to say that there are a few valid reasons for intentionally testing with users who probably wouldn’t use your product.
"Sometimes users will say misleading things, usually out of fear of saying "the wrong thing." But if you get participants involved in the design process, they then have a sense of ownership. With a sense of ownership comes the confidence to speak more freely and truthfully, and therefore gather more accurate feedback."
When recruiting participants for user feedback, keep in mind how much money you are asking your company to spend or save. Generally speaking, most organizations will appreciate a concerted effort to save money on recruiting research participants. While some types of research certainly require a specific, skilled group of users, this is not true for all studies. Where possible, make allowances for a more accessible group of participants; this expands your participant pool and lets you conduct research more quickly.
This, however, is not to say that you should shun your ideal user. When you can, recruit users who fit your target profile or who would be able to identify potential pain points and deliver insights you might not otherwise discover. The more specifically you target the right users in the right use case, the more accurate your feedback will generally be. Of course, targeting participants very specifically can be costly and time-intensive. Keep a close eye on the trade-off in time and energy that comes with finding the “exact” right users.
It’s important to think about the user’s relationship with your product or service as well as their demographics when choosing participants. If you are working on a redesign or creating new features for a product, you’ll likely get the most benefit out of chatting with existing customers or people who at least have some familiarity with your product or service. On the other hand, if you are designing something brand new, you will want to test with users as close to your ideal as possible.
There are so many different types of feedback. Align your pursuit of feedback to your goals. Do you want to improve your product experience? Make your customers happier? Do you want to check the overall experience or investigate a specific part of your product? Do you want to find out what people like about your product and why, or what makes it difficult for them to complete their actions? Do you want to ask about features, or investigate whether the flow of a specific use case is smooth? Do you want to check whether the product is usable, or whether it is useful? The goal of your feedback will determine what type of questions you should ask.
Net Promoter Score (NPS) is one of the most common types of feedback. It consists of a simple question: how likely are you to recommend our product to a friend? While the question is simple, it can yield some interesting insights. People think of the things they recommend as an extension of their own brand or reputation. If someone recommends a particular product or service, they have to be comfortable with others associating them with it. NPS is essentially a customer satisfaction metric that asks users: are you willing to let others associate this product or company with you? As useful and industry-standard as this information is, NPS isn’t the most sophisticated form of feedback, because without context the score isn’t very useful. NPS certainly has its place, but we don’t recommend you stop there.
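As a concrete illustration: NPS is conventionally calculated by bucketing 0–10 responses into promoters (9–10), passives (7–8), and detractors (0–6), then subtracting the percentage of detractors from the percentage of promoters. A minimal sketch in Python (the function name and sample data are our own, for illustration only):

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 ratings.

    Promoters score 9-10, detractors 0-6; passives (7-8) don't
    count either way but still dilute the percentages.
    The result ranges from -100 to +100.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Example: 3 promoters, 1 passive, 1 detractor out of 5 responses
print(nps([10, 9, 9, 7, 3]))  # 100 * (3 - 1) / 5 = 40
```

Note that very different response distributions can produce the same score, which is exactly why the number needs context before it becomes actionable.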
Asking users to identify roadblocks is another common use of feedback. The goal of identifying roadblocks is, of course, to improve the product experience. Whether you’re making it faster and easier to book a doctor’s appointment, upload documents, generate reports, register domains, set up chat sequences, send messages, or build wireframes, identifying roadblocks is a practice that can have major effects. There are a few different ways to phrase these questions.
If you are looking to improve your conversion rate, ask what other competitors your users or site visitors considered. From there, you can ask follow up questions about why they considered other vendors. This can reveal insights about your audience like price sensitivity, deal-breakers, and more.
What type of feedback is helpful may also depend on what phase of the UX process you are in. For example, if you are developing a product from scratch, your research will resemble a discovery phase more than verification of specific ideas.
Namely, you are trying to learn the mental models users hold around what you are designing. At the same time, you should explore opinions, beliefs, attitudes, and fears around the subject. This information is also essential for marketing and sales in order to message and promote the product. However, it is arguably most important for UX professionals because it can truly change the direction of a SaaS product.
On the other hand, if you are working on an existing product and making changes to it, you should tweak your questions accordingly. You are looking for specific ways to improve your product or website, but also want to learn about your audience’s relationship with competitors.
At this phase, push to discover why users make certain decisions by seeking in-depth insights about how they use your product. You want to know what strengths to capitalize on and what blind spots you need to address. You’ll want to implement A/B testing at this phase of design as well.
"At Redfin we have an insatiable curiosity for our users and a desire to have the best user experience in our field. We do this by establishing both business metrics and user experience metrics for every feature we build. We ask questions that help us identify, track and improve our user experience metrics."
It is also helpful to collect benchmarks, and one way to do that is through your users. Once you start testing with users, they will often give you examples of what your product or service reminds them of. This information is crucial! Keep track of it, as it will begin to paint a picture of how your users understand your product and what value they see in it. Other areas where you should continuously gather feedback include mockups, A/B tests, and the existing product.
We’ve summarized some potential questions to ask for feedback at four specific stages: before you start building the product, the prototyping stage (gathering feedback from mockups), A/B testing, and continuous feedback collection from the existing product.
To help contextualize the following examples, we’ll be using a specific example of a digital product, a fictional e-learning platform called EScolere.
Note that the questions at the prototyping stage may closely resemble those for the existing product. However, keep prototyping-stage questions specific to only the aspects of the product or experience you are actually presenting.
A/B testing is used to see whether changes to layout, messaging, and other elements make a difference in your metrics. Ask the same questions for each version of what you are testing.
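Beyond qualitative questions, you eventually need to judge whether an observed metric difference between two variants is more than noise. One common approach (our illustration, not something prescribed by this guide) is a two-proportion z-test on conversion counts; the function name and sample numbers below are hypothetical:

```python
import math

def ab_z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment.

    conv_*: number of conversions; n_*: number of visitors.
    Returns the z-score of variant B's rate vs. A's;
    |z| > 1.96 roughly corresponds to 95% confidence that
    the two variants genuinely differ.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant B converts at 15% vs. A's 10%, 1,000 visitors each
z = ab_z_score(100, 1000, 150, 1000)
print(round(z, 2))  # about 3.38, i.e. a significant lift
```

Pairing a check like this with the "same questions for each version" advice tells you not just which variant wins, but why.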
For this example, we are testing two different layouts for EScolere lessons.
Once the product is up and running, it’s best practice to collect feedback constantly. As your users’ needs, habits, and ways of using digital products change, your product should change with them.
The question of when to collect feedback has a number of implications. There are a couple of angles from which to consider this question: in relation to the product life-cycle and in relation to the user journey. In terms of the product life-cycle, we recommend asking during the ideation phase (discovery phase before you start building your product), prototype phase (based on mockups so you can improve them), validation phase to verify design ideas and A/B test, and finally at the iteration phase to continuously keep the product usable and delightful.
Trigger behaviors are one of the best ways to decide when to ask for feedback.
Another common factor to consider is where to collect feedback. In most companies, there are a number of possible channels where feedback could be collected. Our primary recommendation for this question is to keep in mind that different channels serve different purposes, and of course, target different types of users. Consider the context of feedback you are looking for. Of course, if it is channel-specific, then you know where to begin collecting that feedback. If it is not channel-specific, it is important to keep track of this information and target the right users wherever it makes the most sense.
Here are some potential channels and things to consider related to collecting feedback on them:
If your primary channel is via app, this is where you will want to focus your feedback collection efforts! Alternatively, if your app is auxiliary to your product offering, and you want to increase engagement or downloads, you’ll want to learn from those users who do make use of your app. Keep in mind that this is something that typically takes place behind the login wall.
This is probably the most useful feedback because it comes from your audience, the people who are already engaged with your product rather than just loosely interested. Response rates will be higher, and the responses themselves should be more substantial.
"In-app feedback collection is my preferred channel. With in-app feedback, the user is not required to go out of their way to provide insight."
On-site feedback collection is the most common kind we see. This channel is recommended for understanding barriers to purchasing (think exit intent and abandoned shopping carts), and general information about your users. It may be a good idea to compare answers from your site to in-app answers to the same questions. For example, if you ask both in-app and on-site visitors about their job titles, you can compare to see if your web visitors actually match the demographics of your users. This may lead to some marketing insights.
Consider the context in which your users interact with your site. Are they typically on the move when accessing their account or researching your product? If so, you will want to capitalize on this by collecting feedback via your mobile site.
You may also consider deploying your questions via email. This is a good channel for contacting leads who have not converted or current customers who may not spend a lot of time on your site or in your app. Feedback collection via email typically involves following an external link and happens outside of the email client.
However, we don’t necessarily recommend a strong focus on email surveys, as most people leave the majority of messages they receive unopened. Weigh this when deciding which questions to send this way, because email-distributed surveys are easily ignored.
If you are in the pre-product deployment phase, consider getting feedback from potential users within your mockup tool. For example, InVision has a commenting tool that can make collecting and centralizing qualitative feedback from user testers or other team members (for internal feedback loops) very simple.
When choosing feedback channels (and really all aspects of your feedback program), we like this straightforward rule of thumb from Walter Hannemann, a Product Manager at Dualog: “The less effort required from the user, the more likely it is that they will provide feedback.” We recommend meeting your users where they are for best results.
While we’ve covered many of the basics of collecting feedback so far, the truth is there are a lot of ways to get feedback collection wrong. We’ve put together a list of best practices to help you avoid these (easy to fall into) traps.
Once you know what you need feedback about, think of the questions that would help you investigate the subject. Try to only ask questions that will provide useful answers and help you make impactful decisions. Keep in mind that some questions may serve a different purpose, like engagement, to smooth out the process.
One of the biggest temptations when it comes to feedback collection is wanting to know the answers to ALL the questions you have, from a million different angles. While this may be the ideal for insight experts everywhere, the truth is that it is not only unrealistic, but also a major annoyance to users.
"My tip: I recommend having your front-end developer with you when conducting a feedback interview. It’s important for them to hear this feedback firsthand! This will make the process faster overall, and make sure everyone is on the same page."
Both vocabulary and syntax should be easy to understand while clearly communicating the subject and intention of the question. If you will be collecting feedback from a very specific group, use their language (jargon, idioms, etc.) within reason. On the other hand, if your audience is broad and diverse you’ll also want to adjust accordingly. In that case, the language used should be simple enough to be understood by people without domain-specific knowledge without losing the ability to glean useful insights.
Make sure you ask questions your users can actually answer. Test your own assumptions about your users’ level of knowledge. If there is still a risk that some users may not be able to answer, phrase the question so that not understanding it won’t make the user feel discredited. If users feel they should be able to answer, they may pretend to have the necessary knowledge and skew your results. Be aware of what you assume.
"Be hyper-conscious of the language and non-verbal cues you are using. Word choice, body language, and every little form of expression can influence a respondent. If you have the budget for it, be as “invisible” as possible."
Questions shouldn’t put users in a position where they may need to provide an answer with social implications. When possible, anonymous research is preferred for this reason. If you want an honest answer, make respondents feel that all answers are acceptable, not only from your perspective but in light of applicable social norms.
You should compose questions in a way that doesn’t suggest any particular answer. Leading questions come with a couple of pitfalls.
In some cases, emotionally charged questions are actually helpful. For example, you may want to ask one after a series of emotionally neutral questions to identify the edge opinions.
Double-barreled questions, or compound questions, are questions that ask about two topics but only allow for one answer. While guiding users to provide actionable insights is important, you also don’t want to influence or confuse their answers. If you ask about two things in one question, you cannot determine precisely what the user was referring to when answering the question and you don’t want to assume the answer applies to both questions.
Instead of: “How long did it take you to complete the training module, and on what day of the week did you do it?”
Ask: “How long did it take you to complete the training module?”
Then: “What day of the week did you do it?”
Instead of: “Over the past six months, how would you rate our customer service and response time?”
Ask: “Over the past six months, how would you rate our response time?”
Then: “Over the past six months, how would you rate our customer service?”
Instead of: “How did you like our Help Centre information and our customer support?”
Ask: “How useful did you find our Help Centre information?”
Then: “How did you like our customer support? Rate on a scale of 1 to 10.”
The order of your questions should make it easier for the user to answer questions and move from one to the next with ease. This should feel natural to your users. It’s best to start with general questions and then move to more specific questions, narrowing down the aspect you are researching.
Open-ended questions let users answer in their own words, forming their response the way they want and deciding how much information to give.
Open-ended questions allow you to collect information that may not be possible to collect otherwise. They can, to some extent, reduce the risk of the user not having enough knowledge about the subject, since respondents share what they want in their own words. Open-ended questions are more conversational and therefore feel natural to answer. However, they require more effort from the user.
Closed-ended questions, on the other hand, make data analysis much easier, but they shouldn’t be used for broad, complex problems where users may have differing opinions or experiences with different aspects of the matter.
With closed-ended questions, the possible answers are included in the question, and the user is asked to choose the one closest to their opinion or experience. The answer formats for closed-ended questions can vary.
While free-form responses can elicit interesting insights, it will be easier to understand how the majority of your audience feels if you offer them defined, standardized answer choices. While this may not be possible for every question, it will make your life easier when you can implement it. Closed-ended questions should be used if there is only one frame of reference common to all respondents and no room for other interpretations.
Don’t distract your users with a flashy design for on-site or in-app surveys. While you may think it will stand out to users, it can also impede the usability of your product. Make sure that your question or pop-up matches your branding and design for a seamless experience.
If you are a multinational company, or otherwise have global customers, keep in mind that language matters. Make sure the phrasing of your feedback questions translates well into your audience’s primary or native language; nuance matters here and can make all the difference.
To finish off our best practices, we're highlighting some tips from one of our contributors, Noah Shrader of Lightstream. Noah shared a checklist of questions he uses to ensure he selects only the best questions for delivering user feedback.
Regardless of medium, I've found the best process for choosing questions is to write down 1-3 things you're ultimately wanting to learn (your research goals). From there, brainstorm as many questions as possible (could be general or specific). Then, begin editing. Here's a list of things I ask myself when editing down my list of potential questions.
These tools collect user feedback on websites or in-app. Some integrate as a widget that appears in the corner of the screen or as a popup. Others create a separate feedback portal where users can log in and submit new ideas and requests.
These are tools that focus heavily on collecting user feedback in mobile apps, usually with the focus of increasing App Store ratings and reviews.
These are more traditional survey tools that can be used for any type of survey, not just user feedback. They are used more often in email feedback campaigns because the surveys can be created and shared to reach a broader audience.
These internal feedback tools are useful for reporting a bug or giving design feedback within a product or design team.
These tools are specifically focused on collecting in-app Net Promoter Score feedback.
There are many ways to collect feedback, and just as many mistakes to be made along the way. With clear answers to the questions covered above (who, what, when, and where), you can determine your feedback collection strategy much more easily.
Use Qualaroo surveys to gather insights and improve messaging.