Dawid Frątczak
Head of UX/UI Design

Design

10 min read

June 12, 2024

Understanding and Implementing User Feedback: Insights from a Head of Design

Listen to this podcast on Spotify!

In the fast-paced world of product development, understanding and effectively implementing user feedback is crucial for creating products that not only meet but also exceed user expectations. Feedback from end-users provides invaluable insights that can drive improvements, innovation, and ultimately, the success of an application. However, the process of collecting, analysing, and integrating this feedback is not without its challenges. It requires a keen understanding of user behaviour, a strategic approach to prioritising changes, and a collaborative effort across design and development teams.

Let's delve into the nuances of this process with Dawid, the Head of Product Design at TeaCode. Dawid shares his expertise on how to navigate the complexities of user feedback, balancing immediate user requests with long-term business goals, and ensuring that every change made contributes to the overall success and sustainability of the product. Through this conversation, we uncover the strategies and best practices that can help teams make informed decisions and deliver products that truly resonate with their audience.

Okay, Dawid, let’s start with what user feedback means.

User feedback consists of the information and opinions users share about their experience with the app. It's a valuable source of insight that helps my design team understand what works well, what needs improvement, and what changes should be made to better meet user needs.

However, we need to separate the concepts of feedback here. You have feedback from the client and from the end-user. I'll refer to them this way. The client often has their own vision and conveys feedback they receive from users based on this vision. However, the most important feedback comes from the end-users because it genuinely verifies whether our ideas are good.

How does your team start user feedback collection?

The question here is what kind of feedback and whether our team is the one that collects it. In reality, that's the role of the product owner or product manager.

It is worth distinguishing two types of feedback that we can receive: moderated and organic. Moderated feedback involves intentionally eliciting opinions from users. It's a way to obtain information about customer expectations through A/B testing, surveys, or asking questions about using the product's features. In this case, we can indeed conduct our usability tests or propose a survey.

On the other hand, organic feedback comes from users who use the app independently and share their observations, often highlighting aspects we, as creators, hadn't thought of. This is the most valuable type of feedback, providing invaluable insights for product improvements. Our role here is to enable users to provide such feedback by incorporating solutions in the app that allow them to send this feedback.
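For developers reading along, here is a minimal sketch of what such an in-app feedback channel might look like in TypeScript. The endpoint, payload fields, and helper names are hypothetical and only illustrate the idea of letting users send feedback from inside the app; they are not TeaCode's actual implementation.

```typescript
// Hypothetical payload for an in-app feedback form.
interface FeedbackPayload {
  message: string;     // free-text opinion from the user
  screen: string;      // where in the app the feedback was given
  appVersion: string;  // helps reproduce the context later
  userId?: string;     // optional: anonymous feedback is still valuable
}

// Send the feedback to a placeholder endpoint; a real app would use its own API.
async function sendFeedback(payload: FeedbackPayload): Promise<boolean> {
  try {
    const response = await fetch("https://api.example.com/feedback", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ ...payload, sentAt: new Date().toISOString() }),
    });
    return response.ok;
  } catch {
    // Never let a feedback widget break the app itself.
    return false;
  }
}

// Usage: triggered from a "Send feedback" button anywhere in the app.
sendFeedback({
  message: "I can't find the export option on the reports screen.",
  screen: "reports",
  appVersion: "2.4.1",
});
```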

Do you often collect feedback on your own, or do you usually receive ready-made suggestions on what to improve?

Usually, the feedback we collect is filtered through the client’s perspective. The client receives organic feedback, where users state what they don’t like, and passes it on to us. Based on this feedback, we make product decisions, although sometimes it is the client who sets the priorities for changes.

We always keep in mind that this feedback can already be partially filtered and we may not have full insight into the end-users’ needs and problems. So it’s important to make the client aware that we should have access to most of the end-users’ feedback.

For us, the business perspective is vital because, to create a good product, we need to understand the business context and market needs. We try to explain to clients that the raw, unfiltered feedback is crucial because everyone can interpret it differently.

From your perspective, what project worked very well based on this principle of sharing all the feedback?

In one of our projects, we had the pleasure of working with a client who provided us with full analytics and data, which made our work much easier. We had full access to the system. When we had difficulties using certain panels, they simply arranged meetings with us and trained us. This was truly a partnership approach, allowing us to gather a lot of information and better understand the user. And as you can probably guess, this allowed us to create a better product.

So everything went smoothly?

Not exactly. There was another problematic issue: the client didn't conduct user research before releasing the product. They only started doing this once interest in the product grew. Only then did we have access to data on user numbers, sales, and so on. However, feedback appeared spontaneously, for instance on social media. So, although we had access to panels and pages and could see demographics, we lacked direct user opinions.

Nevertheless, clients sometimes hesitate to share full business information, making it harder for us to fully understand the business. Despite this, we try to make the best use of the available data to create a product that satisfies both the client and the end-users.

How do you react to feedback from end-users that you receive from the client? Do you immediately implement the suggested changes?

Let’s start with what those suggested changes are. Sometimes, there is a specific request like, “add a button because the user wants to perform an action.” Generally, we don’t just sit down and implement such changes right away. We usually think about what underlies such a request and whether the solution proposed by the users will actually meet their needs. We discuss it internally, choose the best scenario, and propose our solution to the client for the problem they bring to us, rather than asking them how to do it. This shortens the implementation time. Our experience helps here, as we can push our ideas effectively.

However, there's another aspect. Often, feedback is vague, such as when you hear that something is unintuitive. Sometimes it's a problem like "the user can't find a certain feature." What should you do with that issue? Why is it happening? How do you solve it?

And what do you do then?

In such cases, our team plays a crucial role by analysing the problem in detail. We break the problem down into smaller parts to better understand how to address it. We usually involve specialists not only from design but also from various fields to assess how to solve the problem, whether it can be done quickly, and what other aspects might be related. Sometimes, what seems like a minor change that is trivial to design might require a complete rebuild of the logic or database architecture during development. Such cases require careful planning and breaking down these problems to ensure the change brings the expected result.

For example, we recently had a case with an app where the client wanted to add a new feature – the ability to follow accounts that publish content. Initially, it seemed simple because both the client and the publishers wanted to increase user engagement. We started with the idea of adding a ‘follow’ button, but the team spent a week fine-tuning the details of this solution. So, we worked on documentation for that entire week before starting the actual implementation. This shows how important it is to thoroughly understand and plan every element before taking action.

Why did you spend a week on documentation for one button? It seems like a long time from the outside!

It might seem long, but every new feature, even something as simple as a button, can have many implications. We had to consider how this button would interact with existing features, its placement, its behaviour in different scenarios, how it affects user experience, and the backend processes it would trigger. This level of detail ensures that once we implement it, it works seamlessly without causing other issues or needing further revisions.

What challenges are associated with the process of collecting and interpreting feedback from end-users?

It all depends on the budget. The client can filter feedback and draw conclusions themselves, which is usually cheaper, or outsource it to us, which will involve additional costs. Regardless of the approach, the key is that feedback should come from end-users, not just the client. Their opinions are the most important because they validate our ideas.

This is especially true for startups. The first version of a product often relies on the team's assumption that the idea is a good one. Going to market, we must be prepared for the reality that actual feedback can significantly change the product. This is a difficult moment because you're stepping out of that safe space, and the market starts evaluating the assumptions we worked on for a long time. And when you've worked on something for a long time, you get attached to it, especially if the ideas are your own.

So you always start with assumptions?

Not always, but often. Thorough research and user testing at every step significantly prolong the market entry of the entire product. The choice of market entry strategy is therefore a very business-driven decision. You can iterate with users on prototypes or early alpha and beta releases, or simply release the product to the market and gather feedback directly on the finished product.

If you have time and budget, you can iterate. If it's a truly unique project and no one will overtake you, you have time. But usually, the market is highly competitive, and everyone wants to be the first to capture the niche. Especially in the case of startups that don't have funding yet – they need to get funds from somewhere because the budget is never a bottomless pit.

Basically, you usually lean towards one of two options: either you operate on assumptions and limited tests (remember, the more experienced the team, the closer the assumptions will be to market realities), or you test each feature for a couple of weeks or months, which delays the launch but allows for more thorough product preparation.

So, it's very difficult to find that golden mean. Launching too quickly can be risky because the product may not be good enough or well suited to the client, while testing for too long can cause you to miss the perfect moment for launch and the potential earnings during that time.

What does the implementation process look like?

Well, it’s an interesting case. First, you need to gather and record all the feedback and issues reported by end-users. These are not bugs (although we do receive such feedback), but rather things that work but could work better. For example, an unresponsive menu on mobile devices.

Typically, you have a list of ongoing tasks to be done on the project. You also have a backlog of potential improvements. Then, you need to decide what is a priority. Should you continue product development according to the schedule, or focus on maintenance and improving the existing product? Planning is crucial here.

For example, we plan to spend 60-70% of a sprint on development according to the schedule, and the remaining 30-40% on improvements based on current feedback. This way, we can quickly respond to user opinions and continuously improve the product after launch.

Who is responsible for managing priorities and deciding which changes or new features should be implemented now and which can wait?

That's the role of the Product Delivery Manager (PDM) or Product Owner (PO). They need to decide what to implement and in what order. Good cooperation between the PDM and the design & development team is crucial here. It's very important for the PDM to have broad competencies, because verifying, planning, and specifying tasks is time-consuming, and that's the domain of the PMO (Project Management Office).

I try to ensure that the PDM is highly involved in the entire process, so the designer can focus on designing and solving problems. Defining and understanding tasks is very important for the PMO, which is why this cooperation is key.

At TeaCode, we also involve developers at an early stage of task definition. This helps us better understand what is feasible and what is not. And this is very helpful.

So a developer can already tell you at the very beginning that a certain approach will be difficult to implement or that something can be done more easily?

Yes, exactly. By involving the developer at the very beginning of the process, we can identify potential difficulties in implementing a particular approach. This saves us time and resources, and optimises the process of designing and implementing features.

Going back to the feedback, this time from the client. What do you do when you receive feedback with a request like “build this feature” or “I want it to be like this and that”? How do you handle requests for new features? Is there a difference between what users report and what may result from deeper issues?

That's a very good question. We always ask about the business context of each request. We have to filter these requests through the lens of benefits for the application and the client's business. That's why we have thorough discussions with the client about the business sense of a particular feature, and we challenge the client if it's their own idea.

Even if we work very closely with clients and build the product based on their feedback, we can’t blindly follow the client’s demands on how the product should look. Our role is to advise the client and help them build an app that will be successful. Sometimes this means telling them that something won’t benefit the app. We have to moderate these requests and take responsibility for the final shape of the project. Listening to feedback, filtering it through a business perspective, and implementing it wisely – that’s the recipe for success.

What if the client insists on having that feature?

Then they’ll get it, of course. Our role is to advise, explain, and provide the best value for the client’s business, but ultimately it’s the client’s project. If they know the risks and still want to invest in a particular idea or feature, we proceed.

Can you tell me about a project where this approach, based on business analysis and asking the right questions, helped you better understand the needs of the users?

For example, we had a situation where a client wanted to add a feature to save certain items on a list. We were already evaluating this feature and working on the roadmap. The next step was to create separate pages with visualised lists of these items, but it was planned for later. However, we started asking business questions: Why do users need this saving feature? What will be the benefit?

As a result of the discussions, we concluded that it was better to focus immediately on the other functionality that was planned for later, which was more useful for the users and the application. We changed the order of tasks and the timeline. It was a good decision, resulting from asking many questions and analysing the business context.

At the beginning of creating a product, it is important to consider each feature in the context of potential benefits. Clients often want something specific, but we always have to ask whether it will actually bring benefits.

Let’s clarify one thing – is your goal to ensure that everything you do has a real benefit for the client’s business and their profits?

Exactly. Profit isn't just the money generated from the product; it can also include brand image gains from implementing well-thought-out solutions.

Now let's talk a bit about wasting money… What do you do if, while designing an app, you receive feedback that turns everything upside down? Or when you need to change half of the project? That's a cost, and not a minor one. How do you deal with it? Is it relatively easy to do?

It depends on the scale and nature of the change. Sometimes it can be a small thing, and sometimes a big one. The key is to first understand the problem. As I mentioned earlier, sometimes the solution doesn’t lie in literally implementing what the users suggest.

For example, if the feedback is about an inconvenient menu, the problem might not be the menu itself, but the contrast with the background. It’s important to understand the context and then address the problem.
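As an illustration of that kind of root-cause check, here is a small TypeScript sketch that computes the contrast ratio between two colours using the standard WCAG 2.x formula; the colour values below are made up. A result under 4.5:1 for normal text would suggest the real problem is contrast, not the menu design itself.

```typescript
// Relative luminance of an sRGB colour, per the WCAG 2.x definition.
function relativeLuminance(r: number, g: number, b: number): number {
  const channel = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// Contrast ratio between two colours: (lighter + 0.05) / (darker + 0.05).
function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Example with made-up colours: light grey menu text on a white background.
const ratio = contrastRatio([150, 150, 150], [255, 255, 255]);
console.log(ratio.toFixed(2)); // ~2.96, below the 4.5:1 WCAG AA threshold for normal text
```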

Have you encountered situations where there were significant challenges, with major changes in the project just before completion?

I once had a case in my career where we designed an entire portal, and the client suddenly announced that they had been working with a branding agency for the past three months, which changed all the fonts and the entire style – from, let’s say, green, rounded elements to square, red ones. We had to redesign the whole system literally a month or two before the launch.

We took an MVP approach, so we designed it to be aesthetically pleasing, with the intention of iterating later, but they decided to make changes just before the launch. It was pretty hardcore, but we did it.

However, we were prepared for such situations. I believe it was a good change. The agency did a great job – the design is unique and original. Although, it was a change initiated by the client, not resulting from user feedback.

How do you usually evaluate the effectiveness of the implemented feedback? What do you most often rely on to determine if it was a good decision?

There are several ways, but it largely depends on the available budget. Tests, like surveys after implementation, can be costly. Simpler methods include tracking various user activities through in-app events that inform us about their behaviour, such as clicks or navigation through the app.

For example, we can check if users are using the new feature, how they interact with it, the drop rate, and so on. This data is relatively easy to obtain and unobtrusive for users. However, it’s not an ideal approach. Personally, I’m not a fan of surveys because they require user engagement to fill them out. I prefer feedback that users don’t even realise is being collected. It’s natural and unforced.
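For readers wondering what this event-based tracking can look like in practice, here is a minimal, hypothetical TypeScript sketch: the event names and the simple drop-rate calculation are illustrative stand-ins for whatever analytics tool a given project actually uses.

```typescript
// Hypothetical in-app event tracker; a real project would wrap an analytics SDK.
type AppEvent = { name: string; userId: string; timestamp: number };

const events: AppEvent[] = [];

function track(name: string, userId: string): void {
  events.push({ name, userId, timestamp: Date.now() });
}

// Example instrumentation around a new "follow" feature (names are illustrative).
track("follow_button_seen", "user-1");
track("follow_button_seen", "user-2");
track("follow_button_seen", "user-3");
track("follow_button_clicked", "user-1");

// Drop rate: share of users who saw the feature but never used it.
function dropRate(seenEvent: string, usedEvent: string): number {
  const seen = new Set(events.filter(e => e.name === seenEvent).map(e => e.userId));
  const used = new Set(events.filter(e => e.name === usedEvent).map(e => e.userId));
  const dropped = [...seen].filter(id => !used.has(id)).length;
  return seen.size === 0 ? 0 : dropped / seen.size;
}

console.log(dropRate("follow_button_seen", "follow_button_clicked")); // ≈0.67 for this sample
```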

In my career, I've loved the reviews that appear under an app once we've released it on Google Play or the App Store. They are direct and save a lot of time and money compared to traditional feedback collection methods. Of course, opinions usually sit at the two extremes, either very enthusiastic or very negative, but they still offer valuable insights.

It’s important to look at the effectiveness of feedback from different perspectives and try to understand how it actually works in practice.

How do you evaluate the usefulness of a specific feature in the app? Do you use, for example, single-question surveys, asking users about the ease of use of a particular feature or their satisfaction with its performance?

In the current projects we are working on, we do not use such surveys because of their limited relevance in the context of advanced projects. Besides, we often work on entirely new products that do not yet have users or traction.

However, I am a proponent of short UX surveys. Open-ended feedback often requires more engagement, so short surveys can be an effective alternative. If you already have users in the app, you can easily run them on the live product. If not, and you're still building the product, there are tools that allow you to test design prototypes. It's a great thing.

And do people give you feedback for free?

Unfortunately, it's not that simple. Rewarding users for giving feedback, regardless of whether it is positive or negative, has become common practice. It doesn't necessarily diminish the value of the collected feedback; it just encourages more people to provide it.

It also depends on what the reward is – if it’s in-app credits or a free trial period, it’s likely an attractive reward for someone who likes the app, at least moderately. Enthusiasts don’t need much convincing; they will gladly share their thoughts. On the other hand, someone who is frustrated with the app will probably give feedback for free and vent their frustrations anyway.

Are there model steps in the process of implementing feedback that you consider essential to maintain?

In the beginning, it is always (really always!) necessary to have a deeper understanding of the business context of the problem and conduct a thorough analysis. Feedback should not be treated as a simple to-do list. For example, if you receive information that users don’t like the colour of the navigation bar, your task is not just to change the colour but to understand why it is unacceptable. Perhaps the problem is not with the colour of the navigation bar itself but with the contrast with the background. Changing the background might solve the problem. This deeper understanding is a crucial first step.

Event storming is often very effective. We use it frequently in my team when working with developers to analyse and understand problems. We break the problem down into atoms and discuss it together; the result is a clear mapping of the problem and a set of questions for the client or users that helps us understand it better and find solutions.

Once we understand the problem, we create documentation that describes how to solve it. Both designers and developers need to have a clear picture of the solution. Then we proceed to estimate the time needed for implementation, and after approval from the client, we move on to designing and programming.

Finally, could you give one valuable piece of advice on what to remember in the process of implementing user feedback?

The most important advice I would like to give is not to take user feedback literally. Instead, focus on understanding why something is a problem for the user. Finding the root of the problem and solving it is key. It can’t just be a reaction to a specific user request.

Thank you, Dawid, for sharing your valuable insights and experiences on the important process of implementing user feedback. Your practical advice and real-life examples provide a fresh perspective on how teams can implement feedback effectively, use feedback in design, and navigate challenges in balancing user needs with business priorities.

I believe our readers (or listeners) will now have a deeper understanding of how thoughtful planning and collaboration can turn user feedback into significant product enhancements!

Curious to get more insights from Dawid? Explore his tips on enhancing UX and app design strategies on our blog!

For those seeking assistance with UX design in their app, don’t hesitate to reach out to Dawid via email at dawid.fratczak@teacode.io or catch him on LinkedIn. He’ll be glad to assist you in achieving your design goals and driving your app towards success and growth!
