Net Promoter Score (NPS) has become the de facto customer satisfaction metric across most industries, particularly in tech startups. Companies associate customer loyalty, the sentiment NPS measures, with business growth, which is why executives monitor it closely as a leading indicator. As a result, more and more Product and Design teams are being held accountable for increasing their companies’ NPS. By treating NPS as an indicator of product performance, though, these companies miss the fact that NPS results rarely originate in a vacuum where customer satisfaction is shaped by the product alone. A myriad of factors influence your customers’ loyalty, and placing that burden on a Product team risks unintended harm to your product in the long run. For this reason, NPS should not be treated primarily as a product metric, and in this article we’ll explore the main reasons why — as well as how it can be adapted into a more product-centric metric.
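As a quick refresher on the mechanics: respondents rate how likely they are to recommend you on a 0–10 scale, where 9–10 are promoters, 7–8 are passives, and 0–6 are detractors. NPS is the percentage of promoters minus the percentage of detractors, yielding a score from -100 to 100. A minimal sketch of the calculation:

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 survey responses.

    Promoters rate 9-10, detractors rate 0-6 (7-8 are passives);
    NPS = % promoters - % detractors, ranging from -100 to 100.
    """
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 5 promoters, 3 passives, 2 detractors out of 10 responses -> NPS of 30
print(nps([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))  # 30.0
```

Note that passives drop out of the numerator entirely, which is part of why the single number hides so much detail about how respondents actually feel.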
1. It’s impacted by many factors
The number one reason Product teams shouldn’t be held responsible for NPS is simple: you can dramatically improve your company’s NPS without making a single change or improvement to the product. Imagine that your company cut the cost of an annual subscription for your product in half. Despite the lack of any product-related change, this would almost certainly produce a sizable increase in your company’s NPS.
The goal of a product metric is to evaluate the success of a product or feature, and to do so it must be specific and actionable. Because its results are so general, NPS used as a gauge of product success can lead to inaccurate interpretations and confusion about the true impact of your product’s experience or features. If you’re using NPS this way, how do you separate out the noise from survey responses rooted in experiences wholly unrelated to the product? The generality of NPS makes it difficult to extract genuine product insights, and harder still to judge which insights are worth noting.
2. It’s not obviously actionable
As product leaders, we rely heavily on user feedback to make important product decisions. For feedback to be valuable, it must be actionable. While NPS data may indicate whether your customers view your company favorably, it rarely illuminates the specific points in your product’s experience that shaped their response.
Is a user a detractor because they find it difficult to navigate your product, or because they recently had a poor experience with a customer support representative? If Product teams cannot tie feedback directly to actions they can take to improve the product, then the feedback is nearly useless. Limit NPS outreach and favor surveys that are contextual and lead to feedback you can actually act on — this also helps you avoid survey fatigue.
3. It distracts from more effective feedback collection efforts
To maintain a healthy relationship with your users and collect valuable feedback, feedback collection opportunities must be approached thoughtfully. When done properly, surveys target the right users at the right time within your product experience, minimize disruption to a user’s experience, prevent survey fatigue, and, most importantly, ensure that users’ feedback actually leads to incremental improvements to the product. The way most companies approach NPS today works directly against these best practices.
To accurately determine the impact of a new feature or improve existing ones, product teams track specific analytics or behavioral changes tied to desired outcomes. NPS, on the other hand, attempts to estimate a referral intention, which has no direct bearing on any particular change within the product. You can only engage your users so many times before they start to react negatively, so each of those interactions should be optimized to get the most value in exchange for your users’ time and attention. NPS rarely yields meaningful learnings, and as such is a poor use of one of these precious interactions.
If you’re on a product team that is ultimately held responsible for your company’s NPS, I’d recommend suggesting that your company reconsider, citing the reasons above. There are undoubtedly more valuable metrics your team should be concerned with. But if your Product team MUST be held accountable for your company’s NPS, here are some ways to make it more valuable for your team.
Remove the speculation
Asking a customer whether they would recommend your product leads to responses that are entirely speculative. Humans are notoriously bad at predicting their own future behavior, and this considerably limits the value NPS responses can provide. Recently, we’ve noticed a growing number of Product teams adopting a more modern approach called “actual” NPS (aNPS), which simply removes the speculation from the question. Instead of asking users whether they would recommend you, aNPS asks whether they have recommended you. To further contextualize the question, some companies include a date range, like “in the last 90 days”. Approaching NPS in this way yields more accurate and contextual responses that are much more useful for Product teams.
Segment your users
NPS results are much more valuable when you can attribute scores to specific user populations and track changes over time to measure how your product changes affected each group. As NPS is generally collected, the results lack the context necessary to drive improvement. The easiest way to ground your NPS results in valuable context is user segmentation.
User segmentation for surveys like NPS involves measuring survey results by groups of users who share an attribute. The segments you find useful will depend greatly on your product, but a few examples include account creation date, customer persona, and level of engagement. Tracking NPS by specific user populations can reveal where points of friction exist within your product and which users are affected, giving a Product team much better insight into how to approach product improvement.
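As an illustrative sketch of this idea (the segment labels and data shape here are hypothetical, not a prescribed schema), computing NPS per segment is just a grouping step on top of the standard formula:

```python
from collections import defaultdict

def nps(scores):
    """Standard NPS: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def nps_by_segment(responses):
    """Group (segment, score) pairs by segment and score each group."""
    groups = defaultdict(list)
    for segment, score in responses:
        groups[segment].append(score)
    return {segment: nps(scores) for segment, scores in groups.items()}

# Hypothetical responses tagged by customer persona
responses = [
    ("admin", 9), ("admin", 10), ("admin", 4),
    ("end_user", 7), ("end_user", 6), ("end_user", 9),
]
print(nps_by_segment(responses))
```

A blended score would hide the difference between these two groups; broken out per segment, a lagging population points you toward where in the product to start investigating.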
Ask a follow-up question
Surprisingly, a considerable number of teams do not ask a follow-up question after NPS. This is a massive missed opportunity: the most illuminating information for product teams is often gathered in the follow-up open-text response, not from the NPS rating itself. Simply following the NPS survey with a question like “What is the primary reason you feel this way?” or providing a free-response form will surface far more detail about why users feel the way they do about your company or product.
For user feedback to provide value to product teams, it needs to be grounded in context. The vagueness of the NPS question makes it nearly impossible to identify which interactions with your company moved the score; there is no way to determine whether a given response reflects a recent product update or a bad customer support experience from weeks earlier. To provide value, product feedback must be contextual and actionable, two conditions that NPS results alone will rarely satisfy.
NPS may serve as a valuable health check for your company’s brand, but be wary of putting too much faith in the metric when it comes to measuring product performance. If your product team is held accountable for NPS, there are ways to make it more valuable, but you should still take the results with a grain of salt when making product decisions. Be sure to supplement your user feedback collection process with other surveys for more timely and actionable insights; we love Feature Fit Index (FFI) and Customer Effort Score.
Originally published at https://www.parlor.io on February 14, 2020.