
Identify the urgency of your Usability FIREs

As UX practitioners, we run usability studies all the time. However, articulating our findings in a way that effectively conveys the urgency of the issues to be acted on is not always straightforward. Below I will discuss how we could use a FIRE Score, a set of indicators that adds more context to a problem, to help product groups understand which issues need to be resolved immediately and which may be able to wait until later.

At various points in the product life cycle, a team will receive signals about how successfully customers are accomplishing tasks within their system. Ideally, any issues in the flow will be uncovered early on during usability research. If not, the team will likely hear about them soon after through user behavior data, customer service calls, NPS, or countless other methods of observing signals from their users.

Like all problems, UX issues range in severity. Some are going to be small, superficial problems and others are going to be large, ugly problems, but even with those designations, we have an incomplete understanding of the urgency with which the problem will need to be addressed. The reality is, product teams typically have a plan, or at least they think they do, and then something happens that completely flips that plan on its head. Mike Tyson said it best: “Everyone has a plan ’till they get punched in the mouth.” How a team responds to these situations, and how much they impact the plan, will depend on a variety of factors.

It is inevitable. Eventually, something will come up, and your team will be forced to decide whether to spend time fixing a reported issue or to hold off and instead work on the next highest-priority task on the list. This is one place where conflict might start to emerge between the product person and the user experience person. On the one hand, the UX person doesn’t want to create design debt that has been found to impact users and will need to be addressed later. On the other hand, the product person might want to accept that risk and incur the debt in exchange for working on higher-impact initiatives. Neither is wrong, but the two are likely talking past each other, which leads to frustration.

In a previous post, I spoke a bit about how the words we use to communicate matter. In the sections below I will introduce the Severity Rating Scale that we use at Macmillan Learning, as well as some additional measures that might help a product team align on, or at the very least understand, the urgency of an issue.

The first thing a team needs to be able to communicate is how severe the task failure was. Various scales exist, but the one we have used on our team is inspired by Nielsen Norman Group:

  • 0 = I don’t agree that this is a usability problem at all
  • 1 = Cosmetic problem only: need not be fixed unless extra time is available on the project
  • 2 = Minor usability problem: fixing this should be given low priority
  • 3 = Major usability problem: important to fix, so should be given high priority
  • 4 = Usability catastrophe: imperative to fix this before the product can be released
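If your team logs findings in a spreadsheet or research tracker, the scale is simple to encode. Here is a tiny, purely illustrative Python sketch; the names are my own and not part of the Nielsen Norman guidance:

```python
# Illustrative lookup table for the 0-4 severity ratings above,
# useful for tagging findings consistently in a research log.
SEVERITY_LABELS = {
    0: "Not a usability problem",
    1: "Cosmetic problem only",
    2: "Minor usability problem",
    3: "Major usability problem",
    4: "Usability catastrophe",
}

print(f"3 = {SEVERITY_LABELS[3]}")  # 3 = Major usability problem
```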

For the most part, these work fine, with the occasional exception of “catastrophe”. I’ve found that product people sometimes see this designation as alarmist and disruptive, and they may be unwilling to budge on their plan to make room for fixing the identified blocker. They are not wrong to call this into question, since product owners are looking beyond this sole input to make decisions. I have also found that multiple signals other than task success may be conflated into one score, which can end up making the score meaningless. Unfortunately, it might not be apparent what those factors are unless we identify them and quantify how big of a FIRE the issue truly is.

Fires come in all sizes, from the flame on the head of a match to devastating building fires that require teams of firefighters to control. The acronym FIRE reveals 4 key factors that may influence how urgent the UX fire really is.

  • Frequency (1–5 rating): How often will an individual user run into the problem? The more often a user hits the problem, the more likely they are to get frustrated by the constant effort they need to put in to complete a task. A good example of this is pogo-sticking when evaluating content: it is a repetitive task that is draining.
    Example: A website’s home page might have high frequency, whereas the registration page would have low frequency.
  • Importance (1–5 rating): Importance has two lenses you might want to look through. First, you might describe how much the issue impacts the core functional jobs the human is attempting to accomplish with your product or interface. Second, you might describe how important the issue is to a specific business objective your team is striving toward. It is important to reflect on issues in terms of both the human and the business.
    Example: Your company’s goal is to reduce support call volume, and the issue, if not addressed, may lead to a sizable increase in support calls.
  • Reach (1–5 rating): How many users of your system will run into this problem? This bears similarities to Frequency, but the two are quite different. Frequency focuses on an individual running into the problem multiple times throughout their life with the product; Reach has to do with how many unique individuals will run into the problem. It is possible for something to have a low frequency and a very high reach.
    Example: Your website’s registration page, which everyone using the product will need to visit.
  • Effort (1–5 rating): What is our best guess at the design effort we will need to put forth to address this issue?
    Example: Changing the label on a button may be easy; reimagining an entire user flow may be difficult.

Frequency, Importance, and Reach are 3 distinct Impact scores, meaning that their ratings will likely have a notable effect on user satisfaction with the product as a whole. If task failure is happening on a key user journey, and that failure is happening all the time to every single person who logs into your product, the urgency to address that problem is going to be really high. Compare that to a micro-level task that only a small percentage of users will experience, and only if they happen to match some very specific edge case; there the urgency might be quite low.

We could combine these scores to create a single Impact score by adding them up:

Frequency + Importance + Reach = Impact
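To make the math concrete, here is a minimal sketch in Python of how a team might record the four FIRE ratings for a finding and roll Frequency, Importance, and Reach up into an Impact score. The FireScore class, its field names, and the example finding are my own, invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class FireScore:
    """One usability finding, rated on the four FIRE factors (each 1-5)."""
    issue: str
    frequency: int   # how often an individual user hits the problem
    importance: int  # weight for the user's job and the business objective
    reach: int       # how many unique users will hit the problem
    effort: int      # rough design effort needed to address it

    @property
    def impact(self) -> int:
        # Impact = Frequency + Importance + Reach (so it ranges from 3 to 15)
        return self.frequency + self.importance + self.reach

# A made-up example: a confusing label on the registration page.
finding = FireScore("Unclear label on the registration form",
                    frequency=1, importance=4, reach=5, effort=1)
print(finding.impact)  # 1 + 4 + 5 = 10
```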

Once you have both Impact and Effort values, you and your team can start prioritizing which issues to fix first, which might be okay to hold off on for later, and which could possibly even be ignored entirely. There is no exact science to this kind of prioritization, but it serves a great purpose in getting cross-functional teams communicating, understanding decisions, and narrowing the list down to a workable set.
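As one possible, and intentionally rough, way to start that conversation, you could sort findings so that high-impact, low-effort fixes float to the top. This reuses the hypothetical FireScore sketch from above, with a few invented findings:

```python
# Example findings, purely for illustration.
findings = [
    FireScore("Pogo-sticking between search results and content", 5, 4, 4, 4),
    FireScore("Unclear label on the registration form", 1, 4, 5, 1),
    FireScore("Confusing error message in a rare edge case", 1, 2, 1, 2),
]

# Highest impact first; effort breaks ties so that quick wins surface earlier.
for finding in sorted(findings, key=lambda f: (-f.impact, f.effort)):
    print(f"Impact {finding.impact:2d} | Effort {finding.effort} | {finding.issue}")
```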


