The usability scores in your maze report are not an interpretation of your design; they measure how easy your screens, missions, and maze are to use.

The usability score is a number from 0 to 100. You get a usability score for every screen, every mission, and the maze as a whole.

Screen Usability Score (SUS)

You get a usability score for every screen in the expected paths. The score reflects how easy it is for a user to perform a given task (mission) with your prototype. A high usability score indicates the design will be easy to use and intuitive.

Principles used to compute the SUS

You lose usability points when a user:

... clicks on a hotspot other than the expected ones. This means the user strayed from the expected path(s), which in a live product results in frustration or a lost user.
... gives up on a mission. This is a clear indication something isn't right and should be checked.
... misclicks. In a prototype it's common that not every area is clickable, but in a live product a misclick would take the user to an "incorrect" page, which leads to (1) above.
... spends too much time on a screen. Understandably, there are types of pages in a live product where you'd want the user to spend a lot of time, e.g., on a blog article or on an About us page. But in a prototype, too much time spent on a screen indicates something is wrong and needs to be improved.

How these principles are translated into data

(1) and (2) are expressed by the drop-off and give-up rates. Both are equally important. For every percent of users dropping off or giving up on a mission, you lose 1 usability point.

(3) is expressed by the misclick rate. Not every misclick indicates a wrong action, so for every percent of misclicks you lose 0.5 usability points.

(4) is expressed by average duration in the following way:

- From 0 to 5 seconds: no usability points lost
- From 5 to 25 seconds: 1 usability point lost every 2 seconds
- From 25 seconds onwards: 10 usability points lost

[Chart: usability points lost vs. average duration (s)]
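
As a sketch, the duration rule above can be expressed as a small function (Python is used here purely for illustration; the function name is ours, not part of Maze):

def duration_penalty(avg_duration):
    # No points lost up to 5 seconds, 1 point per 2 seconds between
    # 5 and 25 seconds, capped at 10 points from 25 seconds onwards.
    return min(10, max(0, (avg_duration - 5) / 2))

# duration_penalty(3) == 0, duration_penalty(15) == 5, duration_penalty(40) == 10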

SUS Formula

SUS = MAX(0, 100 - (DOR * dW) - (MCR * mW) - MIN(10, MAX(0, (AVGD - 5) / 2)))

Which has these variables:

SUS for Screen Usability Score
DOR for drop-off / give-up rate
dW for DOR weight; dW equals 1 point for every percent of drop-off / give-up
MCR for misclick rate
mW for MCR weight; mW equals 0.5 points for every percent of misclicks
AVGD for Average Duration in seconds

And these functions:

MAX: MAX(VALUE, {EXPRESSION}) => returns the maximum of VALUE and EXPRESSION
MIN: MIN(VALUE, {EXPRESSION}) => returns the minimum of VALUE and EXPRESSION
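
For illustration, here is a minimal Python sketch of the SUS formula above, assuming rates are expressed as percentages from 0 to 100 (function and argument names are ours):

def screen_usability_score(drop_off_rate, misclick_rate, avg_duration):
    # drop_off_rate and misclick_rate are percentages (0-100);
    # avg_duration is the average time spent on the screen, in seconds.
    dW, mW = 1.0, 0.5  # points lost per percent of drop-off / misclick
    duration_penalty = min(10, max(0, (avg_duration - 5) / 2))
    return max(0, 100 - drop_off_rate * dW - misclick_rate * mW - duration_penalty)

# e.g. 10% drop-off, 20% misclicks, 9 s average duration:
# 100 - 10 * 1 - 20 * 0.5 - 2 = 78
print(screen_usability_score(10, 20, 9))  # 78.0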

Mission Usability Score (MUS)

You get a usability score for every mission in your maze. As with the SUS, the mission score reflects how easy it is for a user to perform a task (mission) with your prototype. A high usability score indicates the finished product will be usable, intuitive, and efficient.

Principles used to compute the MUS

If one screen in the expected path of the mission has a low SUS, the mission also has a low usability score.
The longer the expected path, the higher the probability that users will get lost, frustrated, or give up. Hence the usability points lost on each screen add up to determine the mission's usability score.

How these principles are translated into data

The usability score of each screen (SUS) in the expected path is used to compute the mission usability score (MUS).

The usability points lost from each screen are summed up to determine the mission usability score (MUS). If one screen isn't performing well, then that screen will get a low usability score and so will the mission. For example, if your expected path is composed of 10 screens and 9 of them get a perfect usability score (SUS), but you lose 30% of your users on the first screen, then your mission will lose 30 usability points.

If the mission and expected path are long, this could impact the MUS as well. For instance, say you lose on average 5% of users on every screen. After 10 screens, you would have lost 50% of users. This results in 50 usability points lost.

MUS Formula

MUS = 100 - SUM(100 - SUS)

Which has these variables:

MUS for Mission Usability Score
SUS for Screen Usability Score
SUM for the sum of (100 - SUS) over every screen in the mission's expected path
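
A minimal sketch of the MUS computation, assuming you already have the SUS of every screen in the expected path (names are ours; the result is floored at 0 since scores range from 0 to 100):

def mission_usability_score(screen_scores):
    # Sum the usability points lost on every screen of the expected path
    # and subtract the total from 100.
    points_lost = sum(100 - sus for sus in screen_scores)
    return max(0, 100 - points_lost)

# e.g. nine perfect screens plus one screen that lost 30 points:
print(mission_usability_score([70] + [100] * 9))  # 70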

Maze Usability Score (MAUS)

You get a usability score for every live maze you test. A mission's usability score (MUS) doesn't affect the scores of the other missions. The Maze Usability Score (MAUS) is therefore the average of the usability scores of all missions in the maze.

MAUS Formula

MAUS = avg(MUS)

Which has these variables:

MAUS for Maze Usability Score
avg for the average
MUS for Mission Usability Score
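
And a matching sketch for the maze-level score, which is simply the average of the mission scores:

def maze_usability_score(mission_scores):
    # MAUS is the plain average of the MUS of every mission in the maze.
    return sum(mission_scores) / len(mission_scores)

print(maze_usability_score([70, 90, 95]))  # 85.0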