Pushing your usability testing analysis further

by Jack Josephy on 8 January 2017

Usability testing typically involves watching participants complete tasks and observing the behaviour and problems they encounter on a website, app or any other digital product. At Webcredible we conduct thousands of tests each year.

The best part of usability testing is finding the key insights that will make a big difference to the user experience of your product or website, and are therefore going to have the biggest impact on your bottom line.

When it comes to user testing not all observations are equal. You need a system to sort the important insights from the negligible.
Task success rates

If you're setting participants open-ended tasks, you can normally define what counts as task success. For example, if we set participants a task to find a specific piece of information, we can gauge whether or not they found it.

For each participant, you can record a success rating for each task. Once you've completed your analysis, this will give you a quick snapshot of where users are struggling most. When there are a lot of 'Fails' around a particular area, it really grabs everyone's attention.
You can adapt these definitions to suit your project, but one system you can use for grading task success is:

  • Pass easy - Completed task with no usability problems
  • Pass moderate - Completed task but ran into one or more minor usability problems
  • Pass hard - Completed task but ran into many minor usability problems or one or more major problems
  • Fail - Could not complete the task due to usability problems

You may want to grade down to a fail if: (a) you feel participants only achieved the task because the moderator helped them; or (b) they say that in real life they would definitely have given up before the end.
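The grading system above can be tallied very simply once every participant has a rating per task. Here's a minimal sketch in Python; the task names and ratings are hypothetical, purely to show the counting logic:

```python
from collections import Counter

# Hypothetical test results: one grade per participant per task,
# using the four grades defined above.
results = {
    "Find delivery costs": ["pass easy", "pass moderate", "fail", "pass hard", "fail"],
    "Change account email": ["pass easy", "pass easy", "pass moderate", "pass easy", "pass easy"],
}

# Summarise each task: how the grades were distributed, and how many failed.
for task, grades in results.items():
    counts = Counter(grades)
    print(f"{task}: {dict(counts)} ({counts['fail']}/{len(grades)} failed)")
```

A cluster of fails against one task (as with the first task here) is exactly the kind of snapshot that grabs stakeholders' attention.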

Usability severity

When participants encounter a usability problem, you can grade the severity of the problem. This is important because 20 minor problems might be equal to 1 severe problem.

Here is a system you can use to tag your findings:

  • Severity 1 [S1] - Issue caused minor inconvenience, negative comment or suggestion for improvement
  • Severity 2 [S2] - Issue caused significant barrier to completing the task
  • Severity 3 [S3] - Issue prevented the user from completing the task

You'll still need to use your judgement to decide how you prioritise the fixes. For example, you'll need to consider how likely each issue is to occur in real life, and for how many users. However, this is a good start and will help you be more systematic.
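One way to make that prioritisation more systematic is to weight each finding by its severity and by how many participants hit it. The weights and findings below are illustrative assumptions, not a fixed formula; adapt them to your own judgement:

```python
# Hypothetical tagged findings: (issue, severity 1-3, participants affected out of 5).
findings = [
    ("Filter labels unclear", 1, 3),
    ("Checkout button hidden on mobile", 3, 4),
    ("Error message uses internal jargon", 2, 2),
]

# Illustrative weighting: each severity level counts roughly three times
# the one below it, so a few S3s outweigh many S1s.
weights = {1: 1, 2: 3, 3: 9}

# Rank issues by weighted impact (severity weight x participants affected).
ranked = sorted(findings, key=lambda f: weights[f[1]] * f[2], reverse=True)
for issue, severity, hits in ranked:
    print(f"S{severity} (seen by {hits}): {issue}")
```

The exact weights matter less than being consistent: a written-down scoring rule makes it much easier to defend why one fix ships before another.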

User needs insight

Don't forget that usability is only half the value of usability testing. You should also be looking for any evidence that users think in a certain way. This may give you clues as to what motivates users or how a process might be more natural or efficient for them.

This kind of insight can help shape decisions well beyond usability fixes, giving you evidence to justify new value propositions, approaches to the overall design and types of content. These insights may not be explicit, and teasing them out is part of the skill of being a great user researcher.

When tagging your findings, you can use a system like this or adapt your own:

  • Decision making [DM] - Findings that suggest users' decisions are motivated by a particular criterion (e.g. lowest price).
  • Mental model [MM] - Findings that suggest users see the world in a certain way, which affects how they interpret information and behave (e.g. users expect to visit comparison sites before purchasing).
  • General observation [GO] - Other findings that might be relevant to your research.

Oh, and finally, don't forget to note down positive findings, especially if participants look at competitor websites.
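If you keep one running log of findings, the tags above (plus the severity tags from earlier) make it easy to pull out all the user-needs evidence in one pass. A minimal sketch, with entirely made-up findings:

```python
from collections import defaultdict

# Hypothetical findings log: (tag, note). Mixes the severity tags [S1-S3]
# with the user-needs tags [DM, MM, GO] described above.
findings = [
    ("DM", "Chose the retailer with the lowest delivery fee"),
    ("MM", "Expected to visit a comparison site before purchasing"),
    ("S2", "Could not find the returns policy from the product page"),
    ("GO", "Praised the photo zoom on a competitor site"),
    ("DM", "Ignored premium options entirely"),
]

# Group notes by tag so usability issues and needs insights can be
# reviewed separately.
by_tag = defaultdict(list)
for tag, note in findings:
    by_tag[tag].append(note)

for tag in ("DM", "MM", "GO"):
    print(tag, "-", "; ".join(by_tag[tag]))
```

Whether this lives in a spreadsheet column or a script, the point is the same: tagged findings can be sliced by theme later, instead of being trapped in session-by-session notes.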

In conclusion...

So now you have a more formal system for analysing your usability tests. In practice, you can tag findings in a spreadsheet or just use post-it notes. It's the logic that counts.

To learn even more, come on our 1-day usability testing course. Alternatively, if you need a partner to help you with UX research (and a lot more), we're happy to help - get in touch.
