Usability testing typically involves watching participants complete tasks and observing their behaviour and the problems they encounter on a website, app or any other digital product. At Webcredible we conduct thousands of tests each year.
The best part of usability testing is finding the key insights that will make the biggest difference to the user experience of your product or website, and therefore have the biggest impact on your bottom line.
When it comes to user testing, not all observations are equal. You need a system to sort the important insights from the negligible.
Even if you're setting participants open-ended tasks, you can normally define what counts as task success. For example, if we set participants a task to find a specific piece of information, we can gauge whether or not they found it.
For each participant, we can record a success rating for each task. Once you've completed your analysis, this gives you a quick snapshot of where users are struggling most. When there are a lot of 'Fails' around a particular area, it really grabs everyone's attention.
You can define your own or adapt someone else's, but a simple system for grading task success could be:
You may want to grade down to a fail if: (a) you feel participants could only achieve the task because the moderator helped them; or (b) they say that in real life they definitely would have given up before the end.
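The grading and downgrading rules above can be sketched in a few lines of code. This is a minimal illustration, not Webcredible's actual system: the grade labels ('Pass', 'Fail'), the field names and the example data are all assumptions.

```python
# Sketch of task-success scoring for usability test analysis.
# Grades, field names and sample data are illustrative assumptions.
from collections import defaultdict

# One record per (participant, task) pairing, as the moderator noted it.
observations = [
    {"participant": "P1", "task": "Find returns policy", "grade": "Pass", "helped": False, "would_give_up": False},
    {"participant": "P2", "task": "Find returns policy", "grade": "Pass", "helped": True,  "would_give_up": False},
    {"participant": "P3", "task": "Find returns policy", "grade": "Fail", "helped": False, "would_give_up": False},
]

def final_grade(obs):
    """Downgrade to 'Fail' if the participant only succeeded because the
    moderator helped, or said they'd have given up in real life."""
    if obs["helped"] or obs["would_give_up"]:
        return "Fail"
    return obs["grade"]

def success_rates(observations):
    """Percentage of participants who genuinely passed each task."""
    passes, totals = defaultdict(int), defaultdict(int)
    for obs in observations:
        totals[obs["task"]] += 1
        if final_grade(obs) != "Fail":
            passes[obs["task"]] += 1
    return {task: 100 * passes[task] / totals[task] for task in totals}

print(success_rates(observations))
```

Running this over a whole study gives the per-task snapshot described above: tasks with low percentages are where users are struggling most.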
When participants encounter a usability problem, you can grade the severity of the problem. This is important because 20 minor problems might be equal to 1 severe problem.
Here is a system you can use to tag your findings:
You'll still need to use your judgement to decide how to prioritise the fixes. For example, you'll need to consider how likely these issues are to occur in real life and how many users they would affect. However, this is a good start and will help you be more systematic.
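One way to make that prioritisation systematic is to weight each finding by severity and scale by how many participants hit it. This is a hedged sketch only: the severity labels, weights and sample findings are assumptions, not a published scale.

```python
# Illustrative severity weighting for usability findings.
# Labels, weights and example data are assumptions.
SEVERITY_WEIGHT = {"Minor": 1, "Moderate": 5, "Severe": 20}

findings = [
    {"issue": "Label wording unclear",  "severity": "Minor",  "participants_affected": 5},
    {"issue": "Checkout button hidden", "severity": "Severe", "participants_affected": 3},
]

def priority_score(finding):
    # Weight by severity, then scale by how many participants were
    # affected -- a rough proxy for real-life likelihood.
    return SEVERITY_WEIGHT[finding["severity"]] * finding["participants_affected"]

# Highest-priority fixes first.
for finding in sorted(findings, key=priority_score, reverse=True):
    print(f'{priority_score(finding):>3}  {finding["severity"]:<8} {finding["issue"]}')
```

A scheme like this makes the "20 minor problems vs 1 severe problem" trade-off explicit, while still leaving the final call to your judgement.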
Don't forget that usability is only half the value of usability testing. You should also be looking for any evidence that users think in a certain way. This may give you clues as to what motivates users or how a process might be more natural or efficient for them.
This kind of insight can help shape decisions well beyond usability fixes, giving you evidence to justify new value propositions, approaches to the overall design and types of content. These insights may not be explicit, and spotting them is part of the skill of being a great user researcher.
When tagging your findings, you can use a system like this or adapt your own:
Oh, and finally, don't forget to note down positive findings, especially if participants look at competitor websites.
So now you have a more formal system for analysing your usability tests. In practice, you can tag things in a spreadsheet or just use post-it notes. It's the logic that counts.