There are several common mistakes made by those conducting usability tests. Most of these, once identified, are easy to avoid.
Mistakes are not confined to novice testers: complacency often leads experienced test administrators to make similar mistakes.
Video-taping test sessions, where this is undertaken, provides an excellent opportunity to review your own performance as a tester. If you do not video-tape, consider having another experienced tester observe the session specifically to critique your performance.
Perhaps the most common strategic error is premature testing. Usability testing of a system that has several significant readily-identifiable usability issues should be postponed until those issues are addressed. Otherwise, much of the testing time may be spent uncovering those known issues.
Usability testing when there is insufficient time or will to implement recommendations is wasteful. If there is no time or budget to act on the findings, you should question whether testing is appropriate.
The logistics of all but the simplest usability test present opportunities for error.
Always conduct a pilot test to uncover problems with your plan and with the materials to be used.
Failure to run a pilot test means that problems with the test materials and equipment are only identified during a live test, where they are likely to be more difficult to rectify. A pilot test does not require 'real users': almost any available person can serve as a participant. In all other respects the pilot test should be as realistic as possible.
Any script should be read aloud during the pilot, as written text often does not read smoothly when spoken and will need to be modified.
Ensure that you schedule sufficient time between test sessions. As an absolute minimum, allow 30 minutes so that if a participant arrives a little late you can still run a complete session.
It is better to allow even more time between sessions - in particular so that notes can be typed and discussion of issues can take place.
Allowing insufficient time between sessions tires the facilitator and others involved in testing, tends to cause test sessions to blur together, and is likely to diminish the quality of the results.
In order to test the system adequately, you need to ensure you use tasks that allow you to test core functionality and any areas you have identified as potentially problematic.
Where scenarios were used during analysis and design phases, these same scenarios should be converted into tasks to ensure that key interactions are studied.
Testers often provide too much information inadvertently. For example, tasks may be worded in such a way that they contain keywords (such as 'register') that the participant can simply locate on the screen, bypassing the navigation behaviour you wanted to observe.
Testers sometimes use terms that contain additional information. For example, 'Did you notice the navigation bar?' tells the participant that part of the screen is supposed to be used for navigational purposes.
The appropriate attitude for a tester is one of professional detachment and neutrality.
Using encouraging terms like 'Good' or 'Well done' may give the impression that the user, rather than the system, is being evaluated.
Avoid the temptation to finish participants' sentences for them, or to verbalise what you think is in their minds. Instead, maintain your silence, listen, and be attentive.