5. What to test
Last Updated: 4/21/25
Site goals, user tasks, testing scripts and metrics
When conceptualizing a website, we rarely articulate what users must do; instead, we focus on what our site does.
Before testing, it is essential to articulate your site's goals and the tasks users must perform to meet those goals. Jakob Nielsen's five quality components (learnability, efficiency, memorability, errors, and satisfaction) can help frame your site goals and user tasks.
Site Goals
To meet each goal, users may need to engage in different tasks, such as navigating different paths. By articulating your goals in concrete terms, you can focus both your site's design and your testing.
Example Site Goals
- Receive donations and present mission
- Provide legal information
- Reach out to potential donors
- Pre-screen potential clients
- Provide contact information for your agency
To help you articulate your site goals, you can use this Goals, Tasks & Script Worksheet.
User Tasks
Once you have articulated your site's goals and the steps users must take to meet them, translate those goals into specific questions or tasks. Frame your questions so that users can accomplish realistic tasks that reflect concrete goals.
Some questions that you could ask include:
- Can a first-time user find my agency’s mission?
- Can a return user remember how to find my agency’s contact information?
- How much time does it take for a user to locate a resource about eviction?
- How many errors do users make when trying to locate the “contact us” feature?
- When users navigate to the wrong page, can they go on to find the information?
- Do users respond differently to a red vs blue navigation element?
- Do users enjoy interacting with the site?
Establishing the tasks users engage in can help you create a focused, specific test that yields results you can readily implement. A user test will often encompass more than one question; however, the more specific the questions, even when combined into one test, the more effective the test will be.
To help you articulate user tasks, there is a Goals, Tasks & Script Worksheet below.
Tester Scripts and Simple Tasks
When evaluating your site with testers, there are two broad ways to obtain information. You may give your testers simple tasks to complete, or you can use a script that offers your testers real-life scenarios in which they use your site to access information.
| Tester Script | Simple Task |
|---|---|
| Context around the tasks users should perform. | Concrete instructions. |
| “You received a three-day notice of eviction and need information.” | “Download eviction information.” “Create an account.” |
To help you create tester scripts as well as site tasks, there is a Goals, Tasks & Script Worksheet below.
Testing Metrics
Once you have identified what you are testing, you must determine what metrics to collect. Your metrics will impact the type of test you conduct. Below are metrics you can collect, broken down by the quality components introduced earlier.
Learnability:
How easily a user can accomplish a basic task the first time on the site.
Measure learnability by recording how quickly your testers learn to use your website. Monitor the clicks or amount of time it takes your users to accomplish given tasks. Typically, the first task should take the longest as the user adjusts to your website. Be sure that the tasks used to measure learnability take close to the same amount of time for the ideal user.
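The click-and-time tracking described above can be sketched in code. This is a minimal illustration with hypothetical session data (the task names and numbers are invented, not real measurements); a usability platform would normally record these values for you.

```python
# Hypothetical session log for one tester: (task, seconds to complete, clicks).
# All values below are illustrative examples, not real data.
sessions = [
    ("find mission statement", 95, 12),
    ("locate contact info", 60, 7),
    ("download eviction guide", 40, 4),
]

# Print each task's cost so you can see whether the first task took longest.
for i, (task, seconds, clicks) in enumerate(sessions, start=1):
    print(f"Task {i} ({task}): {seconds}s, {clicks} clicks")

# Learnability shows up as non-increasing times across successive tasks.
times = [seconds for _, seconds, _ in sessions]
improving = all(earlier >= later for earlier, later in zip(times, times[1:]))
print("Times decreased across tasks:", improving)
```

Comparing successive tasks this way only works if the tasks are of similar difficulty for an ideal user, as noted above.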
Memorability:
How easily return users can reestablish proficiency.
Measure memorability by testing a user on your website, then testing them again after some time has passed. Can they complete the tasks more quickly, or with fewer clicks, than on their first attempt? You can also conduct post-test surveys asking users about your interface; for instance, ask them to identify your icons and what those icons represent.
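The first-visit versus return-visit comparison can be expressed as a short sketch. The tester names and timings below are hypothetical placeholders for data you would collect in two separate sessions.

```python
# Hypothetical time-on-task (seconds) for the same task, per tester,
# on a first visit and on a return visit some time later.
first_visit = {"tester_a": 90, "tester_b": 120}
return_visit = {"tester_a": 45, "tester_b": 70}

# A positive delta means the tester was faster on the return visit,
# which suggests the interface was memorable.
for tester in first_visit:
    delta = first_visit[tester] - return_visit[tester]
    print(f"{tester} was {delta}s faster on the return visit")
```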
Efficiency:
How effectively your users interact with your website.
Measure efficiency by counting mouse clicks, mouse movements, or time spent completing a task, as recorded by a usability platform or program. The more mouse clicks or movements needed to complete a task, the less efficiently the user is interacting with the site. You can create a simple remote click test on sites such as UsabilityHub.com or in your analytics platform to determine where people click when asked to find information.
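One common way to interpret raw click counts is to compare them against the shortest possible click path for the task. The sketch below assumes hypothetical observed click counts and an invented ideal path length; neither comes from a real test.

```python
# Hypothetical data: the ideal (shortest) click path for a task,
# and the clicks each of four testers actually used.
ideal_clicks = {"find eviction resource": 3}
observed = {"find eviction resource": [3, 5, 8, 4]}

# Average observed clicks relative to the ideal path: closer to 1.0x
# means users are interacting with the site more efficiently.
for task, clicks in observed.items():
    avg = sum(clicks) / len(clicks)
    overhead = avg / ideal_clicks[task]
    print(f"{task}: avg {avg:.1f} clicks vs. ideal {ideal_clicks[task]} "
          f"({overhead:.1f}x the ideal path)")
```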
Errors:
How many errors users make on the site, how severe those errors are, and whether they are permanent.
Measure errors by counting the number of times a user navigates to the wrong page, or chooses the wrong page when asked where information can be found. While this is not a ‘fail,’ and users will often go on to complete the task successfully, the number of errors can highlight areas for improvement on your site.
There are several different types of errors to be mindful of:
- Slips: a user mistakenly presses the wrong key. Reduce required data entry to avoid these errors.
- Mistakes: a user enters incorrect information.
- User interface problems: a user navigates to the wrong place to find information.
- Scenario errors: errors in the testing script that would not affect real users.
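Error counts are most useful as a rate per task, so you can compare tasks of different difficulty. This sketch uses invented tallies from a hypothetical five-tester session to show the calculation.

```python
# Hypothetical tallies across 5 testers: how many attempted each task,
# and how many made at least one wrong-page error along the way.
attempts = {"find contact info": 5, "download eviction guide": 5}
errors = {"find contact info": 3, "download eviction guide": 1}

# A high error rate flags a task (and the pages behind it) for redesign,
# even when testers eventually completed the task.
for task in attempts:
    rate = errors[task] / attempts[task]
    print(f"{task}: {errors[task]} of {attempts[task]} testers erred ({rate:.0%})")
```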
Satisfaction:
How pleasant is it to use the design?
Measure satisfaction quantitatively, such as with a survey scale, or qualitatively, such as with open-ended questions that ask how much the user enjoyed the experience.
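Scoring a survey scale is simple arithmetic. The sketch below averages hypothetical 1-to-5 responses to a single satisfaction question; the responses are invented for illustration.

```python
# Hypothetical 1-5 scale responses to "I enjoyed using this site."
responses = [4, 5, 3, 4, 5, 2]

# Report the mean alongside the sample size, since a handful of testers
# is a small sample and the average alone can mislead.
average = sum(responses) / len(responses)
print(f"Average satisfaction: {average:.2f} / 5 (n={len(responses)})")
```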
Please see the Sample Satisfaction Survey.