Usability Benchmarking Competitive Analysis

To determine whether a design change will improve your website's usability, you need to carry out usability benchmarking. Benchmarking weighs the resources and strategies used in the design against the factors that drive the site's usability, and it lets you compare your site's processes and performance with those of your competitors. The results show you which sections to adjust to make your website better.


Select Representatives, Avoid Random Selection

When choosing users to test your site's usability, avoid picking them at random. Instead, recruit a sample that mirrors your audience, for example a set proportion of new versus existing users, or local versus international users. A representative sample is easier to reason about and more reliable than a random one, which normally carries some sampling error and is riskier when it comes to usability.
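One way to implement this is proportional quota sampling: draw from each user segment in proportion to its share of the population. The sketch below is a minimal illustration with hypothetical user records (the `status` field and the 70/30 split are assumptions, not from the original article).

```python
import random

def stratified_sample(users, strata_key, total_n, seed=0):
    """Draw a proportional sample from each stratum instead of sampling fully at random."""
    rng = random.Random(seed)
    strata = {}
    for user in users:
        strata.setdefault(strata_key(user), []).append(user)
    sample = []
    for group in strata.values():
        # Each stratum contributes in proportion to its share of the population.
        k = round(total_n * len(group) / len(users))
        sample.extend(rng.sample(group, min(k, len(group))))
    return sample

# Hypothetical population: 70 existing users, 30 new users.
# A sample of 10 preserves the 70/30 split (7 existing, 3 new).
users = [{"id": i, "status": "existing" if i < 70 else "new"} for i in range(100)]
picked = stratified_sample(users, lambda u: u["status"], 10)
```

The same pattern works for any segmentation, such as local versus international users, by swapping the `strata_key` function.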

Triangulate With Several Metrics

Usability is a blend of effectiveness, efficiency, and satisfaction, so the metrics you apply should contribute to each of the three aspects. Typically this means combining completion rates, time on task, errors, and task-level and test-level satisfaction measures.
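A per-task summary that covers all three aspects might look like the sketch below. The observation records and field names are illustrative assumptions; the SEQ score stands in for task-level satisfaction.

```python
def summarize_task(observations):
    """Combine effectiveness, efficiency, and satisfaction for one task."""
    completed = [o for o in observations if o["completed"]]
    return {
        # Effectiveness: share of users who finished the task.
        "completion_rate": len(completed) / len(observations),
        # Efficiency: mean time among successful attempts only.
        "mean_time_s": sum(o["time_s"] for o in completed) / len(completed),
        # Satisfaction: mean Single Ease Question score (1-7 scale assumed).
        "mean_satisfaction": sum(o["seq"] for o in observations) / len(observations),
    }

# Hypothetical observations for one task.
obs = [
    {"completed": True, "time_s": 40, "seq": 6},
    {"completed": True, "time_s": 60, "seq": 5},
    {"completed": False, "time_s": 90, "seq": 3},
    {"completed": True, "time_s": 50, "seq": 6},
]
summary = summarize_task(obs)
```

Reporting all three numbers side by side, rather than any one alone, is what makes the measurement a triangulation.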

Estimate the Sample Size From a Set Margin of Error

If you carry out a test that needs no comparison, derive the required sample size from how precise the measures need to be. For a quick calculation you can apply the 20/20 rule: to get a 20% margin of error, you need about 20 users. Each time you halve the margin of error, you need roughly four times the sample size.

Therefore, a 10% margin of error needs about 80 users, and a 5% margin of error needs about 320 users. In general, halving the margin of error multiplies the required number of users by four. With a sample of 320 users, the reported average completion rate, task time, or satisfaction score will sit in an interval about 10 percentage points wide, that is, plus or minus 5%.
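The 20/20 rule and its quadrupling behavior can be written as one formula, since sample size scales with the inverse square of the margin of error. This is a rough planning heuristic, not an exact power calculation.

```python
import math

def sample_size(margin_of_error):
    """Rough sample size from the 20/20 rule: 20 users gives about a 20%
    margin of error, and halving the margin quadruples the sample."""
    return math.ceil(20 * (0.20 / margin_of_error) ** 2)

# 20% -> 20 users, 10% -> 80 users, 5% -> 320 users.
```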

Balance the Tasks

Rotate the order in which tasks are presented, which helps reduce unwanted sequence effects. Typically the first one or two tasks show lower performance metrics because users are still familiarizing themselves with the application and the test; as users spend more time on the tasks, their completion rates and times improve. Balance these learning effects by randomizing or counterbalancing the task order.
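A simple way to counterbalance is a rotation schedule in which each task appears in each serial position exactly once across participants. This sketch is one common scheme, not the only valid one; fully balanced Latin squares additionally control which task precedes which.

```python
def rotation_orders(tasks):
    """Rotation schedule: participant i starts at task i, so every task
    appears in every position exactly once across len(tasks) participants."""
    n = len(tasks)
    return [[tasks[(i + j) % n] for j in range(n)] for i in range(n)]

# Hypothetical task labels; participant 1 sees A,B,C,D, participant 2 sees B,C,D,A, etc.
orders = rotation_orders(["A", "B", "C", "D"])
```

Assign each new participant the next row of the schedule, cycling back to the first row after every fourth participant.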

Gather Both Post-Task and Post-Test Satisfaction

Several standardized usability questionnaires offer a more robust estimate of the overall usability impression of a website or application. Post-task questions such as the Single Ease Question (SEQ) are sensitive to longer task times, errors, and usability problems. Task-level satisfaction scores can then be combined with the other task-level metrics to form one usability metric.

Merge the Measures into One Usability Metric

As you record several metrics to evaluate usability, you can normalize them and merge them into a Single Usability Metric (SUM). This simplifies presenting task or system usability on a dashboard, while the component metrics remain available when you want to do a deeper analysis.
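One simple way to normalize is to convert each metric to z-scores and average them per task, negating time so that lower is better. This is a simplified sketch of the idea, not the published SUM procedure, which standardizes against specification limits; the example data is hypothetical.

```python
import statistics

def standardize(values):
    """Convert raw scores to z-scores so different metrics share one scale."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def single_usability_metric(completion, seconds, satisfaction):
    """Average the standardized per-task metrics; time is negated so lower is better."""
    z_complete = standardize(completion)
    z_time = [-z for z in standardize(seconds)]
    z_sat = standardize(satisfaction)
    return [statistics.mean(triple) for triple in zip(z_complete, z_time, z_sat)]

# Hypothetical per-task metrics for three tasks.
scores = single_usability_metric(
    completion=[0.9, 0.5, 0.7],
    seconds=[40, 90, 60],
    satisfaction=[6, 3, 5],
)
```

A task with a high completion rate, short times, and high satisfaction ends up with the highest combined score, while the component metrics stay available for drill-down.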

Apply Confidence Intervals Across the Metrics

The data derived from a sample will diverge from the whole population of users by some amount. This difference is called sampling error and is quantified by the margin of error, which you obtain by calculating confidence intervals for all the measures. A confidence interval gives you the most probable range for the true population average. If you want to check the computation, you can use a Quantitative Starter Package.
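For a completion rate with a small usability sample, a common choice is the adjusted-Wald (Agresti-Coull) interval, sketched below. The 8-of-10 example data is hypothetical, and other interval methods exist.

```python
import math

def adjusted_wald_ci(successes, n, z=1.96):
    """Adjusted-Wald (Agresti-Coull) ~95% confidence interval for a proportion,
    such as a task completion rate."""
    n_adj = n + z ** 2                      # inflate n by z^2
    p_adj = (successes + z ** 2 / 2) / n_adj  # shift the point estimate
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# 8 of 10 users completed the task: the true rate most likely
# lies roughly between 48% and 95%.
low, high = adjusted_wald_ci(8, 10)
```

Reporting the interval alongside the point estimate makes it clear how much a small-sample benchmark can be trusted.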

Carry Out a Pilot Test

You can uncover a number of obvious errors in your website's design or in the test itself before the comprehensive study by having one or two people run through it first. Conducting a pilot test reduces uncertainty in the task wording, improves the quality of your analysis, and prevents embarrassment.

Add Speeder/Cheater Detection for Remote Tests

Almost 10% of usability test-takers and survey participants will rush through your study just to collect the incentive. Detect them with check questions and minimum-time thresholds, and eliminate them from your analysis. A cheater/speeder rate greater than 20% suggests that your tasks are too convoluted or that the test is too long.
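A minimal version of speeder detection is a duration threshold: flag any participant who finishes faster than a plausible minimum reading-plus-doing time. The session records and the 30-second threshold below are illustrative assumptions; in practice you would also use attention-check questions.

```python
def flag_speeders(sessions, min_plausible_s):
    """Return IDs of participants who finished implausibly fast."""
    return [s["id"] for s in sessions if s["duration_s"] < min_plausible_s]

# Hypothetical remote-test sessions.
sessions = [
    {"id": "p1", "duration_s": 12},   # far too fast to have read the tasks
    {"id": "p2", "duration_s": 140},
    {"id": "p3", "duration_s": 95},
]
speeders = flag_speeders(sessions, min_plausible_s=30)
rate = len(speeders) / len(sessions)  # above ~0.20 signals test design problems
```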

Keep The Failed Task Times

When recording task times, do not throw the failed task times away. You can report an average time to failure as well as the average task completion time and overall time on task. These figures let you compare your website's overall performance with that of your competitors during a benchmark analysis, and they serve as reference values for future evaluations of your site's usability.
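Keeping failed times simply means splitting the averages rather than filtering records out. The record layout and timings below are hypothetical.

```python
def time_metrics(records):
    """Report completion time, time to failure, and overall time on task separately."""
    done = [r["time_s"] for r in records if r["completed"]]
    failed = [r["time_s"] for r in records if not r["completed"]]
    return {
        "mean_completion_time_s": sum(done) / len(done) if done else None,
        "mean_time_to_failure_s": sum(failed) / len(failed) if failed else None,
        "mean_time_on_task_s": sum(r["time_s"] for r in records) / len(records),
    }

# Hypothetical task records; the failed attempt is kept, not discarded.
records = [
    {"completed": True, "time_s": 45},
    {"completed": True, "time_s": 55},
    {"completed": False, "time_s": 120},
]
metrics = time_metrics(records)
```

All three numbers can then be stored as benchmark reference values for the next round of testing.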

Always keep the values you obtained from your metrics across the different areas so you can use them in any further analysis you carry out in the future. If you want to make your usability benchmarking analysis a success, these tips will help you achieve the best results for your users' satisfaction without much difficulty.


Jessica Miller
Jessica is the Lead Author & Editor of the UsabilityLab Blog. Jessica writes for the UsabilityLab blog to create a source for news and discussion about the issues, challenges, and ideas relating to usability.