We’ve covered all manner of usability testing approaches in recent times, and I’ve made a point of saying that many of them are pure fluff and trend. Almost all of the approaches that dominate the discussion boards today wind up forgotten tomorrow, and we should usually be grateful for that, because the more convoluted things get in this field, the worse off we are.
It’s no secret that convolution for convolution’s sake is a severe pet peeve of mine, right alongside buzzwords. So when it comes to a usability testing approach, you’d expect me to champion the most practical, down-to-earth, least convoluted one available.
Unfortunately, in this particular case there’s no single standalone approach that isn’t a convoluted mess, and the down-to-earth techniques that do work can’t stand by themselves, because none of them covers all the bases without help.
Sigh.
We’ve been over this before, so some of you may recognize the hybrid model I’m about to describe. This is more for the benefit of new readers than returning enthusiasts, so forgive me for repeating myself.
The common simple approaches boil down to two things: overall user experience testing followed by feedback, and discrete task testing run in a massively parallel form. Alone, neither can truly cover all the bases, and even together they still miss a piece of the puzzle.
What is that missing piece? Benchmarking, friends. Benchmarking, which means testing the performance of a program pushed to its limits on a given device, is incredibly important. It doesn’t matter whether you’re targeting one specific device, a set of models of a specific type, or just about everything: you must benchmark performance, because if the software doesn’t deliver the necessary speed, accuracy, and responsiveness, everything else you test becomes a moot point.
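To make that concrete, here’s a minimal sketch of what I mean, in Python. Everything in it, including the `process_request` stand-in, is hypothetical; the point is simply to hammer one entry point over and over and look at latency percentiles rather than just an average, since the worst cases are what users actually feel.

```python
import statistics
import time

def process_request():
    # Hypothetical placeholder workload; swap in a real call into your program.
    sum(i * i for i in range(10_000))

def benchmark(fn, iterations=1_000):
    """Push fn repeatedly and record per-call latency in milliseconds."""
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn()
        latencies.append((time.perf_counter() - start) * 1_000)
    latencies.sort()
    return {
        "median_ms": statistics.median(latencies),
        "p95_ms": latencies[int(len(latencies) * 0.95)],
        "worst_ms": latencies[-1],
    }

if __name__ == "__main__":
    print(benchmark(process_request))
```

Run something like this on each device class you actually target, not just your development machine, and the numbers will tell you quickly whether the rest of the testing is even worth doing.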
Alongside that, the other two pieces still count. First, have in-house testers perform various tasks over and over again, examining the solidity and consistency of the behavior as well as the integrity of the controls.
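Humans do the heavy lifting here, but you can back them up by automating the repetition itself. A rough, hypothetical sketch of that idea: run the same task many times and flag any run whose result deviates from the baseline, which is exactly the kind of inconsistency those testers are hunting for. `perform_task` here is a stand-in for a scripted version of one such task.

```python
def perform_task():
    # Hypothetical placeholder; replace with a real, deterministic task
    # exercised against your application.
    return sorted([3, 1, 2])

def check_consistency(task, runs=100):
    """Repeat one task and report any run that diverges from the first."""
    baseline = task()
    failures = [i for i in range(1, runs) if task() != baseline]
    if failures:
        print(f"Inconsistent behavior on runs: {failures}")
    else:
        print(f"All {runs} runs matched the baseline.")

if __name__ == "__main__":
    check_consistency(perform_task)
```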
Once that’s done, you need the human factor that clinical testing can’t measure or account for. This is where you bring real users in, give them less tedious, more realistic tasks to perform, have them answer surveys about their experience, and encourage them to speak up about what they liked and which of their needs went unmet.
The trick there is to word your questions wisely and to incentivize the testing so that it’s worth the users’ time. So: repeated benchmarking, then integrity testing, then focus groups. That model is the best way to go.
Forget all the convoluted nonsense and overblown terminology and just roll with this; you’ll be better off for it. Sure, you can use a complicated usability testing approach instead, but I’d strongly recommend against it, because the complexity it implies does nothing to help and is, frankly, mostly pretentious.