Measuring and Improving Learnability

Well, I know learnability is important, and I know how much we in business value measuring every single aspect of everything closely. I understand why we’re such sticklers for measurement and documentation, too. Usually, the lack of either is positively disastrous.

But the truth of the matter is, if there’s a real, accurate and practical way to measure learnability, I’ll be damned if I or anyone else knows what it is. Why is that, I hear you asking?

Why You Can’t Exactly Measure This:

Beyond surveying test subjects in a lengthy, resource-draining exercise, you can’t capture something like this in a quantitative metric. It’s soft information pertaining to people, and people are not measurable in this sense whatsoever.

So, this leaves you with just percentages of demographics who report they found this or that obvious, or this or that confusing and problematic. This data is useful, but there’s a major limit to how useful it really is.
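Concretely, that kind of brute-force survey result usually reduces to something like the following. This is just a minimal sketch; the survey fields and demographic labels here are hypothetical, not from any real study:

```python
from collections import defaultdict

# Hypothetical survey responses: each respondent reports a demographic
# label and whether they found the feature confusing.
responses = [
    {"demographic": "new hires", "found_confusing": True},
    {"demographic": "new hires", "found_confusing": True},
    {"demographic": "new hires", "found_confusing": False},
    {"demographic": "veterans", "found_confusing": False},
    {"demographic": "veterans", "found_confusing": True},
]

def confusion_rates(responses):
    """Percentage of each demographic reporting confusion."""
    totals = defaultdict(int)
    confused = defaultdict(int)
    for r in responses:
        totals[r["demographic"]] += 1
        if r["found_confusing"]:
            confused[r["demographic"]] += 1
    return {d: 100.0 * confused[d] / totals[d] for d in totals}

print(confusion_rates(responses))
```

And that is all a survey gives you: a percentage per group, with no insight into why each group struggled.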

A Silver Lining:

New technology is in the works that can overcome the horrors involved in any metric or analytic pertaining to people. As businesses, people are our ultimate business, and that’s a bit unfortunate: most of us know full well that people make everything difficult just by being people.

Onboarding systems like WalkMe, which are designed to teach complex tasks one step at a time (safely, by preventing mistakes, spotting patterns, and prompting users along the way), have analytics tools which can outline, in some fashion, how quickly users take to various things, how much trouble they had, and how many mistakes they made in the process.
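To make that concrete, here is a minimal sketch of the kind of computation such analytics imply. The event-log format, task names, and fields below are my own invention for illustration, not WalkMe’s actual API:

```python
# Hypothetical per-user event log: each event is (task, seconds_taken, was_error).
# These names are illustrative only; real onboarding analytics will differ.
events = [
    ("export_report", 42.0, False),
    ("export_report", 90.0, True),
    ("export_report", 35.0, False),
    ("change_settings", 12.0, False),
]

def task_summary(events):
    """Average completion time and error rate per task."""
    summary = {}
    for task, seconds, was_error in events:
        stats = summary.setdefault(task, {"times": [], "errors": 0, "attempts": 0})
        stats["times"].append(seconds)
        stats["attempts"] += 1
        stats["errors"] += was_error
    return {
        task: {
            "avg_seconds": sum(s["times"]) / len(s["times"]),
            "error_rate": s["errors"] / s["attempts"],
        }
        for task, s in summary.items()
    }

print(task_summary(events))
```

Falling average times and error rates over successive sessions are about as close to a learnability curve as current tooling gets.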

But these analytics need to mature, not on WalkMe’s end, but on the UX end. Once we start getting real, useful data out of fluffed-up human nonsense … what are we going to do with it? Do we have any mutual standards for what this data really means?

I don’t know, I was asking you.

This technology promises a time when you can measure learnability properly. But you can’t measure it well right now. All you can really do is that kind of brute-force testing, and take measures to promote learnability directly.

Promoting It:

Well, there’s a limit to what I can say here too, because this is only now beginning to be taken seriously. I’m not going to pull random suggestions out of my backside for you with no grounding or testing to support their viability. I do have my theories, but they’re just that – theories.

However, what we have all mostly discovered is that the best way to promote learnability is to design the system so that mistakes, out-of-order actions, and so forth aren’t fatal and discouraging.

Any messed-up settings need a design behind them that makes returning to defaults easy, because the key to this whole phenomenon is encouraging experimentation without severe consequences.
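As a sketch of what “easy return to defaults” means in practice – the class and setting names here are hypothetical, not drawn from any particular product:

```python
class Settings:
    """User-editable settings that can always be reset to safe defaults."""

    DEFAULTS = {"theme": "light", "autosave": True, "font_size": 12}

    def __init__(self):
        self._values = dict(self.DEFAULTS)

    def set(self, key, value):
        if key not in self.DEFAULTS:
            raise KeyError(f"unknown setting: {key}")
        self._values[key] = value

    def get(self, key):
        return self._values[key]

    def reset_to_defaults(self):
        # One obvious, always-available escape hatch: nothing the user
        # experiments with can leave them stranded.
        self._values = dict(self.DEFAULTS)

settings = Settings()
settings.set("font_size", 30)      # experiment freely...
settings.reset_to_defaults()       # ...and recover in one step
print(settings.get("font_size"))
```

The design choice is the point: as long as one guaranteed reset path exists, every other setting becomes safe to poke at.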

I, for example, learned most of the Adobe and Microsoft software I use entirely by poking at it until it did things. I made mistakes, I broke things … but the systems were designed in such a way that all that came of that was having to start over on whatever file I was experimenting with, and no fatal errors came about. That’s the key to improving and promoting learnability.


Jessica Miller
Jessica is the Lead Author & Editor of UsabilityLab Blog. Jessica writes for the UsabilityLab blog to create a source for news and discussion about some of the issues, challenges, news, and ideas relating to usability.