With computers and other digital services now ubiquitous in almost everyone's life, finding a balance between usability and accessibility has at last become a serious challenge. Accessibility, meaning the design of features that make interaction easier for people with physical or mental disabilities or other special needs, was once a smaller hurdle than it is today.
Until recently, computers and other digital systems, while very much a part of life, were not as deeply embedded as they have since become, so accessibility features did not need to be as all-encompassing as they do today. In addition, one of the bigger accessibility issues, poor eyesight (blindness remains a specialized accessibility concern even now), was less pressing because small-screen devices were not yet prevalent.
But in this second decade of the twenty-first century, the computer is no longer defined the way it once was. The desktop PC is still here, and here's hoping it is never replaced (doing serious work and research on tablets and mobiles remains an awful experience). Computers now come in all shapes and sizes, however, including those very tablets and mobiles. So that balance between usability and accessibility is now a pressing issue.
These devices often have awkward input methods. Touch technology is one example: some people have become entirely at home with it, but they are a minority. This is not just a problem for the disabled; it is a human accessibility problem in general. Humans need tactile feedback, meaning physical buttons and the like, and tapping at a piece of glass does not feel natural.
These devices also offer limited screen real estate. Most tablets are seven-inch devices (larger screens cost a king's ransom), and mobile screens average about four inches or less. As a result, interfaces not designed specifically for mobile interaction are very hard to use.
So finding that balance, especially for web interaction, has proven difficult. Voice recognition is being heavily pursued as an answer to how awkward touch-screen typing is, but it still has a long way to go.
At this point, the best example of such a balance is Google's mobile version of Chrome. When links or other web components sit close together in a zoomed-out page, something interesting happens: a popup magnifying the region appears, letting the user tap the exact component they were after. The magnifier then disappears, and the browser readily follows the chosen path.
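The core idea behind this kind of tap disambiguation can be sketched in a few lines. The following is a minimal illustration, not Chrome's actual implementation: the function names and the 20-pixel "fat finger" radius are assumptions for the example. If exactly one tappable target falls within the radius of a tap, the browser can navigate directly; if several do, it would instead show the magnified popup.

```typescript
// Axis-aligned bounding box of a tappable target (a link, button, etc.).
interface Rect { x: number; y: number; w: number; h: number; }

// Distance from a point to the nearest edge of a rectangle (0 if inside).
function distanceToRect(px: number, py: number, r: Rect): number {
  const dx = Math.max(r.x - px, 0, px - (r.x + r.w));
  const dy = Math.max(r.y - py, 0, py - (r.y + r.h));
  return Math.hypot(dx, dy);
}

// Indices of all targets within the ambiguity radius of the tap point.
// One hit: navigate directly. Several hits: show a magnified popup so
// the user can pick the target they actually meant.
function ambiguousTargets(
  px: number, py: number, targets: Rect[], radius = 20
): number[] {
  return targets
    .map((r, i) => ({ i, d: distanceToRect(px, py, r) }))
    .filter(t => t.d <= radius)
    .map(t => t.i);
}

// Two links rendered close together on a zoomed-out page:
const links: Rect[] = [
  { x: 10, y: 10, w: 40, h: 12 },
  { x: 10, y: 26, w: 40, h: 12 },
];

const hits = ambiguousTargets(30, 24, links);
console.log(hits.length > 1 ? "magnify region" : "navigate directly");
```

The point of the sketch is the decision rule: the assistance (the magnifier) only appears when the tap is genuinely ambiguous, and stays out of the way the rest of the time.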
And this is the mentality for balancing accessibility and usability that is likely to define the future: automation will interleave assistance systems into the design, popping them up when they detect they are needed. Once they have served their purpose, they will get out of the way and let normal functionality resume. This differs from the current approach to accessibility, which redefines the methods and modes of the interface entirely. That means accessibility has to be explicitly enabled (which may itself be difficult), since forcing it in as a standard component breaks the design for users who do not need the alteration. No perfect balance has been found yet, but these are, as you can see, good signs that new ways of thinking may soon bring us closer to one.