For the past 10 years Glenda Sims has specialised in the discipline of web accessibility – an area, she acknowledges, that has not always been well regarded among developers. Speaking to Computerworld, she says it is often regarded as just another legal requirement, imposed on them by the Americans with Disabilities Act and similar provisions in other countries.
Sims, who presented at the recent Webstock conference in Wellington, believes her personal motto, “web for everyone, web on everything”, has worked its way into general consciousness as the repertoire of browsers and devices interfacing with the web has mushroomed.
Designing for as many devices and environments as possible was a constant thread of discussion at Webstock 2011.
If developers adopt design principles that make it easy to adapt their product to a range of devices, one or more of those devices can easily be a piece of assistive technology, such as a screen reader that reads a page aloud to a blind person, she says.
A phone that doesn’t understand the Flash video format is an example of a machine ‘disability’, which nowadays has to be accommodated as much as a human disability, she believes.
The appropriate starting point is to apply the principles of “universal design” from the beginning of the design and development exercise, Sims says. These principles seek to design items that can provide equal access for everyone, with as little need for specialised adaptation as possible.
Sims has great respect for some companies, such as Apple, which has put a good deal of thought into such design.
“Imagine a blind person using an iPhone, you would think it would be hard,” with no tactile cues from physical buttons, she says. “They fixed it to accept gestures. Where you or I would tap the buttons on the screen, a blind person uses a gesture that tells the iPhone ‘read to me what’s on the screen and I’ll pick what I want by swiping two or three fingers’. So often with that sort of design, we are not only opening use of the technology to someone with a disability, but to all of us.
“Take Google. It is blind and it is deaf.” Programming an inanimate collection of machines to achieve a kind of understanding of web content, in the absence of eyes and ears, has produced a facility that benefits us all.
If we force a human to be the intermediary all the time, composing, for instance, a verbal description of what’s in a picture, then we will never succeed in adequately indexing the growing world of the web or of real objects, she says; it would simply take too much time. Automated detection of visual features or interpretation of the spoken word can drastically reduce the effort.
“As we teach our computers to see and hear for us to help people with disabilities, we make our knowledge more accessible to all humans at any time.”
There is, she agrees, currently a limit to the ability of software to facilitate or even assess accessibility. A written set of standards can only go so far; the best way of assessing the accessibility of a site is to have people with disabilities use it.
But the capability of automated testing tools is increasing, she says. “When I was at the University of Texas, in about 2005, they told me 27 percent of accessibility could be tested automatically and the rest would have to be done manually.”
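Some accessibility checks lend themselves to automation in exactly this way. A classic example is detecting images that lack the text alternative a screen reader depends on. The following is a minimal sketch in Python, using the standard library’s `html.parser`; it is an illustration of the kind of check an automated tool performs, not Deque’s actual tooling.

```python
from html.parser import HTMLParser


class AltTextChecker(HTMLParser):
    """Collects <img> tags that have no alt attribute at all."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_dict = dict(attrs)
            # Accessibility guidelines require an alt attribute on images;
            # an empty alt="" is allowed only for purely decorative images,
            # so we flag only images where the attribute is absent entirely.
            if "alt" not in attr_dict:
                self.missing_alt.append(attr_dict.get("src", "(no src)"))


def find_images_missing_alt(html: str) -> list:
    """Return the src of every <img> in the document lacking an alt attribute."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing_alt


page = """
<html><body>
  <img src="logo.png" alt="Company logo">
  <img src="chart.png">
</body></html>
"""
print(find_images_missing_alt(page))  # ['chart.png']
```

Mechanical checks like this are why the automatable share keeps growing; what no parser can decide, and what still requires a human (ideally one who uses assistive technology), is whether the alt text that is present actually describes the image usefully.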
That percentage has been steadily increasing, she says.
The private company she joined this year, Deque (pronounced “DQ”), specialises in automated accessibility testing on a large scale.