
will new technology change the definition of “a cure for blindness” in 2016?

December 21, 2015

This is an update to a previous post, where I grandly proclaimed that medical science is more likely to fix digital accessibility than implementation of the Web Content Accessibility Guidelines.

I still believe this is true, but there is now a new dimension, which I explain below. My original idea rested on two points: the biology is a less complex problem to solve than the socio-economic problems from which most digital accessibility barriers arise in the first place, and pharma companies can make money out of medical treatments that repair lost eyesight. Together these mean the bio-solution has a clearer business case for universally removing digital accessibility barriers than 'what you should think and do' guidelines that attempt to change the way people behave in their day-to-day communication practices.

For now, the status quo remains, because markets arise out of other markets that aren't working properly. As one example, the Jaws screen reader is a third-party product that grew out of Microsoft's failure to build an accessible Windows platform and application ecosystem, so a market for a Jaws-type product came into play. Jaws is a well-developed product, but let's face it: if you had the choice of paying £10,000 to fix your eyes or £1,000 to buy Jaws, you would pay to fix your eyes, even if you weren't allowed to use them for anything other than interacting with your computer. I say this because seeing the screen and using the mouse pointer normally is more than ten times better than managing with Jaws, which does not provide equal access to the Windows environment. For now, there is no 'fix your eyes' product for most blind people out there, yet.

So while disabled people remain disabled by digital barriers, and the problems persist unsolved, there remains a strong business case for the whole accessibility industry: advocates, consultants, testers and validators, trainers, product designers, developers and so on.

The reason I am posting today is to add another dimension to the above idea, one that changes the calculation I am trying to make by changing the paradigm. Access technologies like screen readers are just one type of access technology, bound up with, and ultimately restricted by, the very platforms they are trying to make accessible. There are other kinds of access technology, already developed or in development, that circumvent or remove the need to go onto websites and use screen readers in the first place.

You might remember that a while ago I suggested my smartphone is almost becoming a talking magnifying glass in my hand: wave it over visual information, or just wave it in the air, and it converts what I am trying to see (whether printed text or my physical surroundings) into auditory information I can interact with. I say almost becoming, because these amazing developments are really glitchy, horribly fiddly to use in the real world, and they don't come close to the instant information grab a sighted person gets with their eyes. Like a sighted person, I do not have enough time in the day to properly manage all the pressures of work, family, health and fitness, finances, social life and so on, and I definitely don't have spare time to take ten times longer to read a note or check my surroundings because the technology requires me to fight with a touch-screen interface and strain to hear and interpret the voice feedback.
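To make the "talking magnifying glass" idea concrete, here is a minimal sketch of that kind of pipeline, assuming the pytesseract and pyttsx3 Python packages are installed; the camera index and overall flow are my own illustration, not any particular app's implementation:

```python
# Sketch only: grab one camera frame, OCR any printed text, read it aloud.
import cv2
import pytesseract
import pyttsx3

camera = cv2.VideoCapture(0)  # 0 = default camera; device index is an assumption
ok, frame = camera.read()
camera.release()
if ok:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV gives BGR; OCR expects RGB
    text = pytesseract.image_to_string(rgb)       # recognise any printed text
    if text.strip():
        engine = pyttsx3.init()
        engine.say(text)                          # speak whatever was recognised
        engine.runAndWait()
    else:
        print("No readable text found.")
```

Every real product doing this adds the robustness this toy lacks, and that robustness, or the lack of it, is exactly the fiddliness I am complaining about.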

Anyway, one point I am trying to make here is that a new form of "seeing" may be coming about: instead of using one's eye and retina, the same effect as seeing is achieved using equivalent information of the sort easily accessed by a smartphone, specifically geolocation and camera image processing matched against known things.
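As a rough illustration of the "matching with known things" half of that, here is a sketch using OpenCV's ORB features to test whether a camera frame contains a known object; the file name, thresholds and helper function are hypothetical, chosen only to show the shape of the idea:

```python
# Sketch only: does the camera frame appear to contain a known reference object?
import cv2

def looks_like(frame_gray, reference_gray, min_matches=25):
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(reference_gray, None)
    kp2, des2 = orb.detectAndCompute(frame_gray, None)
    if des1 is None or des2 is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    good = [m for m in matches if m.distance < 40]  # keep close descriptor matches
    return len(good) >= min_matches

# Hypothetical usage: compare what the camera sees against a stored photo
# of, say, the local bus stop sign.
reference = cv2.imread("bus_stop_sign.png", cv2.IMREAD_GRAYSCALE)
camera = cv2.VideoCapture(0)
ok, frame = camera.read()
camera.release()
if ok and looks_like(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), reference):
    print("That looks like the bus stop sign.")
```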

How that information is then presented to you, in a way you can control, is the key thing. A smartphone and its apps are often not the most appropriate channel, because that mode of interaction does not match the situation you are in during the activity. Hence Google Glass does fit the bill for the above types of activity. Yes, it hasn't found its place yet, but I am more and more touching the ordinary glasses I wear in a kind of hopeful way that prepares me to mutter to them: "is there a bus stop near me and which way should I turn?", "give me a bleep when the barman is looking in my direction", and "I need to get across this busy shared space; please give me walking directions to the best pedestrian crossing and bleep when I'm looking directly at it, using the image feed combined with the GPS data".
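That last request is mostly just geometry. A minimal sketch, assuming GPS coordinates for me and for the crossing plus a compass heading from the headset (all the readings below are made-up values):

```python
# Sketch only: bleep when the wearer's heading points at a known crossing.
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing in degrees from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def facing_target(heading, target_bearing, tolerance=10):
    """True when the heading is within `tolerance` degrees of the target bearing."""
    diff = abs((heading - target_bearing + 180) % 360 - 180)
    return diff <= tolerance

me = (51.5074, -0.1278)        # hypothetical GPS fix
crossing = (51.5080, -0.1270)  # hypothetical crossing location
heading = 42.0                 # hypothetical compass reading
if facing_target(heading, bearing_to(*me, *crossing)):
    print("bleep")             # stand-in for an audio cue
```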

At present, none of the apps I have on my phone can directly supply these common daily needs, but a Google Glass type device just might.

So a new question opens up: will a Google Glass type product come about sooner than the medics can invent ways to repair wrecked retinas…

Will 2016 prove a significant year for answering this question I wonder?
