
Do Androids dream of accessible phones?

Ok, it’s happened: my Nokia C5 has finally exited my pocket and the slightly larger, heavier Moto-E has replaced it. My transition has been long and slow, and this, I think, reflects the neurological adaptation that has had to take place in my brain. Simple as that.

Pros: with the Moto-E held in an ergonomic position in my left hand and my right fingertip sliding around the slippy screen, with “experimental single tap” mode on, and with my frustration shield up (i.e. I don’t expect TalkBack to work smoothly), this phone works: for ringing people, reading my SMS threads, sending texts, email and Gmail, messy and jittery web browsing, and experimenting with apps.

For a person who can cope with a phone that won’t answer calls roughly 20% of the time, because the slide-right-to-answer gesture doesn’t always work (double-finger tap on my previous iPhone didn’t either, by the way), and who doesn’t mind having to physically stop walking on the street in order to do anything with the phone, whether reading, messaging, using orientation apps or searching for an address, the sorts of things a handheld device is most useful for when out and about, this phone hits the yes button! In other words, it’s a mobile device provided you aren’t actually walking and trying to operate it at the same time.

Cons: unlike my Nokia C5, this type of touch screen phone cannot usefully facilitate answering calls, reading and writing texts, reading web pages, or instantly getting and refreshing TfL live bus departure boards when I’m actually walking along with my white cane and navigating the pavement. Strange as it might sound, I got used to speeding out of my office and down to the bus stop, pulling up and checking live bus departure boards whilst striding along, within safety margins. This is one of the real downsides for me with the whole slippy touch screen user interface, and it applies to all devices of this type, not just Android.

I’m not going to write loads in this posting, but I can’t finish without commenting on the Apple v Google comparison, which is very much a topic of conversation these days. On this I have to conclude, from three intensive weeks living with this Moto-E as my only phone, that what Google have achieved so far for an inclusive society is this:
–Google have definitely opened up this Moto-E, and I guess all other Android smartphones on the market (at a higher price), so I can at least get to play with it. But this isn’t enough; I need to live with my smartphone, not just experiment and play with it. Google’s attention to detail in delivering TalkBack as a smooth, efficient and positive user interaction is demonstrably error-prone and really poor in places, and it generally feels like a prototype, not a finished product.

Final thought: although for some reason I like the idea of being an Android user, oddly, the effect of my experience with Android and TalkBack is that it has whetted my appetite for what a smartphone that does enable a smoother and more successful mode of interaction could offer me, so I might upgrade to an Apple device sooner rather than later.

BBC iPlayer – harmony – at last!!

Last night, for some reason, the new version of BBC iPlayer suddenly appeared on the Apps screen on my Nexus 7. The device secretly updates itself all the time so I wasn’t that surprised.

Did I tap the icon with a sense of a world of wonderful BBC radio about to grace my ears? No, because up until now even the mention of these apps raised my blood pressure into the red, and I avoided tapping them for my own sanity. I’m not joking.

Anyway, six months of non-BBC interaction via my Nexus 7 gave way to a momentary impulse to tap the icon and just see if anything had improved…

Aha! Can I believe it??? I’ve got to believe it! Two minutes later I was listening to Costing the Earth on listen again, and I actually felt like I had positively interacted with the controls. I assembled in my head a mental map of the entire visual layout and got enough positive reinforcement from my finger movements hitting the objects I expected to hit that, finally, a wonderful harmony rang out between my mental map, the device and the BBC iPlayer.

Harmony – it’s the user experience that tells me the interface is working – now I want more!

My fingertips are buzzing already with this “Fingertip screen reader” but why invent this when eBooks are TTS enabled anyway?

This so-called “ring that scans and reads text” might prove to be a lovely way for me and others to return to reading paper books again, but I do wonder why the press release ignores the dawning of the text-to-speech enabled eBook. It does the same thing without having to concentrate on guiding a little camera along a line of text and keeping it straight with buzzing fingertips!

For me, being able to read all those bits of paper given to me by public services, the NHS and commercial services would be far more valuable and liberating.

Ring reads aloud info page

Lounge computing: just add leather and metal to enhance user experience even if the devices still behave like teenagers

I had no idea just how much a pair of leather and metal headphones can radically enhance user experience!

I spend 8+ hours per day inside headphones and I realised my ears needed a sofa, not a plastic chair! I can honestly say this pair of Bowers & Wilkins P5s is really working for me.

If my computing life can be likened to a lounge, it just feels easier to relax now, nestled in my leather ear-sofa; I care less about the moody devices that I cannot control!

p.s. if anyone needs reminding, I listen to screen readers talking at me all day; I don’t get much time to listen to music. This posting is about comfort, long-term wearability and how this can modify one’s experience of using devices eyes-free.

Getting lost in MS Outlook 2010 is easy

Just spent 15 minutes trying to turn off attachments preview in MS Outlook 2010.

My sighted office colleague and I struggled to make sense of the way the ribbon and menus worked. I have never come across such a seemingly crazy setup as this before!

I found I needed to use every keyboard navigation trick I’ve ever learned – tree expanding and collapsing, descending list, horizontal list, tab panel, combo drop-down, and I’m sure there was even a combo pop-up explode collapse disappear thing in there too – a real mish-mash. I just could not form a mental map of what I was doing at all, and without one I was geographically lost – it’s official!

It’s made me think about “audible architecture”, something I am familiar with from working with urban street design people, but does it exist in software interface design?

A web search reveals all sorts of software architects and I guess someone did “design” the way the Outlook 2010 menus work, but it sure doesn’t strike me that there is any audible design logic in there, or if there is, it doesn’t make sense to me!

If the designers have invented, or ended up with, yet another audible contortion tangle around a visually optimised design approach, then I will happily direct them to the nearest purveyor of straitjackets and invite them to try living their life with one on, and see if they think that is inclusion…

Different gestures, different standards – I need a Babel glove – my brain is hurting

In front of me now is a Windows laptop, an iPhone, a wireless Apple keyboard, a Nokia phone and a Nexus 7 tablet.

Way too much of my brain power is being spent on switching between the different ways each device wants to be controlled and not actually on achieving the things I want to achieve with them.

For example, I’m writing one email on my Nexus and working on a document on my iPhone. Swiping up and down on each device does something radically different. So does swiping right and left. On the iPhone, to toggle reading mode I have to use a two-finger rotor action, but on the Nexus it’s a single-finger vertical up-and-down swipe. To scroll pages on the iPhone it’s a three-finger drag and on the Nexus it’s a two-finger drag, or is it? I’ve forgotten. There are also loads of weird L-shaped gestures on the Nexus which do pre-set things, but I can never remember them long enough to use them. In short, this is a finger-tangling, brain-bending nightmare.

The only gestures I use on my laptop are hitting it occasionally when it freezes, that and really hard typing when I’m frustrated. Sometimes I wish the keys were pressure-sensitive and it would do capital letters when I gave the keys some welly, which would make shouty emails even more fun to write, but I’ve not heard of this on any system spec out there.

On top of this there are different keyboard layouts especially for symbols on both hardware and virtual keyboards, totally different folder structures, control panels, settings, etc. etc.

And for us non-screen-based computer users, combine the above with literally hundreds of keyboard shortcuts for using the JAWS screen reader across different applications; VoiceOver is different again, and so is TalkBack, and so is Talks on the Nokia.

Errrr I’m sure all the commercial companies competing out there will be fine about standardising their interfaces with each other…

Moral of this story? Just use one device, or get a bigger brain, or maybe someone will come up with a Babel glove (like a Babel fish, but for the hands)! What’s a Babel fish, then?
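
For fun, here is a minimal sketch of what the lookup table inside such a Babel glove might hold, using only the gestures I listed above (and staying as hazy as my memory is about the Nexus scroll). The table, the platform names and the babel_glove function are purely illustrative; none of this is a real API.

# A purely illustrative "Babel glove" lookup: one abstract intention mapped to
# the gesture each screen reader expects, taken from the gestures listed above.
GESTURE_MAP = {
    "toggle reading mode": {
        "VoiceOver (iPhone)": "two-finger rotor twist",
        "TalkBack (Nexus 7)": "single-finger vertical swipe",
    },
    "scroll a page": {
        "VoiceOver (iPhone)": "three-finger drag",
        "TalkBack (Nexus 7)": "two-finger drag (probably)",
    },
}

def babel_glove(action: str, platform: str) -> str:
    """Translate one intention into the gesture this particular device wants."""
    return GESTURE_MAP.get(action, {}).get(platform, "unknown - relearn it, again")

# Example: what do I do to toggle reading mode on the tablet in my other hand?
print(babel_glove("toggle reading mode", "TalkBack (Nexus 7)"))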

Discovery: eyes-free interaction with GUI is clunky and slow but speeds up with touch

Two months hard labour with an iPhone and I’m through the pain barrier!

The main gain is faster interaction, because the interface enables me to use my geographic memory skills to a far higher level than a keyboard and tabbed interface lets me.

I think of these touch screen talking interfaces as three-dimensional (X, Y and time).

The pain I went through was caused by the step change in manual dexterity required of my hands and fingers, which had to become automatic before the actual process of operating the interface, and linking it all up with my geographic memory skills, could become transparent.

Going through this has made me realise how different the time dimension is when one is using an interface eyes-free compared with using it by eye. This will be obvious to anyone reading this, but what I think could be a new angle is the contrast between the cognitive processes going on in a blind user and those going on in a sighted user at any one moment in time.

This matters for people performing user testing, for example, but whether a comparison of cognitive processes is made during these side-by-side tests of different user groups, I don’t know.

Take, for example, that first moment of picking up an iPhone, from two different user perspectives: user A (a VoiceOver user) and user B (an average screen user). Let’s assume neither user has a memory of what the screen contains at this stage.

User A explores by touch and hears VoiceOver reading out icon labels wherever the finger contacts the screen: App Store, Clock, Game Centre if touching roughly in the centre and sliding the finger up and to the right, or Messages, Calendar, Photos if starting from the top left and swiping to move in a standard reading order. It takes a while to read over every icon displayed, that is for sure! But there is a positive consequence of this negative: one prioritises the mental processes necessary to short-cut these delays, and the advantage of the X-Y layout is that there is ample potential for using memory skills to speed things up.

The cognitive process by which user A generates a mental model of the screen geography is akin to how a sighted person discovers a picture as they reveal it on a scratch card, or builds up an idea of a landscape by scanning across it with a telescope, combining what they have seen with the movement of the telescope until they feel they have built up the image, which they never saw “in a oner”.

User B, by contrast, sees the entire screen at once. In the same time it takes user A to hear three icons spoken, user B recognises the overall geography of the display. But this isn’t necessarily an advantage! In the next moment they might be drawn to certain graphical designs over others, or they may be trying to ignore the little graphics (some of which are quite random, like the Messages icon, a green box with a cloud thing in it) and instead focus on reading the text associated with each icon. Unfortunately for them, they will see the graphics as well as the text, so their cognitive load will react to this whether they like it or not; whether anyone measures the impact of icons as positives or negatives is not known to me as I write this. I’m describing a completely different negative here, and perhaps user B is more likely to prioritise developing mental strategies that filter out unnecessary visual detail, so they can speed up their interaction or reduce the cognitive load.

Now, I know both of the above examples are picked from many possible reactions to the home screen, but I hope they stand to reason and illustrate how the cognitive process for user B is loaded distinctly differently compared with user A’s.

To conclude this post: the more I have used the iPhone, the more I have gained a physical, geographic memory of where items are located, and the faster I am able to _go straight_ to the item I want in any given moment.

This has resulted in much faster eyes-free interaction with a touch screen computer compared with operating a GUI-based computer by keyboard only, because I no longer need to tab through a list of options strung out in time, suffering the decisions the designer made about what the tab order should be.

My geographic memory abilities are therefore brought into play by the X Y touch screen interface approach.
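
To make that contrast concrete, here is a rough back-of-envelope sketch. The timings are assumptions invented purely for illustration, not measurements: tabbing costs time in proportion to how far down the designer’s tab order the target sits, whereas a memorised X-Y position costs roughly the same however crowded the screen is.

SECONDS_PER_SPOKEN_LABEL = 1.5   # assumed: time to hear one label while tabbing
SECONDS_PER_DIRECT_REACH = 2.0   # assumed: time to reach and confirm a memorised spot

def tab_navigation_time(position_in_tab_order: int) -> float:
    """Linear cost: every item before the target gets spoken on the way there."""
    return position_in_tab_order * SECONDS_PER_SPOKEN_LABEL

def memorised_touch_time() -> float:
    """Roughly constant cost once the screen geography is in memory."""
    return SECONDS_PER_DIRECT_REACH

for target in (3, 12, 25):
    print(f"item #{target}: tabbing ~{tab_navigation_time(target):.1f}s, "
          f"direct touch ~{memorised_touch_time():.1f}s")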

I think a side effect of being able to use my geographic memory skills is a regular burst of satisfaction every time I hit, or get near, the item I was aiming for. This happens constantly and could explain why blind people evangelise the iPhone so much once they get past the pain barrier of adapting to it.

As a final note, I know everyone has a pain barrier to get through, and I have watched my 69-year-old mother-in-law, who has no sight difficulties, struggling to get the hang of an iPhone. But I think it is much harder to use an iPhone in the finger-constantly-stuck-to-the-screen way that blind people have to use (this eases as memory of the layout increases, though it’s chicken and egg) compared with sighted people, who survey with the eye and use eye-finger coordination. The latter is very likely to be an already highly practised skill for anyone who writes with a pen or presses little buttons, so this is another thing that blind people won’t have ready to deploy and will need to develop, which could explain how hard the pain barrier is.

Oh, and for anyone who read my posting last year,
“I like my Nokia Smartphone with all its buttons but should I stay or should I go iPhone?”,
I have stayed with my Nokia C5 because it is so much more effective as a mobile phone and on-the-move device than the slippery slab of talking glass (which is really how I perceive my iPhone): I can feel when my finger is on the call button on my Nokia without having to listen to the device telling me that my finger is on the call button! In a noisy environment, and when I only have one hand free to use the device, this really matters.

I am, however, using my iPhone combined with an Apple keyboard as my new laptop setup… iPhone on the right so I can look around the screen with my finger, and keyboard in the middle for when I have to type.

*I have also discovered lots of interesting special key functions on the keyboard, which I could not find documented anywhere, that let me click icons and fields without having to touch the screen…
