Last night, for some reason, the new version of BBC iPlayer suddenly appeared on the Apps screen of my Nexus 7. The device secretly updates itself all the time, so I wasn't that surprised.
Did I tap the icon with a sense of a world of wonderful BBC radio about to grace my ears? No, because up until now even the mention of these apps raised my blood pressure into the red, and I avoided tapping them for my own sanity, and I'm not joking.
Anyway, six months of non-BBC interaction via my Nexus 7 gave way to a momentary impulse to tap the icon and just see if anything had improved…
Aha! Can I believe it??? I've got to believe it! Two minutes later I was listening to Costing the Earth on listen again, and I actually felt like I had positively interacted with the controls. I built up a mental map of the entire visual layout in my head, and got enough positive reinforcement from my finger movements hitting the objects I expected to hit, that finally a wonderful harmony rang out between my mental map, the device and the BBC iPlayer.
Harmony – it’s the user experience that tells me the interface is working – now i want more!
This so-called "ring that scans and reads text" might prove to be a lovely way for me and others to return to reading paper books, but I do wonder why the press release ignores the dawning of the text-to-speech enabled eBook? It does the same thing without having to concentrate on guiding a little camera along a line of text and keeping it straight with buzzing fingertips!
For me, being able to read all those bits of paper given to me by public services, the NHS and commercial services would be far more valuable and liberating.
I had no idea just how much a pair of leather and metal headphones can radically enhance user experience!
I spend 8+ hours per day inside headphones, and I realised my ears needed a sofa, not a plastic chair! I can honestly say this pair of Bowers & Wilkins P5s is really working for me.
If my computing life can be likened to a lounge, it just feels easier to relax now nestled in my leather ear-sofa, I care less about the moody devices that I cannot control!
p.s. if anyone needs reminding, I listen to screen readers talking at me all day, so I don't get much time to listen to music. This posting is about comfort, long-term wearability and how this can modify one's experience of using devices eyes-free.
Just spent 15 minutes trying to turn off attachments preview in MS Outlook 2010.
My sighted office colleague and I struggled to make sense of the way the ribbon and menus worked. I have never come across such a seemingly crazy set up as this before!
I found I needed to use every keyboard navigation trick I've ever learned – tree expanding and collapsing, descending list, horizontal list, tab panel, combo drop-down, and I'm sure there was even a combo pop-up explode collapse disappear thing in there too – a real mish-mash. I just could not form a mental map of what I was doing at all, and without this I was geographically lost – it's official!
It's made me think about "audible architecture", something I am familiar with from working with urban street design people, but does it exist in software interface design?
A web search reveals all sorts of software architects and I guess someone did “design” the way the Outlook 2010 menus work, but it sure doesn’t strike me that there is any audible design logic in there, or if there is, it doesn’t make sense to me!
If the designers have invented, or ended up with, yet another audible contortion tangle around a visually optimised design approach, then I will happily direct them to the nearest purveyor of straitjackets and invite them to try living their life with one on, and see if they think that is inclusion…
In front of me now is a Windows laptop, an iPhone, a wireless Apple keyboard, a Nokia phone and a Nexus 7 tablet.
Way too much of my brain power is being spent on switching between the different ways each device wants to be controlled and not actually on achieving the things I want to achieve with them.
For example, I'm writing one email on my Nexus and working on a document on my iPhone. Swiping up and down on each device does something radically different. So does swiping right and left. On the iPhone, to toggle reading mode I have to use a two-finger rotor action, but on the Nexus it's a single-finger vertical up-and-down swipe. To scroll pages on the iPhone it's a three-finger drag and on the Nexus it's a two-finger drag – or is it? I've forgotten. There are also loads of weird L-shaped gestures on the Nexus which do pre-set things, but I can never remember them long enough to use them. In short, this is a finger-tangling, brain-bending nightmare.
The only gestures I use on my laptop are hitting it occasionally when it freezes, that and really hard typing when I'm frustrated. Sometimes I wish the keys were pressure sensitive and would do capital letters when I gave them some welly, which would make shouty emails even more fun to write, but I've not heard of this in any system spec out there.
On top of this there are different keyboard layouts especially for symbols on both hardware and virtual keyboards, totally different folder structures, control panels, settings, etc. etc.
And for us non-screen-based computer users, combine the above with literally hundreds of keyboard shortcuts for using the JAWS screen reader across different applications; VoiceOver is different again, and so is TalkBack, and so is Talks on the Nokia.
Errrr I’m sure all the commercial companies competing out there will be fine about standardising their interfaces with each other…
Moral of this story? Just use one device or get a bigger brain or maybe someone will come up with a Babel glove! (like a Babel fish but for the hands) What’s a Babel fish then?
Two months hard labour with an iPhone and I’m through the pain barrier!
The main gain is faster interaction, because the interface lets me use my geographic memory skills to a far higher level than a keyboard and tabbed interface does.
I think of these touch screen talking interfaces as three dimensional (X Y and Time).
The pain I went through was caused by the step change in manual dexterity demanded of my hands and fingers, which had to become automatic before the whole process of operating the interface, and linking it up with my geographic memory skills, became transparent.
Going through this has made me realise how different the time dimension is when one is using an interface eyes-free compared with using it by eye. This will be obvious to anyone reading this, but what I think could be a new angle is the contrast between the cognitive processes going on in a blind user and a sighted user in any one moment of time.
This matters for people performing user testing, for example, but whether a comparison of cognitive processes is made during side-by-side testing of different user groups, I don't know.
Take for example that first moment of picking up an iPhone, from two different user perspectives: user A (a VoiceOver user) and user B (an average screen user). Let's assume neither user has a memory of what the screen contains at this stage.
User A is exploring by touch and hears VoiceOver reading out icon labels wherever the finger contacts the screen – App Store, Clock, Game Centre – if touching roughly in the centre and sliding the finger up and to the right, or – Messages, Calendar, Photos – if starting from top left and swiping to move in a standard reading order. It takes a while to read over every icon displayed, that is for sure! But there is a positive consequence of this negative: one will prioritise the mental processes necessary to short-cut these delays, and the advantage of the X Y layout is that there is ample potential for using memory skills to speed things up.
The cognitive process that generates a mental model of the screen geography for user A is akin to how a sighted person discovers a picture as they reveal it on a scratch card, or builds up an idea of a landscape by scanning across it with a telescope: they combine what they have seen with the felt movement of the telescope to build up an image they never saw "in a oner".
User B, by contrast, sees the entire screen at once. In the time it takes user A to hear three icons spoken, user B recognises the overall geography of the display. But this isn't necessarily an advantage! In the next moment they might be drawn to certain graphical designs over others, or they may be trying to ignore the little graphics (some of which are quite random, like the Messages icon, which is a green box with a cloud thing in it) and instead focus on reading the text associated with each icon. Unfortunately for them, they will see the graphics as well as the text, so their cognitive load will react to this whether they like it or not – and whether anyone measures the impact of icons as positives or negatives is not known to me as I write this. I'm describing a completely different negative here: perhaps user B is more likely to prioritise developing mental strategies that filter out unnecessary visual detail, so they can speed up their interaction or reduce their cognitive load.
Now, I know both examples above are picked from many possible reactions to the home screen, but I hope they stand to reason and illustrate how the cognitive load for user B is distinctly different from that for user A.
To conclude this post, the more I have used the iPhone, the more I have gained a physical geographic memory of where items are located, and the faster I am able to _go straight_ to the item I want in any given moment.
This has resulted in much faster eyes-free interaction with a touch screen computer compared to operating a GUI based computer by keyboard only, because I no longer need to tab through a list of options strung out in time, and suffer the decisions that the designer made when deciding what the tab order should be.
My geographic memory abilities are therefore brought into play by the X Y touch screen interface approach.
I think a side effect of being able to use my geographic memory skills is a regular burst of satisfaction every time I hit, or get near, the item I was aiming for. This happens constantly and could explain why blind people evangelise the iPhone so much once they get past the pain barrier of adapting to it.
As a final note, I know everyone has a pain barrier to get through, and I have witnessed my 69-year-old mother-in-law, who has no sight difficulties, struggling to get the hang of an iPhone. But I think it is much harder to use an iPhone in the finger-constantly-stuck-to-the-screen way that blind people have to (the need for this reduces as memory of the layout increases, but it's chicken and egg) than in the way sighted people use it, surveying with the eye and relying on eye-finger coordination. The latter is very likely an already highly practised skill for anyone who writes with a pen or presses little buttons, so this is another thing blind people won't have ready to deploy and will need to develop, which could explain the height of the pain barrier.
Oh and for anyone who read my posting last year->
“I like my Nokia Smartphone with all its buttons but should I stay or should I go iPhone?”
I have stayed with my Nokia C5 because it is so much more effective as a mobile phone and on-the-move device, compared to the slippery slab of talking glass (which is really what I perceive my iPhone to be) and this is because I can feel when my finger is on the call button on my Nokia without having to listen to the device telling me that my finger is on the call button! In a noisy environment and when I only have one hand to use the device, this really matters.
I am however using my iPhone combined with an Apple keyboard as my new laptop set up… iPhone on the right so I can look around the screen with my finger and keyboard in the middle for when I have to type.
I have also discovered lots of interesting special key functions on the keyboard which I could not find documented anywhere, and which let me click icons and fields without having to touch the screen…
Hello reader, yes, and before you laugh and say how naïve this sounds, let me just say that the bit of government I'm commenting on here is supposed to be getting people back to work!
Yesterday I came out of a high level meeting with my ears ringing with the mantra “our computer system is automated”.
This automated client handling system will get used by virtually everybody in the entire population at one time or another, and it is currently unable to serve anyone who cannot read the ordinary sized print that it spews out on grey recycled paper!
This might have been OK if everybody could read printed communications, but hundreds of thousands cannot.
They just kept saying that it was the system; every time I pursued a point we circled back to it not being possible because of the system, the system, the system…
All my attempts to charm and persuade the civil servants to acknowledge that any “system” is built by people, i.e. them, unfortunately failed (it doesn’t build itself as that’s what led to the problems in the Terminator films)…
Nor did any of my attempts to get them to agree that a system built for use by the general public should cater for the most common communications variants known to exist within that general public.
It seems to me that they have a policy that if you cannot read the standard document their system spews out, then you aren’t a person. Is this 2013 or 1984?
If you’re interested to know what you are if you’re not a person, well you’re a “reasonable adjustment”.
Thanks for reading this Person,
Reasonable Adjustment 12436706