Technology – or the misuse of it – dominates the headlines on a regular
basis. In one way or another our confidence in the online world is constantly
under threat, whether from identity fraud, invasion of privacy or even the
ability to print your own gun at home using 3D printers.
Even our brains are changing, according to neuroscientist Professor Susan Greenfield, who voices concerns about “internet addiction disorder” and warns that as a result we are becoming more prone to conditions like autism and dementia.
But there is a silver lining in our digital cloud. Just in time, it appears that technology has an emergent property – that of inclusivity. Mainstream devices are becoming increasingly user-friendly for disabled people as design becomes more inclusive for all of us. The gap between usability and accessibility is narrowing and with it the digital divide between disabled and non-disabled people.
Unsurprisingly, it’s less a case of inherent altruism on the part of developers and manufacturers, and more a matter of consumer demand. Today’s exacting customers expect the products they buy to be ever more sensitive and responsive to their requirements – whatever they are and wherever they materialise. Indeed, such is the ramping up of both utility and expectation that we expect our devices to tune into our needs and even to anticipate them before they arise.
This desire to use digital technology, in whatever form, in extreme environments – on the move, with ambient noise or in sunlight – is of huge benefit to the disabled user. Convenience features for the “able-bodied” convert a previously unusable piece of equipment into something enabling and empowering for those with a hearing, vision, motor or cognitive impairment.
What are we if not “disabled” when we want to use our mobile device while driving or jogging, on a noisy factory floor or during a seminar? We’re no longer able to use a traditional keyboard, mouse and screen. We need digital content to be presented in a variety of ways. We want to interact with our devices effectively and easily however limited or impaired we are at that moment by circumstance or location.
Whatever our chosen platform – laptop, tablet, smartphone, or even wearable technology like Google Glass, Nike’s FuelBand or the rumoured Apple iWatch – we want user-friendly interfaces which fit our personal preferences, whether controlled by touch, single switch, voice command or gestures.
“Fit for purpose” is a moveable feast, or more precisely an almost limitless banquet, and as devices and interfaces get cleaner, more flexible, leaner and simpler to use, they will inherently serve disabled people more effectively in the process.
The incorporation of speech recognition into standard voice over internet protocol (VoIP) services gives the “able-bodied” real-time transcripts of conversations over Skype and other popular VoIP services for the first time. The obvious by-product of this development is that the resulting data is indexable and thus part of the searchable web.
For the hearing impaired, this development is a hugely liberating innovation, allowing them to participate actively in the conversation at the same time as the other virtual meeting-goers, with real-time subtitles. This is not the only by-product, however; just think of the benefits of using services like Google Translate simultaneously to give live translations into a large number of foreign languages.
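For the technically curious, a live-caption loop of this kind can be sketched in a few lines of Python using the open-source speech_recognition library; the library and the cloud transcription service it calls are illustrative choices, not part of Skype or any particular VoIP product:

```python
# Minimal sketch of live captioning: listen to the microphone in short
# chunks and print a rolling transcript (which could equally be logged,
# indexed or passed on to a translation service).
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)   # calibrate for background noise
    print("Listening...")
    while True:
        audio = recognizer.listen(source, phrase_time_limit=5)  # roughly 5-second chunks
        try:
            print(recognizer.recognize_google(audio))  # cloud speech-to-text
        except (sr.UnknownValueError, sr.RequestError):
            pass  # nothing intelligible, or the service was unreachable; keep listening
```

In a real meeting client the same transcript would be time-stamped against the call and stored – which is precisely what makes the conversation searchable after the event.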
And it’s not just voice that is soon to become part of the searchable web. While images have been available as search results for many years, the technology has relied on filenames or tags to identify them. Thanks to the latest advances, we are now seeing image recognition used to identify everything from text to brands to landmarks, and even individual faces – and we mean “individual”: a person can now be quite reliably identified regardless of background, lighting or dress.
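As a rough illustration of how routine that matching has become, the open-source face_recognition library will compare faces in a handful of lines of Python; the filenames here are placeholders, and a reference photo with one clear face is assumed:

```python
# Illustrative sketch: does a known person appear in a new photograph?
import face_recognition

known = face_recognition.load_image_file("colleague.jpg")         # reference photo
known_encoding = face_recognition.face_encodings(known)[0]        # numerical "faceprint"

snapshot = face_recognition.load_image_file("street_scene.jpg")   # new, uncontrolled shot
for encoding in face_recognition.face_encodings(snapshot):
    if face_recognition.compare_faces([known_encoding], encoding)[0]:
        print("Same person found, despite different background and lighting")
```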
Moreover, real-time image recognition, combined with hi-tech gadgets like Google Glass, means that the live world can be recognised and indexed too. While this is great news for the searchable web, like voice recognition it has even more significant implications for the disabled community.
Imagine a blind person using their smartphone camera (or even wearing Glass) and being informed of what is around them or what they hold in their hand, or someone with dyslexia or learning disabilities looking at text and hearing it spoken back.
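That camera-to-speech idea is simpler than it sounds. Below is a minimal sketch using the open-source pytesseract OCR wrapper and the pyttsx3 speech engine; the photo filename is a placeholder, and a production tool would of course do far more:

```python
# Minimal sketch: photograph of printed text -> OCR -> read aloud.
# Requires the Tesseract OCR engine to be installed on the system.
from PIL import Image
import pytesseract
import pyttsx3

text = pytesseract.image_to_string(Image.open("photo_of_menu.jpg"))  # extract printed text

engine = pyttsx3.init()   # offline text-to-speech engine
engine.say(text)          # queue the recognised text
engine.runAndWait()       # speak it aloud
```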
Similar combinations of technologies hold great promise for people with a range of disabilities. Emotion recognition software has already proved to be of immense help to autistic people when attempting to identify other people’s emotions, and is valuable for blind people too. Combine Google Glass with real-time voice recognition, and a deaf person can have live subtitles with them on their heads-up display wherever they go.
The combinations are endless and the technologies are maturing at an exponential rate. Such technology can make life easier and more productive for the non-disabled, but can be truly life-changing for those with limiting conditions, and in the process the world is becoming an ever more inclusive place for everyone.
Robin Christopherson is head of digital inclusion at the disability and e-accessibility charity AbilityNet.