5 Barriers AI Is Lowering For The Physically And Developmentally Impaired

Ciarán Daly

May 14, 2019

8 Min Read

by Fiona McEvoy

SAN FRANCISCO - Jenny Morris, a disabled feminist and scholar, has argued that the term “disability” should not refer directly to a person’s impairment. Rather, it should be used to identify someone who is disadvantaged by the disabling external factors of a world designed by and for those without disabilities.

Her examples: “My impairment is the fact I can’t walk; my disability is the fact that the bus company only purchases inaccessible buses” or “My impairment is the fact that I can’t speak; my disability is the fact that you won’t take the time and trouble to learn how to communicate with me.”

According to Morris, any denial of opportunity is not simply a result of bodily limitations. It is also down to the attitudinal, social, and environmental barriers facing disabled people.

For those of us without impairment, it can be difficult to understand the complexity of these challenges. Perhaps most tangible are those we can observe, like the way in which our constructed environment can overlook the physical needs of certain individuals – tall buildings without elevators, pedestrian crossings without audio cues, videos without captioning, etc.

Though we all know by now that the technology sector has had a fairly damaging few years from an ethical perspective, it is still important to note that some technologists have been making noteworthy efforts to address such obstacles and open up previously inaccessible (and difficult-to-access) worlds.

Though social and attitudinal barriers continue to create a disabling effect, here are five areas where new tech and AI are bringing barriers down:

Devices

The clean lines and touch screens of tablets and smartphones are easily among the most popular tech breakthroughs of recent decades. However, at first glance these sleek, smooth interfaces look completely inaccessible to blind and partially sighted individuals who rely upon textured surfaces to communicate and consume media.

Thankfully, this is not the case.

Despite the launch of the iPhone initially sending the blind community into a tailspin, smartphones have actually enabled people with sight impairments to keep pace with the rest of us when it comes to tech. And all without having to purchase costly or clunky add-ons. Indeed, advocates for the blind have even said that smart devices could be the biggest assistive aid to come along since Braille was invented in the 1820s.

But how so?

Built-in accessibility functions (which many of us use…), like voice control, allow those with vision problems to look things up using search, or to compose texts. Additionally, the visually impaired can use the GPS to navigate, and the camera to determine the denomination of cash. Those who are progressively losing their sight can also increase brightness, invert text color, and enlarge the text characters for better clarity.

Moreover, smartphones now support dozens of Bluetooth Braille keyboards, and companies like Ray have developed assistive mobile applications and textured adhesives to allow sophisticated, eye-free device control.

Related: From tapping to talking - three bumps in the road

The internet and social media

Having a functional and operable device is one thing, but it is also important that users can access and communicate via the most popular internet platforms. The alternative could be damaging social exclusion. Fortunately, some really promising work is already being done to level the playing field and to create better accessibility online.

IBM, for example, has leveraged some of Watson’s computing power to create Content Clarifier, an artificial intelligence solution that employs machine learning and NLP to make reading, writing, and comprehending content easier – great news for people with autism or intellectual disabilities. The system switches out complicated content and filters away unnecessary detail, so idioms like “it’s raining cats and dogs” are converted seamlessly to “it’s raining hard”, helping users avoid confusion (and frustration) when using the internet.
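IBM hasn’t published Content Clarifier’s internals here, so the snippet below is only a minimal, hypothetical sketch of the general idea – swapping figurative phrases for plain-language equivalents via a small lookup table – rather than a description of how the actual system works:

```python
import re

# Hypothetical mini-dictionary of idioms and plain-language equivalents.
# A production system would rely on trained NLP models and far larger
# resources; this table is purely illustrative.
IDIOMS = {
    "it's raining cats and dogs": "it's raining hard",
    "piece of cake": "easy",
    "under the weather": "unwell",
}

def simplify(text: str) -> str:
    """Replace known idioms with literal phrasing (case-insensitive)."""
    for idiom, plain in IDIOMS.items():
        text = re.sub(re.escape(idiom), plain, text, flags=re.IGNORECASE)
    return text

print(simplify("Stay inside, it's raining cats and dogs!"))
# -> "Stay inside, it's raining hard!"
```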

Also, back in 2017, Facebook improved the way in which the blind and visually impaired can use the site. The social media giant revealed that it would be deploying image recognition technology to identify people, animals, and objects, and use them to annotate photographs with alternative text. Previously, hovering over pictures would only reveal the number of people in them, and before that the technology would only identify them as a “photo”.

In the same year, YouTube rolled out artificial intelligence that built upon its existing auto-captioning for the deaf. The technology can now identify more complex noises like applause, laughter, and music in videos. Just like Facebook’s image recognition, this relies upon machine learning, and could ultimately pick out barks, sighs, knocks, etc., improving the service for those with hearing impairments.
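Neither Facebook nor YouTube publishes its pipeline, but conceptually both take labels emitted by a trained classifier and render them as text for assistive output. Here is a toy sketch of that final step, assuming a hypothetical detector that returns sound-event labels with confidence scores:

```python
# Toy illustration only: real systems use trained vision/audio models.
# `detected_events` stands in for the output of a hypothetical classifier.

def to_caption(detected, threshold=0.6):
    """Render high-confidence sound-event labels as caption tags."""
    tags = [label for label, score in detected if score >= threshold]
    return " ".join(f"[{label}]" for label in tags) if tags else ""

# Example: scores a hypothetical audio classifier might return for a clip.
detected_events = [("applause", 0.92), ("music", 0.75), ("speech", 0.40)]
print(to_caption(detected_events))  # -> "[applause] [music]"
```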

Related: AI for sustainability needs to happen - right now

Gaming

This is another area where a lack of dexterity, limb loss or paralysis, or a visual impairment would historically have prevented budding gamers from accessing mainstream titles. Now companies like Natural Point are manufacturing special-use joypads with built-in head or eye control to enable a whole new range of users to enjoy video games. The same is true of Quadstick, which produces gaming tools for quadriplegic players, giving them control via a clever sip-and-puff sensor.

One-switch games (which are what they sound like) are also available to those with severe disabilities, whilst mainstream gaming giants like Microsoft now sell accessible (and reasonably priced!) Xbox controllers at retailers like Walmart. Oh, and this looks like it could be fun for those with limited upper body or hand mobility who are keen to get stuck into virtual reality.

In-person communication

Though it’s great that the tech industry is acknowledging and addressing accessibility issues from the inside, artificial intelligence also has a key role to play in tackling the disabling barriers “out there” in the real world.

SignAll Technologies, for example, has an exciting product that uses sensors and cameras to track and automatically translate sign language in real time. At the same time, tremendous work is being done by The Open Voice Factory, which develops electronic speech aids that allow those with speech-language impairments to make themselves understood. Its software “converts communication boards into communication devices”, and it can run on any platform (tablet, smartphone, laptop…). Best of all, it’s completely free!
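The Open Voice Factory’s own code isn’t reproduced here; the sketch below only illustrates the underlying idea of mapping communication-board buttons to spoken phrases, and it assumes the cross-platform pyttsx3 text-to-speech library rather than the project’s actual stack:

```python
import pyttsx3  # offline text-to-speech; an assumption, not the project's actual stack

# A hypothetical "communication board": each button maps to a phrase.
BOARD = {
    "food": "I would like something to eat, please.",
    "help": "I need some help.",
    "yes": "Yes.",
    "no": "No.",
}

def press(button: str) -> None:
    """Speak the phrase associated with a board button."""
    phrase = BOARD.get(button)
    if phrase:
        engine = pyttsx3.init()
        engine.say(phrase)
        engine.runAndWait()

press("help")
```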

Related: AI bias isn't a data issue - it's a diversity issue

The physical environment

Perhaps most challenging for someone with a physical impairment is traversing the many obstacles an outdoor environment presents. Here too, though, intelligence-driven technology is making new gains.

Arguably the most elaborate of recent developments is Hyundai’s exoskeleton, which had its “big reveal” in December 2016 - but Hyundai is no longer the only player in the exoskeleton space. And although we’re unlikely to see exoskeletons on the streets any time soon, this technology really demonstrates the promise of robotic development when it comes to improving navigation of the physical environment for paraplegics and others who currently require assistance.

In the same vein, another great breakthrough could be self-driving wheelchairs – the technology for which already exists. A self-driving chair uses an existing powerchair as its base, and allows the user to maneuver around without having to physically operate the wheels, which can be stressful and tiring. Many within the disabled community are urging more investment in this technology, and are ultimately hoping for a widespread roll-out.

Finally, a number of technologies are also coming onto the market for blind and vision-impaired users, some of whom also find themselves at a disadvantage when it comes to getting around safely. One such product is Aira, which uses special glasses and augmented reality to help a (real, live) service agent guide a user around an unexpected obstacle. The service agent can access cameras mounted in the user’s glasses and literally act as their eyes in order to give live support.

Of course, this is not an exhaustive list of assistive technologies. Nor is it an exhaustive list of the barriers facing disabled people, many of which are – as previously stated – social or attitudinal, rather than merely physical.

Equally importantly, it is not a plea for redemption for the tech industry’s disappointing disregard for ethics over recent years.

This rundown simply highlights and heralds some exciting developments – the likes of which we should be championing and encouraging. They are likely to improve people’s lives by facilitating independence and overcoming several of the exclusionary features of our world.

As we create a new era where the ethics of tech is front and center, we should ensure our focus is not just trained on scrutinizing unethical practices and products, but also on considering what good tech can, and should, do to ensure – as Jenny Morris says – that anatomy is not destiny.
