
This App Uses AI To Help Blind People Experience The World Around Them

by JR Thorpe

When it comes to the possibilities of technology, scientists and engineers are less interested in futuristic flying cars and more intrigued by how they can improve the human experience. One key focus for innovation that’s recently gained more visibility is how technological advancements can help the disabled community. Among a number of technologies and inventions that seek to make the world more accessible for the disabled, one app, Aipoly, is using AI to show blind people the world around them — and it’s using an unprecedentedly smart network to do it.

Technologies have been invented to help disabled people for centuries, from Braille for the blind (invented in the 1820s by Louis Braille) to hearing aids for the deaf (popularized as ear trumpets in the 1800s before evolving into cochlear implants in the 20th century). It’s no surprise that human innovations should target people with different capabilities, but the trappings of the 21st century, from artificial intelligence systems to smartphones and 3D printing, are bringing the relationship between advancements in tech and the needs of the disabled community to new heights. But are apps and other products like Aipoly truly going to change the way disabled people interact with the world?

Aipoly, which debuted in 2015, has garnered attention recently because it won an Innovation Award from the Consumer Technology Association in 2017 for “outstanding product design and engineering.” But it’s been making headlines for several years because of its simple premise, built on deeply complex technology: download it to your smartphone, point the phone at an object, and the app will tell you what the object is.

This might seem like child’s play, but it’s actually the product of artificial intelligence based on the same machine-learning technologies that are behind androids and robotics. Humans recognize things naturally by identifying them visually, aurally, and through many other means. We learn what new things look like with great ease, remember them, and can distinguish them from similar-looking things that aren’t the same. Creating that same ability in computers is incredibly difficult. Recognition technology is already commonly available in places like Facebook’s photo-tagging service, which attempts to pick out face shapes from photographs. But the Aipoly system aims much bigger: It attempts to recognize and immediately identify objects when the phone “sees” them, and tell the person holding the phone what they are, regardless of their surroundings. If pointed at an orange in a pile of apples, it will identify both the apples and the orange — and continue to do so if the orange rolls away, or you pick it up.
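
For the curious, here’s a rough sketch of that point-and-identify idea in its simplest form: classify a single camera frame with an off-the-shelf pretrained network and speak the top label aloud. This is an illustration built from publicly available tools (PyTorch’s torchvision models and the pyttsx3 speech library), not Aipoly’s actual code, and “frame.jpg” is just a stand-in for a live camera frame.

```python
# Illustrative sketch, not Aipoly's code: classify one camera frame with a
# pretrained ImageNet model and speak the result aloud.
import torch
import pyttsx3
from PIL import Image
from torchvision.models import mobilenet_v2, MobileNet_V2_Weights

weights = MobileNet_V2_Weights.DEFAULT
model = mobilenet_v2(weights=weights).eval()  # small, phone-friendly classifier
preprocess = weights.transforms()             # matching resize/crop/normalize steps

image = Image.open("frame.jpg")               # stand-in for a live camera frame
with torch.no_grad():
    probs = model(preprocess(image).unsqueeze(0)).softmax(dim=1)[0]

label = weights.meta["categories"][probs.argmax().item()]  # e.g. "orange"

engine = pyttsx3.init()                       # off-the-shelf text-to-speech
engine.say(f"I see a {label}")
engine.runAndWait()
```

A real app would run a loop like this many times a second on the phone itself, which is why compact networks designed for mobile hardware are the natural choice.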

Thinking about this for any length of time reveals just how tricky that can be. Things look different in different light, as they move, or as their surroundings change; many objects that look distinct can in fact have the same classification (like different breeds of dogs — a Great Dane looks very different from a Jack Russell); and picking objects out of visually confusing backgrounds can be hard for the most clear-sighted of people. Nevertheless, Aipoly offers a series of recognition systems focused on identifying foods, packaging in stores, household objects, people, even particular kinds of plants. The app also lets you add custom items, so that Aipoly can identify specific products or things that are relevant to your life. It was developed by Marita Cheng and Alberto Rizzoli during their time at the Silicon Valley think tank Singularity University, and launched in 2015 with a vast “convolutional neural network” that aimed to identify the things in each picture, and the relationships between them. Since then, it’s become more accurate, largely because the humans who use it can tell the app what it’s getting wrong and help improve its database.
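
Aipoly’s actual network is proprietary and far larger, but as a rough idea of what a “convolutional neural network” looks like in code, here’s a toy classifier, sketched in PyTorch as an assumed framework (the name TinyClassifier is made up for illustration). Early convolutional layers learn small visual filters that respond to edges and textures, later layers respond to larger patterns, and a final layer turns those responses into a score for each object category. The user-feedback loop described above would, in this framing, amount to collecting corrected labels and periodically retraining such a network on them.

```python
# A toy convolutional neural network, for illustration only; Aipoly's real
# model is proprietary and far larger.
import torch
from torch import nn

class TinyClassifier(nn.Module):  # hypothetical name, for illustration
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level filters (edges)
            nn.ReLU(),
            nn.MaxPool2d(2),                              # halve the resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid-level patterns
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # one value per learned filter
        )
        self.classifier = nn.Linear(32, num_classes)      # one score per category

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# One 224x224 RGB frame in, one score per object category out.
scores = TinyClassifier()(torch.randn(1, 3, 224, 224))
print(scores.shape)  # torch.Size([1, 10])
```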

These kinds of networks are becoming more popular. In 2016, Fast Company reported on the rise of visual recognition on handheld devices, from Aipoly’s innovation to the iPhone update that uses the technology to recognize people, places, and even moods in the photos taken on your phone. We’ve had this kind of technology for audio for some time — Shazam, for instance, identifies songs playing in the environment and can often bring up their lyrics — but collectively improved artificial intelligence of this kind may be transformative for blind people in particular, helping them navigate their environments without the aid of others.

There are, however, limits. Human-computer interaction specialist Jeff Bigham told MIT Technology Review that many blind people navigate excellently by touch, and that the real benefit of Aipoly, as yet not fully explored, would come from helping them distinguish between things that feel the same. And Aipoly doesn’t react instantly; you still have to wait, however briefly, for it to recognize and name what’s in front of you.

Beyond Aipoly, technology has something to offer for many kinds of disability, and much of it runs on handheld or miniature devices. From apps that convert speech into text for deaf users, to devices that translate written text into Braille, the world of technological disability aids is vast. Many developers are thinking big about helping people interact with everyday devices: Sesame phone handsets can be controlled through camera-based head tracking by people who can’t use their hands, while the HeadMouse Nano lets users steer a cursor and control a computer through head movements. Assistive technologies for non-verbal people, or those who find it difficult to talk, are big business, ranging from small devices and apps to tablets, while people with the often-invisible disability of chronic pain are served by pain-relief wearables and special drug-delivery systems. Prosthetic limbs represent a further frontier: 3D printing is making them cheap and easy to access, and it’s now feasible to control them using conscious thought.

What’s next? The future holds a lot more possibility for the intersection of technology and disability, and some serious questions. For one, as Dr. John Conway, Director of Research at the Royal Agricultural University, noted, visual recognition devices have a way to go when it comes to patterns in maps and diagrams; they’re not yet able to interpret that kind of complex information and describe it aloud. Prosthetics, too, remain problematic. We’re far from the Tony Stark-style bionics of the movies; Patrick McGurrin argued in Slate that many companies appear to prioritize new technologies over actual usability, which is a problem if those technologies are meant to actually help people. And there’s another issue: as The Atlantic noted in 2015, focusing on technology that helps disabled people function more effectively (like “exoskeletons” to help those with movement disabilities get around) can distract from more fundamental issues, like making city infrastructure more disability-friendly, or shifting public attitudes towards disability itself.

So technology may be making the physical and digital worlds easier places for disabled people to navigate, but it’s important not to focus on “fixes” at the expense of understanding the complicated nature of the disabled experience. Aipoly, with all its complexity and its difficulties, shows just how tricky that balance can be.