AI backpack “sees” for visually impaired people

It warns them about potential obstacles in their path.

As an AI engineer, Jagadish Mahendran has spent a lot of time trying to help robots “see” the world around them.

Now, he’s doing the same for visually impaired people.

Using technology developed by Intel, Mahendran has created a voice-activated AI backpack that guides people with visual impairments in outdoor environments — and the system costs just $800, compared to thousands for some smart glasses.

A Better Assistive Technology

An estimated 285 million people are visually impaired, meaning they have vision problems that can’t be corrected with glasses. Of that group, 39 million are blind.

For blind people, navigating an outdoor environment can be both difficult and potentially dangerous — they may have trouble safely crossing the street or knowing when they need to step up onto a curb.

Guide dogs can help in these situations, but they can be expensive (and some people are allergic). White canes, meanwhile, won’t help them avoid overhead hazards, such as hanging tree branches.

There are other assistive technology devices to help with navigation, but they aren’t always ideal.

Voice-assisted smartphone apps can give visually impaired people turn-by-turn directions, but they can’t help them avoid obstacles.

Smart glasses usually cost thousands of dollars, while smart canes require a person to dedicate one hand to the tech — not great if they’re, say, trying to carry groceries home from the store.

Mahendran’s AI backpack, MIRA, aims to be a more practical alternative.

“When I met my visually impaired friend, Breean Cox, I was struck by the irony that while I have been teaching robots to see, there are many people who cannot see and need help,” he said. “This motivated me to build the visual assistance system.”

Building an AI Backpack

Before jumping into development on the AI backpack, Mahendran and his collaborators interviewed several people with visual impairments to ensure the device would address the challenges they faced.

Armed with those insights, they developed a system consisting of a small backpack, a vest, and a fanny pack.

A $300 Luxonis OAK-D spatial AI camera serves as MIRA’s “brains”: it pairs a stereo depth camera with an Intel Movidius Myriad X vision processor, so it can judge distances and run computer vision models on the camera itself.

To train the camera’s AI to identify curbs, crosswalks, and other objects, the researchers fed it images from existing databases, as well as some they took and labeled themselves.
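
The article doesn’t say which framework or model architecture the team used. As a rough illustration of that kind of workflow, the sketch below fine-tunes an off-the-shelf object detector (torchvision’s Faster R-CNN) on custom-labeled street images; the class list, data layout, and training settings are assumptions for illustration, not details from MIRA.

```python
# A rough sketch of fine-tuning an off-the-shelf detector on custom-labeled images.
# The framework (PyTorch/torchvision), model, class list, and data layout are all
# assumptions for illustration; the article doesn't say what MIRA's team used.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Hypothetical class list: index 0 is reserved for "background" by the detector.
CLASSES = ["background", "curb", "crosswalk", "stop_sign", "hanging_branch"]

def build_model(num_classes):
    # Start from a detector pretrained on COCO, then swap in a new prediction
    # head sized for the custom classes.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

def train(samples, epochs=10):
    """samples: list of (image, target) pairs, where image is a CxHxW float tensor
    and target is {"boxes": FloatTensor[N, 4], "labels": Int64Tensor[N]} with labels
    indexing into CLASSES -- the format torchvision detection models expect."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = build_model(len(CLASSES)).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for image, target in samples:
            image = image.to(device)
            target = {k: v.to(device) for k, v in target.items()}
            # In training mode the model returns a dict of losses.
            loss_dict = model([image], [target])
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```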

After training, they mounted the camera in the vest and connected it to a computing device inside the backpack — this could be anything from a laptop to a Raspberry Pi.
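
The OAK-D is normally driven from the host computer through Luxonis’s open-source DepthAI Python library. The sketch below shows roughly what that host-side connection might look like on a laptop or Raspberry Pi, streaming small preview frames off the camera; it is a minimal illustration under that assumption, not MIRA’s actual pipeline.

```python
# Minimal sketch of a host (laptop or Raspberry Pi) talking to the OAK-D over USB
# using Luxonis's DepthAI library. Illustrative only; not MIRA's code.
import depthai as dai

pipeline = dai.Pipeline()

# On-camera color stream, downscaled to a small preview suitable for a detector.
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

# Send the preview frames back to the host over the USB link.
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("preview")
cam.preview.link(xout.input)

with dai.Device(pipeline) as device:
    queue = device.getOutputQueue(name="preview", maxSize=4, blocking=False)
    while True:
        frame = queue.get().getCvFrame()  # BGR numpy array (requires OpenCV installed)
        # ... hand the frame to the detection / hazard-warning logic here ...
```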

A GPS mounted on top of the backpack also connects to the computer, and the battery powering the whole system goes in the fanny pack.
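
The article doesn’t specify which GPS module or interface the team used. Assuming a typical serial/USB receiver that emits standard NMEA sentences, a host-side reader might look something like this sketch (the device path, baud rate, and the pyserial and pynmea2 libraries are assumptions for illustration):

```python
# Sketch of reading position fixes from a serial/USB GPS receiver on the host.
# Device path, baud rate, and library choices are assumptions, not MIRA's setup.
import serial
import pynmea2

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as gps:
    while True:
        line = gps.readline().decode("ascii", errors="ignore").strip()
        # GGA sentences carry the position fix.
        if line.startswith("$GPGGA") or line.startswith("$GNGGA"):
            fix = pynmea2.parse(line)
            print(fix.latitude, fix.longitude)  # feed into outdoor navigation
```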

A Bluetooth-enabled earpiece lets the wearer communicate with the AI backpack.

They can give it commands, such as “Describe,” which prompts the AI to list nearby objects along with their clock positions (e.g., “Stop sign at 2 o’clock”).

They can also hear the AI’s automatic warnings about potential dangers — to let them know a hanging branch is straight ahead, for example, the AI will say “Top, front.”
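
One rough way to see how that spoken feedback could be derived from a detection’s 3D position relative to the camera: the sketch below maps a horizontal bearing to a clock hour for “Describe,” and maps nearby objects to coarse zones like “Top, front” for warnings. The coordinate convention (x right, y up, z forward, in meters) and the distance thresholds are assumptions for illustration, not MIRA’s actual logic.

```python
# Sketch of turning a detection's 3D position (relative to the camera) into the kind
# of spoken feedback the article describes. Coordinate convention and thresholds
# are assumptions for illustration only.
import math

def clock_position(x, z):
    """Map a horizontal bearing to a clock face: straight ahead = 12 o'clock,
    right = 3 o'clock, left = 9 o'clock."""
    bearing_deg = math.degrees(math.atan2(x, z))  # 0 deg = dead ahead, +90 deg = right
    hour = round(bearing_deg / 30) % 12           # 30 degrees per clock hour
    return 12 if hour == 0 else hour

def zone_warning(x, y, z, near=2.0):
    """Coarse hazard zones for automatic warnings, only for nearby objects."""
    if z > near:
        return None                                # too far away to warn about
    vertical = "Top" if y > 0.3 else "Bottom" if y < -0.3 else "Center"
    horizontal = "left" if x < -0.3 else "right" if x > 0.3 else "front"
    return f"{vertical}, {horizontal}"

# Example: a sign off to the right, and a hanging branch just above head height.
print(clock_position(x=0.9, z=1.5))        # -> 1  (about 1 o'clock)
print(zone_warning(x=0.0, y=0.5, z=1.2))   # -> "Top, front"
```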

MIRA can run for eight hours on a single charge and is designed to blend in.

The Future of MIRA

Mahendran told Freethink that the AI backpack prototype cost about $800 — that’s already thousands less than most smart glasses, but he and his team are working to get the cost down even further.

They plan to publish a research paper on the system in the near future and will make everything they develop for the project — the code, datasets, etc. — open source.

Right now, they’re raising funds for testing and looking for more volunteers to help them reach their ultimate goal of providing visually impaired people with an open-source, AI-based assistance system for free.

“It’s incredible to see a developer take Intel’s AI technology for the edge and quickly build a solution to make their friend’s life easier,” Hema Chamraj, Intel’s director of technology advocacy and AI4Good, said.

“The technology exists; we are only limited by the imagination of the developer community.”
