Researchers at the University of Georgia have developed a portable AI engine that can help visually impaired people navigate the world around them.
Housed in a backpack, the system detects road signs, crosswalks, curbs and other common challenges using a camera in a vest jacket.
Users receive audio instructions and advice through a Bluetooth earphone, while a battery in a fanny pack provides roughly eight hours of running time.
Intel, which provided the processing power for the prototype device, says it is superior to other high-tech visual assistance programs, which “lack the depth perception needed to enable independent navigation.”
The AI backpack (left) informs the wearer about road signs, cars and other common challenges through a Bluetooth earphone
Jagadish Mahendran, an AI developer at the Institute for Artificial Intelligence at the University of Georgia, was inspired to create the system by a partially sighted friend.
“I was struck by the irony that although I have been teaching robots to see, there are many people who cannot see and need help,” he said.
Using OpenCV’s Artificial Intelligence Kit, he developed a program that runs on a laptop small enough to fit in a backpack, paired with Luxonis OAK-D spatial AI cameras in a vest jacket that provide information about obstacles and depth.
Mahendran and his team trained the AI to recognize different terrains, such as sidewalks and grass, and challenges ranging from cars and bicycles to road signs and low-hanging branches.
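Once the cameras have labeled an obstacle and estimated its distance, the system still has to turn that detection into a short spoken cue. The sketch below is purely illustrative, not Mahendran’s actual code; the function name and thresholds are assumptions about how such a detection-to-speech step might work:

```python
def warning_message(label: str, distance_m: float, bearing: str) -> str:
    """Turn one detection into a short phrase for text-to-speech.

    label      -- object class from the detector, e.g. "bicycle" (assumed)
    distance_m -- depth estimate in meters from the stereo camera
    bearing    -- "left", "ahead", or "right"
    """
    # Prepend an urgency word for nearby obstacles (1.5 m is an assumed cutoff).
    urgency = "warning, " if distance_m < 1.5 else ""
    meters = round(distance_m)
    unit = "meter" if meters == 1 else "meters"
    return f"{urgency}{label} {bearing}, {meters} {unit}"

print(warning_message("bicycle", 2.4, "ahead"))  # bicycle ahead, 2 meters
print(warning_message("car", 1.0, "left"))       # warning, car left, 1 meter
```

In practice the resulting string would be handed to a text-to-speech engine and played through the Bluetooth earpiece described below.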
The system is powered by a battery pack in a fanny pack (left), while OAK-D spatial AI cameras in the jacket (right) provide obstacle and depth information to the wearer
Messages from the system are delivered via a Bluetooth earpiece, while commands can be given via a connected microphone.
It was critical to make the product relatively light and not too cumbersome, Mahendran said.
Without Intel’s neural computing sticks and Movidius processor, the wearer would have to carry five graphics processing units in the backpack, each weighing a quarter of a pound, plus the added weight of the necessary fans and a larger power source.
“It would be prohibitively expensive and impractical for users,” he told Forbes.
Using Intel’s neural computing sticks and Movidius processor, Mahendran was able to compress the power of five graphics processing units into hardware the size of a USB stick
The AI engine can detect low-hanging branches and translate written street signs into verbal directions
But with the added processing power, “this massive GPU capacity is compressed into USB-stick hardware, so you can just plug it in anywhere and use these complex deep learning models … it’s portable, inexpensive and has a very simple form factor.”
The invention won top prize at the 2020 OpenCV Spatial AI Competition, sponsored by Intel.
“It’s incredible to see a developer take Intel’s AI technology for the edge and quickly build a solution to make a friend’s life easier,” said Hema Chamraj, director of Intel’s AI4Good program.
“The technology exists; we are only limited by the imagination of the developer community.”
Although the device is not yet for sale, Mahendran will send a unit to his visually impaired friend in a few weeks so that he can gather feedback from her real-life experience, Mashable reported.
While other high-tech visual assistance programs “lack the depth perception necessary to enable independent navigation,” Mahendran’s invention can tell a user if they are about to reach a curb or slope.
Can it replace a guide dog? Mahendran is convinced his invention is better at communicating specific challenges to a user, but he told Forbes, “You certainly can’t cuddle or play with an AI engine.”
About 285 million people worldwide have a visual impairment, according to the World Health Organization, and technology companies are increasingly investing in providing solutions.
Google has tested Project Guideline, a new app that allows blind people to run independently without a guide dog or human assistant.
The program follows a guideline on a course using the phone’s camera and then sends audio signals to the user through bone conduction headphones.
If a runner strays too far from the center, the sound grows louder in the ear on the side they have drifted toward.
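This steering cue can be pictured as a simple mapping from lateral deviation to per-ear volume. The sketch below is an assumption about how such a cue could be computed, not Google’s implementation; the sign convention and the one-meter clamp are made up for illustration:

```python
def steering_volumes(deviation: float, max_dev: float = 1.0):
    """Map lateral deviation from the painted line to per-ear volumes.

    deviation -- meters from the line's center; negative means the runner
                 has drifted left, positive means right (assumed convention).
    Returns (left_volume, right_volume) in [0, 1]: the ear on the drift
    side gets louder the further the runner strays.
    """
    # Clamp to the largest deviation the cue should represent, then normalize.
    d = max(-max_dev, min(max_dev, deviation)) / max_dev
    left = max(0.0, -d)   # louder in the left ear when drifting left
    right = max(0.0, d)   # louder in the right ear when drifting right
    return left, right

print(steering_volumes(0.5))   # (0.0, 0.5) -- drifting right, right ear louder
print(steering_volumes(0.0))   # (0.0, 0.0) -- centered, no warning sound
```

A runner centered on the line hears nothing; the cue ramps up smoothly as they drift, which is what makes it usable at running pace.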
Still in its prototype phase, Project Guideline was developed during a Google hackathon last year when Thomas Panek, CEO of Guiding Eyes for the Blind, asked developers to design a program that would allow him to jog on his own.
After a few months and a few adjustments, he was able to run laps on an indoor track without assistance.
“It was the first unguided mile I’ve run in decades,” said Panek.
Last spring, Google unveiled a virtual keyboard that allows visually impaired people to type messages and emails without clunky additional hardware.
Integrated directly into Android, the TalkBack braille keyboard uses a six-key layout, with each key representing one of six braille dots.
When tapped in the correct order, the keys can type any letter or symbol.
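The dot-combination-to-letter mapping the keyboard relies on is standard braille: dots 1–3 run down the left column and 4–6 down the right, and each letter is a fixed subset of those six dots. A minimal lookup for the first few letters might look like this (an illustrative sketch, not Google’s code):

```python
# Standard braille patterns for the letters a-e, keyed by the set of
# pressed dot keys (dots are numbered 1-6).
BRAILLE = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
}

def decode(pressed) -> str:
    """Return the letter for a set of simultaneously tapped dot keys."""
    return BRAILLE.get(frozenset(pressed), "?")

print(decode({1, 4}))  # c
```

Because every character is one simultaneous chord rather than a sequence of taps, a practiced user can type considerably faster than with a hunt-and-peck on-screen keyboard.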
“It’s a fast, easy way to type on your phone without any additional hardware, whether you’re posting on social media, responding to a text message, or writing a short email,” Google said in a blog post in April.