A man paralyzed from the neck down has used two robot arms to cut food and serve himself — a big step in the field of mind-controlled prosthetics.
Robert “Buz” Chmielewski, age 49, has barely been able to move his arms since a surfing accident paralyzed him as a teenager. But in January 2019, he got renewed hope when doctors implanted two sets of electrodes in his brain, one in each hemisphere.
The goal was for this brain-computer interface to help Chmielewski regain some sensation in his hands, mentally control two prosthetic arms, and even feel what he is touching.
According to the researchers from Johns Hopkins Medicine (JHM) and Johns Hopkins’ Applied Physics Laboratory (APL), this would be a world first. Until now, this type of research on mind-controlled prosthetics, which relies on a brain-computer interface, has focused on a single arm, controlled by one hemisphere of the brain — like the advanced bionic arm Freethink showed you in 2016 (also developed by APL).
“We’re using two sides of the brain to control two limbs at the same time,” Gabriela Cantarero of Johns Hopkins University School of Medicine told Medscape Medical News.
“Being able to control two robotic arms performing a basic activity of daily living — in this case, cutting a pastry and bringing it to the mouth using signals detected from both sides of the brain via implanted electrodes — is a clear step forward to achieve more complex task control directly fed from the brain,” Pablo Celnik, director of physical medicine and rehabilitation at Johns Hopkins, said in a press release.
A portion of the robotic control is automated with artificial intelligence. The idea is that signals from the brain-computer interface tell the robot what object to pick up and how to maneuver it, while the AI handles the rest of the movement.
It took two years of practice for Chmielewski to master this new skill. Now, he can cut food and feed himself with each arm completing different tasks — even at the same time.
“Our goal is to make activities, such as eating, easy to accomplish by having the robot do one part of the work and leaving the user in charge of the details: which food to eat, where to cut, how big the cut piece should be, and so on,” said APL roboticist David Handelman.
“By combining brain-computer interface signals with robotics and artificial intelligence, we allow the user to focus on the parts of the task that matter most.”
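The division of labor Handelman describes — the user supplies high-level intent while automation fills in the low-level motion — is a common pattern in shared-control robotics. Here is a minimal, purely illustrative sketch of that idea; all names and numbers are hypothetical and are not drawn from the APL/JHM system:

```python
# Toy sketch of shared control (hypothetical, not the APL/JHM code):
# the user's decoded intent picks *what* to do, and an automated
# planner decides *how* the arm actually moves toward the goal.

from dataclasses import dataclass

@dataclass
class UserIntent:
    target: tuple   # which object to act on, as an (x, y, z) position
    action: str     # high-level choice, e.g. "cut" or "bring_to_mouth"

def plan_step(pose, intent, gain=0.1):
    """Automation: move the arm a fixed fraction of the way to the target."""
    return tuple(p + gain * (t - p) for p, t in zip(pose, intent.target))

# The user chooses the goal; the planner generates the trajectory.
pose = (0.0, 0.0, 0.0)
intent = UserIntent(target=(1.0, 0.5, 0.0), action="cut")
for _ in range(3):
    pose = plan_step(pose, intent)   # arm converges toward the target
```

In a real system the `UserIntent` would come from decoded neural signals and the planner would be a full motion controller, but the split is the same: coarse goals from the brain, fine trajectories from the machine.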
Their next goal is to add more sensory feedback, allowing the user to “feel” whether they are completing a task correctly instead of relying solely on watching it — the way an uninjured person can do things like tying their shoes without even looking.
We’d love to hear from you! If you have a comment about this article or if you have a tip for a future Freethink story, please email us at [email protected].