Prosthetic leg uses AI to adjust to different terrains 

It can tell the difference between grass and cement and adjust accordingly.

For a person with a lower limb amputation, walking with even the most basic prosthetic leg is typically easier than walking without it. However, walking up stairs or across uneven terrain with a passive lower-limb prosthetic can be incredibly challenging.

Robotic prosthetics, with powered joints, can help overcome those challenges, while artificial intelligence (AI) can take artificial limbs one step further, giving them the ability to sense what a wearer is about to do.

Researchers from the University of Michigan unveiled an AI-powered prosthetic leg in 2019 that could sense the contractions in its wearer’s muscles to know if they planned to start walking up stairs or down a ramp.

Now, a team from North Carolina State University has developed a computer vision system that gives a prosthetic leg the ability to not only “see” what’s ahead, but also calculate its level of certainty in that prediction.

A Prosthetic Leg with Computer Vision

For their study, published in the journal IEEE Transactions on Automation Science and Engineering, the NC State researchers taught an AI to see the difference between six types of terrain: tile, brick, concrete, grass, “upstairs,” and “downstairs.”
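
To make the classification task concrete, here is a minimal, hypothetical sketch of a terrain classifier in PyTorch. This is not the NC State team's model; the framework, the network architecture, and the input size are all assumptions made purely for illustration. Only the six terrain labels come from the study.

```python
# Hypothetical sketch (not the researchers' code): a small image classifier
# that maps a camera frame to one of the six terrain labels from the study.
import torch
import torch.nn as nn

TERRAIN_CLASSES = ["tile", "brick", "concrete", "grass", "upstairs", "downstairs"]

class TerrainClassifier(nn.Module):
    def __init__(self, num_classes: int = len(TERRAIN_CLASSES)):
        super().__init__()
        # A few convolution/pooling stages to pick up visual texture cues
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        # frame: (batch, 3, H, W) image from a glasses- or leg-mounted camera
        x = self.features(frame).flatten(1)
        return self.classifier(x)  # raw class scores (logits)

# Usage: classify a single 224x224 frame
model = TerrainClassifier()
logits = model(torch.randn(1, 3, 224, 224))
predicted = TERRAIN_CLASSES[logits.argmax(dim=1).item()]
```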

To train the AI to predict the terrain ahead, the researchers walked around both indoors and outdoors while wearing cameras mounted on eyeglasses and on their own legs.

“We found that using both cameras worked well, but required a great deal of computing power and may be cost prohibitive,” researcher Helen Huang said in a news release.

“However, we also found that using only the camera mounted on the lower limb worked pretty well — particularly for near-term predictions, such as what the terrain would be like for the next step or two,” she continued.

The team designed their system to work with existing prosthetics: just add a camera. While they have yet to test it on an actual robotic prosthetic leg, they plan to do that next and to refine the system.

“We’re planning to work on ways to make the system more efficient, in terms of requiring less visual data input and less data processing,” researcher Boxuan Zhong said.

Factoring in Uncertainty

A computer vision system that can predict what's ahead of a prosthetic leg wearer would be impressive on its own, but the NC State researchers gave their AI an extra ability: it makes a prediction, calculates how certain it is of that prediction, and uses that confidence to decide how to adjust its behavior.

“If the degree of uncertainty is too high, the AI isn’t forced to make a questionable decision — it could instead notify the user that it doesn’t have enough confidence in its prediction to act, or it could default to a ‘safe’ mode,” Zhong said.
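
As a rough illustration of the fallback behavior Zhong describes, here is a hypothetical sketch. It is not the paper's uncertainty method, which is more sophisticated; here uncertainty is crudely approximated as one minus the top softmax probability, and the confidence threshold is an assumed value.

```python
# Hypothetical sketch of the fallback logic, not the authors' actual
# uncertainty method: if the classifier's confidence is too low, the
# controller declines to adjust and drops into a "safe" mode instead.
import torch
import torch.nn.functional as F

TERRAIN_CLASSES = ["tile", "brick", "concrete", "grass", "upstairs", "downstairs"]
CONFIDENCE_THRESHOLD = 0.8  # assumed value, for illustration only

def terrain_or_safe_mode(logits: torch.Tensor) -> str:
    """Pick a terrain label, or fall back to a 'safe' mode when unsure."""
    probs = F.softmax(logits, dim=1)
    confidence, index = probs.max(dim=1)
    if confidence.item() < CONFIDENCE_THRESHOLD:
        # Uncertainty too high: don't force a questionable adjustment
        return "safe_mode"
    return TERRAIN_CLASSES[index.item()]

# Usage with dummy class scores from a classifier
decision = terrain_or_safe_mode(torch.randn(1, len(TERRAIN_CLASSES)))
```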

The researchers believe this ability to factor in uncertainty could make their AI useful for applications far beyond prosthetics.

“We came up with a better way to teach deep-learning systems how to evaluate and quantify uncertainty, in a way that allows the system to incorporate uncertainty into its decision making,” researcher Edgar Lobaton said. “This is certainly relevant for robotic prosthetics, but our work here could be applied to any type of deep-learning system.”
