Prosthetic leg uses AI to adjust to different terrains 

It can tell the difference between grass and cement and adjust accordingly.

For a person with a lower limb amputation, walking with even the most basic prosthetic leg is typically easier than walking without it. However, walking up stairs or across uneven terrain with a passive lower-limb prosthetic can be incredibly challenging.

Robotic prosthetics, with powered joints, can help overcome those challenges, while artificial intelligence (AI) can take artificial limbs one step further, giving them the ability to sense what a wearer is about to do.

Researchers from the University of Michigan unveiled an AI-powered prosthetic leg in 2019 that could sense the contractions in its wearer’s muscles to know if they planned to start walking up stairs or down a ramp.

Now, a team from North Carolina State University has developed a computer vision system that gives a prosthetic leg the ability to not only “see” what’s ahead, but also calculate its level of certainty in that prediction.

A Prosthetic Leg with Computer Vision

For their study, published in the journal IEEE Transactions on Automation Science and Engineering, the NC State researchers taught an AI to see the difference between six types of terrain: tile, brick, concrete, grass, “upstairs,” and “downstairs.”
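As a rough illustration of what six-class terrain recognition involves, here is a minimal Python sketch built on a standard pretrained image classifier. The class names match the study; the model choice, preprocessing, and function names are assumptions for illustration, since the article does not describe the team's actual architecture.

```python
# Illustrative sketch only: the study's real model and training pipeline are
# not described in this article. This shows one plausible way to classify a
# camera frame into the six terrain types using a pretrained backbone.
import torch
import torch.nn as nn
from torchvision import models, transforms

TERRAIN_CLASSES = ["tile", "brick", "concrete", "grass", "upstairs", "downstairs"]

# Generic ImageNet-pretrained backbone with its head swapped for a 6-way classifier.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(TERRAIN_CLASSES))

# Typical preprocessing for frames from a leg- or glasses-mounted camera.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify_frame(frame):
    """Return the predicted terrain label and per-class probabilities for one frame."""
    model.eval()
    with torch.no_grad():
        x = preprocess(frame).unsqueeze(0)          # shape (1, 3, 224, 224)
        probs = torch.softmax(model(x), dim=1)[0]   # per-class probabilities
    return TERRAIN_CLASSES[int(probs.argmax())], probs
```

A lightweight backbone is shown here because, as the researchers note, computing power and cost are real constraints for cameras worn on the body.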

To train the AI to predict where it was headed, the researchers walked around indoors and outdoors while wearing cameras mounted on eyeglasses and on their own legs.

“We found that using both cameras worked well, but required a great deal of computing power and may be cost prohibitive,” researcher Helen Huang said in a news release.

“However, we also found that using only the camera mounted on the lower limb worked pretty well — particularly for near-term predictions, such as what the terrain would be like for the next step or two,” she continued.

The team designed their system to work with existing prosthetics: just add a camera. They have yet to test it on an actual robotic prosthetic leg, but they plan to do that next, as well as refine the system.

“We’re planning to work on ways to make the system more efficient, in terms of requiring less visual data input and less data processing,” researcher Boxuan Zhong said.

Factoring in Uncertainty

While a computer vision system that can predict what's ahead of a prosthetic leg wearer would be impressive on its own, the NC State researchers gave their AI an extra ability: it makes a prediction, calculates its level of certainty in that prediction, and uses that certainty to decide how to act.

“If the degree of uncertainty is too high, the AI isn’t forced to make a questionable decision — it could instead notify the user that it doesn’t have enough confidence in its prediction to act, or it could default to a ‘safe’ mode,” Zhong said.
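Here is a minimal sketch of that kind of confidence gate, assuming the model outputs per-class probabilities as in the classifier sketch above. The threshold value and the "safe" fallback are illustrative, not the researchers' actual parameters.

```python
# Illustrative sketch only: the researchers' actual uncertainty measure is not
# detailed in this article. This shows the general idea of gating a terrain
# prediction on its confidence and falling back to a "safe" mode when unsure.
import torch

CONFIDENCE_THRESHOLD = 0.8  # hypothetical cutoff, chosen for illustration

def choose_action(probs, classes):
    """Pick a terrain-specific adjustment only when the prediction is confident."""
    confidence, idx = torch.max(probs, dim=0)
    if float(confidence) < CONFIDENCE_THRESHOLD:
        # Too uncertain: don't force a questionable decision. Keep conservative
        # joint settings and flag the low confidence to the user.
        return {"mode": "safe", "notify_user": True}
    return {"mode": f"adjust_for_{classes[int(idx)]}", "notify_user": False}
```

The team's actual approach builds uncertainty estimation into the deep-learning system itself, as Lobaton describes below; a fixed softmax threshold is simply the most basic stand-in for that idea.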

The researchers believe this ability to factor in uncertainty could make their AI useful for applications far beyond prosthetics.

“We came up with a better way to teach deep-learning systems how to evaluate and quantify uncertainty, in a way that allows the system to incorporate uncertainty into its decision making,” researcher Edgar Lobaton said. “This is certainly relevant for robotic prosthetics, but our work here could be applied to any type of deep-learning system.”

