These robot legs taught themselves to walk

A new twist on reinforcement learning could make it easier to train tomorrow’s robots.

“Cassie,” a bot made by Agility Robotics, is essentially a pair of robot legs.

But Cassie has taught itself to walk — thanks to UC Berkeley researchers’ unique twist on reinforcement learning.

Why it matters: Legged robots are better at navigating tough terrain than their wheeled counterparts.

That gives them countless applications — from search and rescue to off-world exploration — and this new technique could make it easier to train the robots for any of those tasks.

Treat for trick: Reinforcement learning is a commonly used technique for training AI robots to walk.

Rather than giving an AI control over a robot right away, and risking it sending the expensive equipment tumbling down a set of stairs, researchers create a virtual environment designed to mimic the physics of the real one.

A virtual version of the robot then learns to walk in that environment through trial and error: the AI receives a reward for desired actions and a penalty when it does something wrong.

From that feedback, the AI eventually masters walking in the simulation — and then it’s given control of an actual robot.
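That reward-and-penalty loop is simple enough to sketch in a few lines of code. The toy below is not Cassie's physics or the Berkeley team's algorithm; it uses a made-up one-dimensional "walker" and a basic hill-climbing update, just to show how rewards and penalties steer trial-and-error learning.

```python
import numpy as np

# Toy illustration of reinforcement learning's reward/penalty loop.
# The "walker" is an invented 1-D system, not Cassie: the policy picks a
# forward push each step, earns reward for progress, and is penalized
# heavily if it pushes so hard that it "falls".

rng = np.random.default_rng(0)

def run_episode(policy_gain):
    total_reward = 0.0
    for _ in range(50):
        push = policy_gain * rng.normal(1.0, 0.1)  # action chosen by the current policy
        total_reward += push                       # treat: reward for forward progress
        if push > 1.5:                             # trick: too aggressive counts as a fall
            total_reward -= 10.0
            break
    return total_reward

# Trial and error: randomly nudge the policy and keep the change
# only when the episode's total reward improves.
gain = 0.5
for _ in range(200):
    candidate = gain + rng.normal(0.0, 0.05)
    if run_episode(candidate) > run_episode(gain):
        gain = candidate

print(f"learned policy gain: {gain:.2f}")
```

Real robot-learning systems replace the hill-climbing step with far more sophisticated algorithms, but the feedback loop — act, get a reward or penalty, adjust — is the same idea.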

The challenge: It’s impossible to perfectly mimic the real world in a simulation, and even tiny differences between the virtual world and the real one can affect the robot’s performance.

That means researchers must often manually adjust their AI once it’s already reached the robot stage — which can be a time-consuming process.

Doubling up: Rather than letting the AI powering their robot legs learn to walk in one simulation, the Berkeley team used two virtual environments.

In the first, the AI learned to walk by trying out different actions from a large, pre-programmed library of robot movements. During this training, the dynamics of the simulated environment would change randomly — sometimes the AI would experience less ground friction or find itself tasked with carrying a load.


This technique, called “domain randomization,” was incorporated into the training to help the AI think on its feet once it encountered the sometimes-unpredictable real world.
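In code, domain randomization amounts to re-rolling the simulator's physics before every training episode. The sketch below assumes a hypothetical set of tunable parameters (friction, payload, motor strength) with invented ranges; the Berkeley team's actual randomization scheme and values will differ.

```python
import random
from dataclasses import dataclass

@dataclass
class SimParams:
    # Hypothetical physics knobs for a walking simulator.
    ground_friction: float = 0.8
    payload_kg: float = 0.0
    motor_strength: float = 1.0

def randomize_domain() -> SimParams:
    # Re-roll the simulated world before each episode so the policy
    # never overfits to one exact set of physics. Ranges are illustrative.
    return SimParams(
        ground_friction=random.uniform(0.4, 1.0),  # sometimes slippery ground
        payload_kg=random.uniform(0.0, 10.0),      # sometimes carrying a load
        motor_strength=random.uniform(0.9, 1.1),   # motors a bit weak or strong
    )

# Training-loop skeleton (simulator and policy assumed to exist elsewhere):
# for episode in range(num_episodes):
#     params = randomize_domain()
#     sim.reset(params)
#     policy.learn_from(sim)
```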

In the second environment, the AI tested out what it learned in a simulation that very closely mimicked the physics of the real world.

That accuracy came at the cost of processing speed. It would have taken far too long for the AI to learn to walk in this simulation, but it served as a useful testing ground before making the leap to the real world.
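The overall pipeline can be pictured as three stages: learn in the fast, randomized simulator, validate the frozen policy in the slow, high-fidelity one, and only then hand over the hardware. Everything named in the sketch below (fast_sim, accurate_sim, policy, robot, the 95% success gate) is a hypothetical stand-in, not the researchers' actual setup.

```python
def staged_deployment(policy, fast_sim, accurate_sim, robot, success_gate=0.95):
    # Stage 1: learn by trial and error in the fast, domain-randomized simulator.
    policy.train(fast_sim, episodes=100_000)

    # Stage 2: no more learning. Just test the frozen policy in the slow,
    # high-fidelity simulator that closely mimics real-world physics.
    success_rate = policy.evaluate(accurate_sim, episodes=100)
    if success_rate < success_gate:
        raise RuntimeError("Not ready for hardware; keep training in simulation.")

    # Stage 3: hand control to the real robot legs.
    policy.deploy(robot)
```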

After that, the AI was given control over the robot legs and had very little trouble using them. It could walk across slippery terrain, carry loads, and even recover when shoved — all without any extra adjustments from the researchers.

First steps: The robot legs will need more training before they can have any real use outside the research lab. The Berkeley team now plans to see if they can replicate the bot’s smooth sim-to-real transfer with more dynamic and agile behaviors.

