If you train robots like dogs, they learn faster

Instead of needing a month, a robot mastered new “tricks” in just days with reinforcement learning.


Treats-for-tricks works for training dogs — and apparently AI robots, too.

That’s the takeaway from a new study out of Johns Hopkins, where researchers developed a training system that allowed a robot to quickly learn multi-step tasks in the real world by mimicking the way canines learn new tricks.

Reinforcement Learning

One day, AI robots could clean our homes, care for our elderly, and do all of the dull, dirty, and dangerous jobs we don’t want to do.

But the real world is complicated. Developers will need to train robots to learn on the job — it’d be impossible to program a dish-cleaning robot to recognize every possible dirty dish, for example, but it still needs to know what to do when an unfamiliar one turns up in the sink.

One way developers train AIs is by letting them explore a virtual world and “rewarding” them when they do something right. This technique is called reinforcement learning, and it’s not unlike how we train dogs — they do a trick, they get a treat.

While it can be effective, reinforcement learning can also be time-consuming — the AI might try a lot of things before landing on the reward-worthy trick.
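To make that concrete, here is a minimal, generic sketch of a reinforcement learning loop in Python. It is not the Johns Hopkins code; the toy environment, the state and action counts, and the hyperparameters are all assumptions chosen purely for illustration, with the “treat” handed out only when the full task is finished.

```python
# A minimal, generic sketch of reinforcement learning: an agent tries actions,
# and a Q-table is updated whenever the environment hands back a reward.
# Purely illustrative -- this is not the Johns Hopkins system, and the toy
# "environment" below is a made-up stand-in for a real robot task.
import random

NUM_STATES = 5      # hypothetical task states (e.g., how far along the task is)
NUM_ACTIONS = 3     # hypothetical actions the agent can try
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

q_table = [[0.0] * NUM_ACTIONS for _ in range(NUM_STATES)]

def toy_env_step(state, action):
    """Made-up environment: action 0 advances the task, everything else does nothing."""
    if action == 0 and state < NUM_STATES - 1:
        next_state = state + 1
        reward = 1.0 if next_state == NUM_STATES - 1 else 0.0  # "treat" only at the very end
    else:
        next_state, reward = state, 0.0
    return next_state, reward

for episode in range(500):
    state = 0
    while state < NUM_STATES - 1:
        # Explore occasionally, otherwise exploit the best known action.
        if random.random() < EPSILON:
            action = random.randrange(NUM_ACTIONS)
        else:
            action = max(range(NUM_ACTIONS), key=lambda a: q_table[state][a])
        next_state, reward = toy_env_step(state, action)
        # Standard Q-learning update: nudge the value toward reward + discounted future value.
        best_next = max(q_table[next_state])
        q_table[state][action] += ALPHA * (reward + GAMMA * best_next - q_table[state][action])
        state = next_state
```

Because the reward only arrives at the very end, the agent has to stumble around for many episodes before it ever earns its first “treat” — exactly the slowness the JHU team set out to address.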

To overcome this limitation, the JHU team developed a new reinforcement learning framework they call Schedule for Positive Task (SPOT).

“The question here was how do we get the robot to learn a skill?” lead author Andrew Hundt said in a press release. “I’ve had dogs so I know rewards work and that was the inspiration for how I designed the learning algorithm.”

See SPOT Stack

In the SPOT framework, the robot’s “reward” isn’t a tasty treat but numerical points. The “trick,” meanwhile, is stacking multiple blocks on top of one another.

One way to speed up training, the researchers discovered, was to reward their AI for completing “sub-tasks.” It’s the equivalent of giving a dog you’re training to sit a treat when it starts to lower its rear: the dog hasn’t done exactly what you wanted, but it’s on the right path.


It also helped if the AI lost points for doing something that negated its previous progress, like knocking over the blocks after stacking them — this is called “progress reversal.”
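A short Python sketch shows how those two reward-shaping ideas fit together: partial credit whenever a sub-task is finished (one more block on the stack) and a points penalty whenever progress is reversed. Every name and number here is a hypothetical illustration of the ideas described in the article, not the SPOT implementation.

```python
# A hedged sketch of the reward shaping described above: partial credit when a
# sub-task is completed (one more block on the stack) and a penalty when
# progress is reversed (the stack gets shorter). All names and values are
# hypothetical illustrations, not the SPOT code.

GOAL_HEIGHT = 4                   # assumed target: a stack of four blocks
SUBTASK_REWARD = 1.0              # "treat" for each new block successfully placed
PROGRESS_REVERSAL_PENALTY = -1.0  # points lost for undoing earlier progress

def shaped_reward(prev_height: int, new_height: int) -> float:
    """Score one action by how it changed the stack height."""
    if new_height > prev_height:
        reward = SUBTASK_REWARD           # sub-task done: the robot is on the right path
        if new_height == GOAL_HEIGHT:
            reward += 10.0                # assumed bonus for finishing the full stack
        return reward
    if new_height < prev_height:
        return PROGRESS_REVERSAL_PENALTY  # progress reversal: knocking blocks over costs points
    return 0.0                            # nothing changed, nothing earned

# Example: placing the third block earns +1; knocking the stack over costs -1.
print(shaped_reward(2, 3))   # 1.0
print(shaped_reward(3, 0))   # -1.0
```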

They also coded some common sense into the AI, pre-programming it with intuitions so it could avoid wasting time on dead ends and recognize what it was supposed to do more quickly.

“(G)rasping at thin air isn’t worth a robot’s time, but (since) robots learn through trial and error, they would not typically have this intuition, until now,” Hundt told Freethink. “We have developed a practical way for the robot to incorporate this common sense knowledge into a safety check, which skips the actions which are definitely not worth trying.”
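Here is one way such a safety check could look in Python, under the assumption that the robot’s perception system can report where objects are. The helper names and grid coordinates are hypothetical; the point is simply that obviously futile actions, like grasping at empty space, get filtered out before the learner spends a trial on them.

```python
# A hedged sketch of the "common sense" safety check described above: before
# the learner spends a trial on an action, obviously futile actions (like
# grasping at thin air) are masked out. The perception helper is a
# hypothetical stand-in, not the SPOT code.
from typing import List, Tuple

def object_detected_at(position: Tuple[int, int],
                       object_positions: List[Tuple[int, int]]) -> bool:
    """Hypothetical perception check: is there anything to grasp at this spot?"""
    return position in object_positions

def filter_grasp_actions(candidate_positions: List[Tuple[int, int]],
                         object_positions: List[Tuple[int, int]]) -> List[Tuple[int, int]]:
    """Keep only grasp targets where a grasp could plausibly succeed."""
    return [pos for pos in candidate_positions
            if object_detected_at(pos, object_positions)]

# Example: only two of the four candidate grasp points actually contain a block,
# so the other two are skipped instead of being learned about by trial and error.
blocks = [(1, 1), (2, 3)]
candidates = [(0, 0), (1, 1), (2, 3), (4, 4)]
print(filter_grasp_actions(candidates, blocks))  # [(1, 1), (2, 3)]
```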

The Future of the SPOT Framework

Ultimately, the framework allowed the team to train an actual robot — not just an AI in a virtual world — to accurately complete multi-step tasks much faster than another common reinforcement learning method.

“(The robot) quickly learns the right behavior to get the best reward,” Hundt said in the press release. “In fact, it used to take a month of practice for the robot to achieve 100% accuracy. We were able to do it in two days.”

His hope is that the SPOT framework might one day help AI developers train robots to do things far more complicated than stacking blocks.

“We believe that with further development, this technology has the potential to change a variety of industries for the better, from home care and surgery to warehousing and even self-driving cars,” he told Freethink.

