Berkeley shows off accelerated learning that gets robots up and running in minutes
Robots that rely on AI to learn a new task generally require a laborious and repetitive training process. Berkeley researchers are trying to simplify and shorten that with an innovative learning technique where the robot fills in the gaps instead of starting from scratch.
The team shared several lines of work to showcase at TC Sessions: Robotics today, and you can hear about them in the video below, starting with Berkeley researcher Stephen James.
“The technique we use is kind of a contrastive learning setup, where the YouTube video is recorded and some areas are patched, and the idea is that the robot then tries to reconstruct that image,” James explained. “It has to understand what could be in those patches and then generate the idea of what could be behind there. It has to get a really good understanding of what’s going on in the world.”
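To make that fill-in-the-gaps idea concrete, here is a minimal sketch of a masked-reconstruction objective of the kind James describes, written in PyTorch: patches of a video frame are blanked out and a small encoder/decoder is scored only on how well it reconstructs the hidden regions. The patch size, latent width and simple MLP layers are illustrative assumptions, not the team's actual architecture.

```python
# Minimal sketch of masked-reconstruction pretraining: hide patches of a
# frame and train a model to fill them back in. All sizes and the simple
# MLP encoder/decoder are illustrative assumptions.
import torch
import torch.nn as nn

PATCH = 16                      # 16x16 pixel patches (assumed)
DIM = 256                       # latent width (assumed)

def patchify(frames):           # frames: (B, 3, 224, 224)
    b = frames.shape[0]
    p = frames.unfold(2, PATCH, PATCH).unfold(3, PATCH, PATCH)   # (B, 3, 14, 14, 16, 16)
    return p.permute(0, 2, 3, 1, 4, 5).reshape(b, 14 * 14, 3 * PATCH * PATCH)

encoder = nn.Sequential(nn.Linear(3 * PATCH * PATCH, DIM), nn.ReLU(), nn.Linear(DIM, DIM))
decoder = nn.Sequential(nn.Linear(DIM, DIM), nn.ReLU(), nn.Linear(DIM, 3 * PATCH * PATCH))

def masked_reconstruction_loss(frames, mask_ratio=0.75):
    patches = patchify(frames)                             # (B, N, patch_dim)
    mask = torch.rand(patches.shape[:2]) < mask_ratio      # True = patch is hidden
    visible = patches.masked_fill(mask.unsqueeze(-1), 0.0) # blank out hidden patches
    pred = decoder(encoder(visible))                       # model fills in the gaps
    return ((pred - patches) ** 2)[mask].mean()            # scored only on hidden patches

# one illustrative training step on a batch of video frames
frames = torch.rand(8, 3, 224, 224)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)
opt.zero_grad()
masked_reconstruction_loss(frames).backward()
opt.step()
```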
Of course, it doesn't learn just by watching YouTube, however common that may be among humans. The operators have to move the robot themselves, either physically or with a VR controller, to give it a general idea of what it's trying to do. It combines this information with the broader understanding of the world it built up by filling in the video frames, and it can eventually integrate many other sources as well.
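As a rough illustration of how a handful of teleoperated demonstrations could be layered on top of that pretrained understanding, the sketch below fits a small policy head to recorded demo actions by plain behavior cloning while keeping the pretrained features fixed. The action size, demo format and stand-in encoder are assumptions for illustration, not the group's pipeline.

```python
# Rough behavior-cloning sketch: reuse a pretrained visual representation and
# fit only a small policy head to a handful of operator demonstrations.
import torch
import torch.nn as nn

ACTION_DIM = 7                                   # e.g. joint targets (assumed)
FEAT_DIM = 256                                   # size of pretrained features (assumed)

pretrained_encoder = nn.Linear(3 * 224 * 224, FEAT_DIM)    # stand-in for the model above
policy = nn.Sequential(nn.Linear(FEAT_DIM, 256), nn.ReLU(), nn.Linear(256, ACTION_DIM))

def behavior_cloning_loss(demo_frames, demo_actions):
    with torch.no_grad():                        # keep the pretrained representation fixed
        feats = pretrained_encoder(demo_frames.flatten(1))
    return ((policy(feats) - demo_actions) ** 2).mean()

# ten teleoperated demonstrations, each a (frame, action) pair
frames = torch.rand(10, 3, 224, 224)
actions = torch.rand(10, ACTION_DIM)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    behavior_cloning_loss(frames, actions).backward()
    opt.step()
```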
The approach is already yielding results, James said: “Normally it can sometimes take hundreds of demos to do a new task, while now we can give a handful of demos, maybe ten, and it can do the task.”
Alejandro Escontrela specializes in designing models that extract relevant data from YouTube videos, such as movements of animals, people or other robots. The robot uses these models to inform its own behavior and judge whether a particular movement seems like something it should try out.
In the end, it tries to replicate the movements from the videos so closely that another model watching it can't tell whether it's a robot or a real German Shepherd chasing the ball.
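That "can't tell the difference" test is the core of adversarial imitation, and a generic version of it can be sketched as follows: a discriminator learns to separate motion clips taken from the reference videos from motion produced by the robot, and the robot is rewarded whenever it fools the discriminator. The motion features and network sizes below are assumed for illustration and are not Escontrela's actual models.

```python
# Generic adversarial-imitation sketch: a discriminator separates reference
# motion from robot motion, and the robot is rewarded for fooling it.
import torch
import torch.nn as nn

MOTION_DIM = 24 * 8          # e.g. 24 joint features over an 8-step window (assumed)

disc = nn.Sequential(nn.Linear(MOTION_DIM, 128), nn.ReLU(), nn.Linear(128, 1))
bce = nn.BCEWithLogitsLoss()

def discriminator_loss(video_motion, robot_motion):
    real = disc(video_motion)                    # clips extracted from YouTube footage
    fake = disc(robot_motion)                    # clips produced by the robot's policy
    return bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))

def style_reward(robot_motion):
    # The policy earns reward when the discriminator thinks its motion is "real",
    # i.e. indistinguishable from the animal or person in the video.
    with torch.no_grad():
        return torch.sigmoid(disc(robot_motion)).squeeze(-1)
```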
Interestingly, many robots like this learn first in a simulated environment, where movements can be tested before they are tried on real hardware. But as Danijar Hafner explains, the training process has become efficient enough to skip that step: the robot can romp around in the real world and learn live from interactions such as walking, tripping and, of course, being pushed. The advantage is that it can learn on the fly rather than having to go back to the simulator each time it needs to integrate new information, which further simplifies the task.
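In training-loop terms, the difference Hafner describes looks roughly like the sketch below: instead of alternating between the real robot and long retraining runs in a simulator, every real-world interaction goes straight into a replay buffer and the model is updated continuously while the robot keeps acting. The environment interface and update function are placeholders, not the team's actual system.

```python
# Sketch of an online, real-world learning loop: every interaction feeds a
# replay buffer and the model is updated as the robot keeps moving.
import random
from collections import deque

class RealRobotEnv:
    """Placeholder for a real-robot interface (not an actual API)."""
    def reset(self): ...
    def step(self, action): ...      # returns (observation, reward, done)

def train_online(env, policy, update, steps=100_000):
    buffer = deque(maxlen=50_000)    # recent real-world experience
    obs = env.reset()
    for _ in range(steps):
        action = policy(obs)                      # act in the real world, no simulator
        next_obs, reward, done = env.step(action)
        buffer.append((obs, action, reward, next_obs, done))
        if len(buffer) >= 256:
            update(random.sample(buffer, 256))    # learn on the fly from live data
        obs = env.reset() if done else next_obs   # a stumble just becomes more data
```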
“I think the holy grail of robotic learning is to learn as much as you can in the real world, and as fast as you can,” Hafner said. They certainly seem to be moving toward that goal. Watch the full video of the team's work here.
