SAGCI-System: Towards Sample-Efficient, Generalizable, Compositional, and Incremental Robot Learning

ICRA 2022

Jun Lv*
Qiaojun Yu*
Lin Shao*
Wenhai Liu
Wenqiang Xu
Cewu Lu

[ArXiv]
[Supp]
[Bibtex]

Abstract

Building general-purpose robots that perform a diverse range of tasks across a large variety of environments in the physical world at the human level is extremely challenging. It requires robot learning to be sample-efficient, generalizable, compositional, and incremental. In this work, we introduce a systematic learning framework called the SAGCI-system toward achieving these four requirements. Our system first takes raw point clouds gathered by the camera mounted on the robot's wrist as input and produces an initial model of the surrounding environment, represented as a Unified Robot Description Format (URDF) file. Our system adopts a learning-augmented differentiable simulation that loads the URDF. The robot then uses interactive perception to interact with the environment and verify and modify the URDF online. Leveraging the differentiable simulation, we propose a model-based learning algorithm that combines object-centric and robot-centric stages to efficiently produce policies for accomplishing manipulation tasks. We apply our system to articulated object manipulation tasks, both in simulation and in the real world. Extensive experiments demonstrate the effectiveness of our proposed learning framework.
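
The sketch below illustrates, in Python-style pseudocode, the pipeline described in the abstract: wrist-camera point clouds are turned into an initial URDF, loaded into a differentiable simulator, refined online through interactive perception, and then used for two-stage model-based policy learning. All function and class names here (build_initial_urdf, DifferentiableSim, optimize_object_trajectory, optimize_robot_actions, and the robot interface) are hypothetical placeholders for exposition, not the authors' actual API.

def sagci_pipeline(robot, task, num_probe_steps=10):
    # 1. Perception: gather raw point clouds from the wrist-mounted camera.
    point_cloud = robot.wrist_camera.scan()                # hypothetical interface

    # 2. Initial modeling: estimate links and joints and write a URDF file.
    urdf_path = build_initial_urdf(point_cloud)            # hypothetical helper

    # 3. Load the URDF into a learning-augmented differentiable simulator.
    sim = DifferentiableSim(urdf_path)                     # hypothetical simulator

    # 4. Interactive perception: probe the environment and modify the URDF
    #    online whenever observed motion disagrees with the current model.
    for _ in range(num_probe_steps):
        action = sim.propose_probe_action()
        observation = robot.execute(action)
        if not sim.is_consistent(observation):
            sim.update_urdf(observation)

    # 5. Model-based learning: an object-centric stage optimizes the target
    #    object's trajectory, then a robot-centric stage optimizes the robot
    #    actions that realize it, using gradients from the simulator.
    object_traj = optimize_object_trajectory(sim, task)    # hypothetical
    policy = optimize_robot_actions(sim, object_traj)      # hypothetical
    return policy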


Video

[Bilibili]
[YouTube]

Paper

SAGCI-System: Towards Sample-Efficient, Generalizable, Compositional, and Incremental Robot Learning.
ICRA, 2022.
(ArXiv)




Acknowledgements

This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.