Simulator Predictive Control: Using Learned Task Representations and MPC for Zero-Shot Generalization and Sequencing

Abstract

Simulation-to-real transfer is an important strategy for making reinforcement learning practical with real robots. Successful sim-to-real transfer systems have difficulty producing policies which generalize across tasks, despite training for the equivalent of thousands of hours of real robot time. To address this challenge, we present a novel approach for efficiently performing new robotic tasks directly on a real robot, based on model-predictive control (MPC) and learned task representations. Rather than learning end-to-end policies for single tasks in simulation and attempting to transfer them, we use simulation to learn (1) an embedding function encoding a latent representation of task components (skills), and (2) a single latent-conditioned policy for all tasks, and we transfer the frozen policy directly to the real robot. We then use MPC to perform new tasks without any exploration in the real environment, by choosing latent skill vectors to feed to the frozen policy, thereby controlling the real system in the skill latent space. Our MPC model is the frozen latent-conditioned policy itself, executed in the simulation environment running in parallel with the real robot. In short, we show how to reuse the simulation from the pre-training step of sim-to-real methods as a tool for foresight, allowing the sim-to-real policy to adapt to unseen tasks. We discuss the background and principles of our method, detail its practical implementation, and evaluate its performance by using it to perform motion tasks such as drawing and block pushing on a real Sawyer robot.
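To make the planning loop concrete, below is a minimal random-shooting sketch of MPC over the learned skill space. All interfaces here are assumptions for illustration, not the authors' actual API: `policy(obs, z)` stands for the frozen latent-conditioned policy, `sim` for a resettable copy of the training simulator with a gym-style `step`, `real` for the physical robot's environment, and `cost_fn` for a task cost on observations.

```python
import numpy as np

def mpc_skill_selection(policy, sim, real_obs, cost_fn,
                        latent_dim=4, num_candidates=64, horizon=10):
    """Pick the candidate latent skill whose simulated rollout is cheapest.

    All arguments are hypothetical interfaces; the hyperparameters are
    illustrative defaults, not tuned values from the paper.
    """
    best_z, best_cost = None, np.inf
    for _ in range(num_candidates):
        z = np.random.randn(latent_dim)            # sample a candidate skill vector
        sim.set_state_from_observation(real_obs)   # sync simulator to the real robot
        obs, cost = real_obs, 0.0
        for _ in range(horizon):                   # roll out the frozen policy in sim
            obs, _, _, _ = sim.step(policy(obs, z))
            cost += cost_fn(obs)
        if cost < best_cost:
            best_z, best_cost = z, cost
    return best_z

# Receding-horizon loop: re-plan in simulation, execute one step on hardware,
# so the real robot never explores; all search happens in the simulator.
# obs = real.reset()
# while not done:
#     z = mpc_skill_selection(policy, sim, obs, cost_fn)
#     obs, _, done, _ = real.step(policy(obs, z))
```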

Publication
In the Deep RL Workshop at the Conference on Neural Information Processing Systems (NeurIPS) 2018.