Cross Domain Policy Transfer with Effect Cycle-Consistency

1 King's College London 2 University of Aberdeen

International Conference on Robotics and Automation (ICRA), 2024

Abstract

Training a robotic policy from scratch with deep reinforcement learning can be prohibitively expensive due to sample inefficiency. To address this challenge, transferring policies trained in a source domain to a target domain becomes an attractive paradigm. Previous research has typically focused on domains with similar state and action spaces that differ in other aspects. In this paper, our primary focus lies in domains with different state and action spaces, which has broader practical implications, e.g., transferring a policy from robot A to robot B. Unlike prior methods that rely on paired data, we propose a novel approach for learning the mapping functions between state and action spaces across domains using unpaired data. To this end, we introduce effect cycle-consistency, which aligns the effects of transitions across the two domains through a symmetrical optimization structure for learning these mapping functions. Once the mapping functions are learned, we can seamlessly transfer a policy from the source domain to the target domain. Our approach has been tested on three locomotion tasks and two robotic manipulation tasks. The empirical results demonstrate that our method significantly reduces alignment errors and achieves better performance than the state-of-the-art method.
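Once the mapping functions are learned, policy transfer reduces to composing them with the frozen source policy. The PyTorch sketch below illustrates one plausible composition; the module names (state_map, action_map, TransferredPolicy) and the MLP parameterization are our own illustrative choices, not the authors' implementation.

import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=256):
    # A small MLP: one plausible parameterization for the mapping functions.
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

class TransferredPolicy(nn.Module):
    # Runs a frozen source-domain policy in the target domain by composing
    # a target->source state mapping with a source->target action mapping.
    def __init__(self, state_map, action_map, source_policy):
        super().__init__()
        self.state_map = state_map        # phi: target states -> source states
        self.action_map = action_map      # psi: source actions -> target actions
        self.source_policy = source_policy

    @torch.no_grad()
    def forward(self, s_target):
        s_source = self.state_map(s_target)      # translate the observation
        a_source = self.source_policy(s_source)  # query the frozen source policy
        return self.action_map(a_source)         # translate the action back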


Motivation


Figure 1. Unlike prior works that learn mapping functions by predicting the next state, an approach reported to be prone to compounding errors, we aim to align the effects of transitions across domains. For instance, if the effect of a transition in Domain A is moving the gripper from the left side to the right side, the effect of the translated transition in Domain B is expected to be moving the gripper from the left side to the right side as well.
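Reading "effect" as the state change induced by a transition, the alignment can be written as a simple regression loss. The sketch below is our interpretation, assuming a learned forward dynamics model of the target domain (dynamics_b, hypothetical) predicts the outcome of the translated transition; it is not necessarily the paper's exact objective.

import torch.nn.functional as F

def effect_alignment_loss(s_a, a_a, s_a_next, state_map, action_map, dynamics_b):
    # state_map:  maps Domain A states into Domain B
    # action_map: maps Domain A actions into Domain B
    # dynamics_b: learned forward model of Domain B (assumed available)
    s_b = state_map(s_a)                  # translated state
    a_b = action_map(a_a)                 # translated action
    s_b_next = dynamics_b(s_b, a_b)       # predicted outcome of the translation
    effect_a = state_map(s_a_next) - s_b  # original effect, in Domain B coordinates
    effect_b = s_b_next - s_b             # effect of the translated transition
    return F.mse_loss(effect_b, effect_a)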



Method Overview


Figure 2. We show the main terminology from both the source and target domains, and the main objective of our method.

Figure 3. The training structure. We apply the same objectives used for learning mapping functions from the source domain to the target domain to learning from the target domain back to the source domain. We aim to discover a translated action distribution that produces the same effect in the target domain as the original action does in the source domain.
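The symmetry in Figure 3 can be made concrete by applying the same effect-alignment objective in both directions and tying the paired mappings together with cycle-consistency terms. The following sketch reuses effect_alignment_loss from above; the decomposition into these particular terms, with equal weights, is an assumption on our part.

def symmetric_objective(batch_src, batch_tgt,
                        phi, psi,            # source -> target mappings
                        phi_inv, psi_inv,    # target -> source mappings
                        dyn_src, dyn_tgt):   # learned forward models of each domain
    s_s, a_s, s_s_next = batch_src  # unpaired source transitions
    s_t, a_t, s_t_next = batch_tgt  # unpaired target transitions

    # The same effect-alignment objective, applied in both directions.
    l_fwd = effect_alignment_loss(s_s, a_s, s_s_next, phi, psi, dyn_tgt)
    l_bwd = effect_alignment_loss(s_t, a_t, s_t_next, phi_inv, psi_inv, dyn_src)

    # Cycle-consistency: mapping across domains and back should recover the input.
    l_cyc = (F.mse_loss(phi_inv(phi(s_s)), s_s) +
             F.mse_loss(psi_inv(psi(a_s)), a_s) +
             F.mse_loss(phi(phi_inv(s_t)), s_t) +
             F.mse_loss(psi(psi_inv(a_t)), a_t))

    return l_fwd + l_bwd + l_cyc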



Experiment Results


Figure 4. Visualization of the source and target domains. We carried out experiments on three locomotion tasks and two robotic manipulation tasks.

TABLE I. The performance of the transferred policy under different morphologies (w.o. denotes without).

The behaviours of the transferred policies obtained through the learned mapping functions.
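For completeness, deploying a transferred policy only requires a standard interaction loop in the target environment. A minimal sketch, assuming a Gymnasium-style API and the TransferredPolicy sketch above; "TargetEnv-v0" is a placeholder identifier, not an environment from the paper.

import gymnasium as gym
import torch

env = gym.make("TargetEnv-v0")  # placeholder id, not from the paper
obs, _ = env.reset()
done, episode_return = False, 0.0
while not done:
    s = torch.as_tensor(obs, dtype=torch.float32).unsqueeze(0)
    action = transferred_policy(s).squeeze(0).numpy()  # a TransferredPolicy instance
    obs, reward, terminated, truncated, _ = env.step(action)
    episode_return += reward
    done = terminated or truncated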

BibTeX

@misc{zhu2024cross,
      title={Cross Domain Policy Transfer with Effect Cycle-Consistency},
      author={Ruiqi Zhu and Tianhong Dai and Oya Celiktutan},
      year={2024},
      eprint={2403.02018},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
}