Theory-based Causal Transfer: Integrating Instance-level Induction and Abstract-level Structure Learning

Mark Edmonds1,4, Xiaojian Ma1, Siyuan Qi1,4, Yixin Zhu2,4, Hongjing Lu2,3, Song-Chun Zhu1,2,4
1 Department of Computer Science, UCLA | 2 Department of Statistics, UCLA | 3 Department of Psychology, UCLA
4 International Center for AI and Robot Autonomy (CARA)

Abstract

Learning transferable knowledge across similar but different settings is a fundamental component of generalized intelligence. In this paper, we approach the transfer learning challenge from a causal theory perspective. Our agent is endowed with two basic yet general theories for transfer learning: (i) a task shares a common abstract structure that is invariant across domains, and (ii) the behavior of specific features of the environment remains constant across domains. We adopt a Bayesian perspective of causal theory induction and use these theories to transfer knowledge between environments. Given these general theories, the goal is to train an agent by interactively exploring the problem space to (i) discover, form, and transfer useful abstract and structural knowledge, and (ii) induce useful knowledge from the instance-level attributes observed in the environment. A hierarchy of Bayesian structures is used to model abstract-level structural causal knowledge, and an instance-level associative learning scheme learns which specific objects can be used to induce state changes through interaction. This model-learning scheme is then integrated with a model-based planner to achieve a task in the OpenLock environment, a virtual "escape room" with a complex hierarchy that requires agents to reason about an abstract, generalized causal structure. We compare performance against a set of predominant model-free reinforcement learning (RL) algorithms. The RL agents showed a poor ability to transfer learned knowledge across different trials, whereas the proposed model revealed similar performance trends as human learners and, more importantly, demonstrated transfer behavior across trials and learning situations.
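
The sketch below (Python, with hypothetical interfaces: run_trial, env.step, and the agent's plan/update methods are illustrative assumptions, not the authors' code or the actual OpenLock API) shows one way the three components described above, abstract-level structure learning, instance-level induction, and model-based planning, could interact within a single trial.

# Illustrative sketch only: the environment and agent interfaces below are
# assumed for exposition, not the authors' implementation or the real OpenLock API.

def run_trial(env, agent, n_attempts=30, actions_per_attempt=3):
    """One OpenLock-style trial: plan with the current causal model, act,
    then update both levels of the model from the observed causal event."""
    for _ in range(n_attempts):
        obs = env.reset_attempt()
        for _ in range(actions_per_attempt):
            # Model-based planning: choose the action whose causal subchain is
            # most probable under the current posterior over causal chains.
            action = agent.plan(obs)
            obs, causal_event, solved = env.step(action)
            # Instance-level induction: update associative likelihoods for the
            # attributes (action, position, color) involved in the event.
            agent.update_instance_level(action, causal_event)
            # Abstract-level structure learning: reweight candidate schemas by
            # how well they explain the chain executed so far.
            agent.update_structure_level(causal_event)
            if solved:
                break
    return agent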


Selected Figures

Causal hierarchy
Figure 3: Illustration of top-down and bottom-up processes. (a) Abstract-level structure learning hierarchy. At the top, atomic schemas provide the agent with environment-invariant task structures; at the bottom, causal subchains represent a single time step in the environment. The agent constructs the hierarchy and makes decisions at the causal subchain resolution. Atomic schemas $g_M$ provide the top-level structural knowledge. Abstract schemas $g_A$ are structures specific to a task but not a particular environment. Instantiated schemas $g_I$ are structures specific to a task and a particular environment. Causal chains $c$ are structures representing a single attempt; an abstract, uninstantiated causal chain is also shown for notation. Each subchain $c_i$ is a structure corresponding to a single action. PL, PH, L, and U denote the fluents pulled, pushed, locked, and unlocked, respectively. (b) The subchain posterior, computed from the abstract-level structure learning and instance-level inductive learning. (c) Instance-level inductive learning. Each likelihood term is learned from causal events $\rho_i$; likelihood terms are combined for actions, positions, and colors.
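
As a rough illustration of panels (b) and (c), the following self-contained Python toy combines a top-down structural prior over candidate subchains with instance-level likelihood terms for actions, positions, and colors. The factorization, names, and numbers are illustrative assumptions, not the paper's exact computation.

# Toy sketch of the bottom-up combination: instance-level likelihood terms for
# actions, positions, and colors are multiplied and weighted by the
# abstract-level structural prior over candidate subchains.

def subchain_posterior(structural_prior, p_action, p_position, p_color):
    """Combine a top-down structural prior with bottom-up instance likelihoods.

    structural_prior maps candidate subchains (action, position, color) to prior
    beliefs; the remaining arguments map each attribute to its learned likelihood.
    """
    posterior = {}
    for subchain, prior in structural_prior.items():
        action, position, color = subchain
        # Instance-level likelihood factorizes over the observed attributes.
        likelihood = p_action[action] * p_position[position] * p_color[color]
        posterior[subchain] = prior * likelihood
    z = sum(posterior.values())
    return {sc: p / z for sc, p in posterior.items()}

# Example with two candidate subchains (illustrative values only).
prior = {("push", "upper_left", "grey"): 0.5, ("push", "lower_right", "white"): 0.5}
p_act = {"push": 0.8, "pull": 0.2}
p_pos = {"upper_left": 0.7, "lower_right": 0.3}
p_col = {"grey": 0.6, "white": 0.4}
print(subchain_posterior(prior, p_act, p_pos, p_col))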
Model results
Figure 5: Model performance vs. human performance. (a) Proposed model baseline results for CC4/CE4. We see an asymmetry between the difficulty of CC and CE. (b) Human baseline performance (Edmonds et al. 2018). (c) Proposed model transfer results after training in CC3/CE3. The transfer results show that transferring to an incongruent CE4 condition (different structure, additional lever; i.e., CC3 to CE4) was more difficult than transferring to a congruent condition (same structure, additional lever; i.e., CE3 to CE4). However, the agent did not show a significant difference in difficulty between the congruent and incongruent cases for the CC4 transfer condition. (d) Human transfer performance (Edmonds et al. 2018).

Bibtex

@inproceedings{edmonds2020theory,
  title={Theory-based Causal Transfer: Integrating Instance-level Induction and Abstract-level Structure Learning},
  author={Edmonds, Mark and Ma, Xiaojian and Qi, Siyuan and Zhu, Yixin and Lu, Hongjing and Zhu, Song-Chun},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2020}
}