About Me

  • My Mission: To make the world as smart as possible. My life goal lies at the intersection of intelligence and system design; I want to create machine intelligence that will increase access to education and further general technological progress.
  • My mission approach: I study artificial intelligence (AI) and robotics from a perspective inspired by human cognition. I'm currently a PhD candidate advised by Dr. Song-Chun Zhu at the Center for Vision, Cognition, Learning, and Autonomy (VCLA) at UCLA.


University of California, Los Angeles

PhD in Computer Science, Artificial Intelligence concentration Expected June 2021

  • Dissertation: "Learning how and why: Causal learning and explanation from physical and interactive environments"

University of California, Los Angeles

M.S. in Computer Science June 2017

  • Thesis: "Learning Complex Functional Manipulations by Human Demonstration and Fluent Discovery"

University of Dayton

B.S. in Computer Engineering, Magna Cum Laude May 2015

  • Thesis: "High-Performance Declarative Memory through MapReduce"



Research Overview

Humans build generalizable and explainable representations of their environment through interaction, observation, imitation, intervention, and language. The following research, produced in a collaborative, team-based lab, explores how artificial agents can use these five modes of learning to build robust, transferable representations of tasks and environments.

Representation learning: The role of language in building generalizable representations

Center for Vision, Cognition, Learning, and Autonomy, UCLA Mar 2018 - Present

  • Created a virtual playground for embodied AI to learn interpretable, common-sense representations of its environment.
  • Built a simulated environment using Unreal Engine 4 (UE4) that couples language and vision in a scene graph.
  • Devised a dataset consisting of images, language labels, and object segmentation.
  • Ongoing: Build representation learning algorithm to use language labels to decompose latent encoding of the environment.
Explainable AI: How can robots explain their behavior to foster trust from humans?

Center for Vision, Cognition, Learning, and Autonomy, UCLA Feb 2018 - Dec 2019

  • Expanded prior imitation learning work by building human-understandable visual interfaces to describe the robot's haptic network and And-Or Graph.
  • Showed that robots may need to "think" one way to complete a task but another way to explain their behavior: the explanations that best fostered trust were not the model components that best aided the robot to achieve the task.
  • Result: Explainability should be treated as a first-class citizen when building AI systems that interact with humans.
Causal learning: Virtual escape room to examine how humans and AI learn transferable causal representations

Center for Vision, Cognition, Learning, and Autonomy, UCLA Feb 2017 - Feb 2020

  • Built a virtual "escape room" to test causal generalization: surface-level features change from room to room, while every room is governed by a common abstract causal structure (a series of levers) that determines the actions required to "unlock" the room.
  • Ran human subject experiments to verify human learners are capable of discovering the correct abstract causal structure.
  • Built hierarchical Bayesian model to achieve similar performance as human learners. This causal model was able to solve the escape room while seven state-of-the-art model-free reinforcement learning algorithms failed at the task.
  • Result: Both structural abstraction and feature generalization are critical for transfer learning and generalization.
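The transfer idea behind the escape room can be illustrated with a toy sketch (the lever roles, colors, and unlock rule below are hypothetical and greatly simplified, not the actual environment):

```python
# Toy sketch: each room instantiates the same abstract causal structure
# (an ordered chain of lever roles) with different surface-level features
# (here, colors). All names and rules are illustrative.

ABSTRACT_CHAIN = ["A", "B", "C"]  # abstract causal roles, in required order

def make_room(colors):
    """Bind surface features (colors) to the abstract causal roles."""
    return dict(zip(colors, ABSTRACT_CHAIN))

def unlocks(room, pushed_colors):
    """A room unlocks iff the pushed levers realize the abstract chain in order."""
    return [room[c] for c in pushed_colors] == ABSTRACT_CHAIN

room1 = make_room(["red", "green", "blue"])
room2 = make_room(["purple", "orange", "cyan"])  # new features, same structure

assert unlocks(room1, ["red", "green", "blue"])
assert not unlocks(room1, ["green", "red", "blue"])
# An agent that has abstracted the chain transfers immediately to room2:
assert unlocks(room2, ["purple", "orange", "cyan"])
```

A learner that memorizes colors fails in room2; one that recovers the abstract chain does not, which is the distinction the human and model experiments probed.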
Imitation learning: Training a robot to twist open a medicine bottle

Center for Vision, Cognition, Learning, and Autonomy, UCLA Dec 2015 - Feb 2017

  • Captured the complex human hand forces required to open seven different medicine bottles using a tactile glove covered in IMUs (inertial measurement units) and force sensors.
  • Constructed robot action planner using a haptic network, And-Or Graph, and the generalized Earley parser.
  • Result: Contrasted the common structure humans perceive in the procedure for opening any medicine bottle with the widely varying forces and action sequences the robot used for each bottle. This contrast prompted an investigation into abstraction and generalization.
Engineering contributions:
  • Neural network training for action planning and embodiment mapping between a human demonstrator and a robot
  • Localization using SLAM, IMU, and wheel odometry combined with Kalman filtering using a Microsoft Kinect and Velodyne VLP16
  • ROS navigation stack, including a dynamic footprint based on current position of arms
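The odometry-plus-measurement fusion idea can be sketched in one dimension (a toy, not the actual multi-sensor robot code; the noise variances below are assumed):

```python
# Minimal 1-D Kalman filter sketch: predict position from wheel odometry,
# then correct with a noisy absolute position fix (e.g. a SLAM estimate).
# All numbers are illustrative.

def kalman_step(x, P, u, z, q=0.05, r=0.5):
    """One predict/update cycle.
    x, P : position estimate and its variance
    u    : odometry displacement since the last step (control input)
    z    : noisy absolute position measurement
    q, r : assumed process and measurement noise variances
    """
    # Predict: dead-reckon with odometry; uncertainty grows.
    x_pred = x + u
    P_pred = P + q
    # Update: blend in the measurement, weighted by the Kalman gain.
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
for u, z in [(1.0, 1.2), (1.0, 1.9), (1.0, 3.1)]:
    x, P = kalman_step(x, P, u, z)
# The fused estimate ends near the true position (~3.0),
# and the variance P shrinks with each update.
```

The real system fuses SLAM, IMU, and wheel odometry in higher dimensions, but the predict/update structure is the same.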
Hardware Accelerated Declarative Memory for ACT-R

Wright Patterson Air Force Base, University of Dayton January 2014 - September 2015

  • Conducted Declarative Memory (semantic knowledge retrieval system) research for the ACT-R cognitive architecture.
  • Architected a new declarative memory system using CUDA, thread pools, parsers, and inter-process communication (IPC).
  • Continued project work between summers as undergrad thesis research.
  • Result: Parallelized declarative retrievals, yielding a 100x speedup over the fastest existing implementation.
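The MapReduce-style retrieval pattern can be sketched in a few lines (in Python with threads rather than CUDA; the chunks and scoring rule are illustrative, not ACT-R's actual activation equation):

```python
from concurrent.futures import ThreadPoolExecutor
import math

# Illustrative sketch of a parallel declarative retrieval: each worker
# scores one partition of memory chunks against a request (the "map"),
# and a final step keeps the best match (the "reduce").

CHUNKS = [
    {"name": "dog",  "isa": "animal",  "legs": 4},
    {"name": "bird", "isa": "animal",  "legs": 2},
    {"name": "car",  "isa": "vehicle", "legs": 0},
    {"name": "cat",  "isa": "animal",  "legs": 4},
]

def score(chunk, request):
    """Toy match score: number of request slots the chunk satisfies."""
    return sum(1 for k, v in request.items() if chunk.get(k) == v)

def map_partition(partition, request):
    """Best (score, name) pair within one partition of declarative memory."""
    return max(((score(c, request), c["name"]) for c in partition),
               default=(-math.inf, None))

def retrieve(request, n_workers=2):
    parts = [CHUNKS[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(map_partition, parts, [request] * n_workers)
    return max(results)[1]  # reduce: global best match
```

The speedup in the real system came from running the per-chunk scoring on thousands of GPU threads instead of a handful of CPU workers.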
Robotic Arm Brain Machine Interface

University of Dayton Senior Design Project August 2014 - May 2015

  • Expanded the capability of a brain-machine interface that pairs EEG signals with a robotic arm.
  • Developed EEG signal classifier using Linear Discriminant Analysis (LDA)
  • Added six additional gestures, improving the universality of the interface.
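A minimal sketch of how an LDA classifier of this kind works (synthetic two-class features standing in for EEG features; all numbers are illustrative):

```python
import numpy as np

# Toy two-class LDA: fit class means and a shared covariance on synthetic
# feature vectors, then classify with the linear discriminant w.x + b.

rng = np.random.default_rng(0)
X0 = rng.normal([0, 0], 0.5, size=(50, 2))   # e.g. "rest" features
X1 = rng.normal([2, 2], 0.5, size=(50, 2))   # e.g. "gesture" features

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
X_centered = np.vstack([X0 - mu0, X1 - mu1])
cov = X_centered.T @ X_centered / (len(X_centered) - 2)  # pooled covariance

w = np.linalg.solve(cov, mu1 - mu0)          # discriminant direction
b = -0.5 * (mu0 + mu1) @ w                   # threshold at the class midpoint

def predict(x):
    return int(x @ w + b > 0)                # 0 = rest, 1 = gesture
```

The real pipeline extracts band-power features from multi-channel EEG before this step, but the decision rule is this same linear discriminant.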


ACRE: Abstract Causal REasoning Beyond Covariation
C. Zhang, B. Jia, M. Edmonds, S.C. Zhu, Y. Zhu

CVPR 2021

Conference Paper Causal Reasoning Neuro-symbolic Reasoning

A tale of two explanations: Enhancing human trust by explaining robot behavior
M. Edmonds, F. Gao*, H. Liu*, X. Xie*, S. Qi, B. Rothrock, Y. Zhu, Y.N. Wu, H. Lu, S.C. Zhu
* equal contributors

Science Robotics, Volume 4, Issue 37, 2019

Journal Publication Explainable AI (XAI)

Theory-based Causal Transfer: Integrating Instance-level Induction and Abstract-level Structure Learning
M. Edmonds, X. Ma, S. Qi, Y. Zhu, H. Lu, S.C. Zhu

Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2020

Conference Paper Oral Presentation Causal Learning

Decomposing Human Causal Learning: Bottom-up Associative Learning and Top-down Schema Reasoning

41st Annual Meeting of the Cognitive Science Society (CogSci), 2019

Conference Paper Causal Learning

Hardware Accelerated Semantic Declarative Memory Systems through CUDA and MapReduce
M. Edmonds, T. Atahary, S. Douglass, T. Taha.

IEEE Transactions on Parallel and Distributed Systems (TPDS), March 2019

Journal Publication Declarative Memory

Human Causal Transfer: Challenges for Deep Reinforcement Learning
M. Edmonds*, J. Kubricht*, C. Summers, Y. Zhu, B. Rothrock, S.C. Zhu, H. Lu.
* equal contributors

40th Annual Meeting of the Cognitive Science Society (CogSci), 2018

Conference Paper Oral Presentation Causal Learning

Unsupervised Learning of Hierarchical Models for Hand-Object Interactions
X. Xie*, H. Liu*, M. Edmonds, F. Gao, S. Qi, Y. Zhu, B. Rothrock, S.C. Zhu.
* equal contributors

International Conference on Robotics and Automation (ICRA), 2018

Conference Paper Learning from Demonstration

Feeling the Force: Integrating Force and Pose for Fluent Discovery through Imitation Learning to Open Medicine Bottles
M. Edmonds*, F. Gao*, X. Xie, H. Liu, S. Qi, Y. Zhu, B. Rothrock, S.C. Zhu.
* equal contributors

International Conference on Intelligent Robots and Systems (IROS), 2017

Conference Paper Learning from Demonstration

A Glove-based System for Studying Hand-Object Manipulation via Pose and Force Sensing
H. Liu*, X. Xie*, M. Millar*, M. Edmonds, F. Gao, Y. Zhu, V. Santos, B. Rothrock, S.C. Zhu.
* equal contributors

International Conference on Intelligent Robots and Systems (IROS), 2017

Conference Paper Learning from Demonstration

High Performance Declarative Memory Systems through MapReduce
M. Edmonds, T. Atahary, S. Douglass, T. Taha.

Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), 2015

Conference Paper Declarative Memory

Brain Machine Interface Using Emotiv EPOC to Control Robai Cyton Robotic Arm

National Aerospace and Electronics Conference (NAECON), 2015

Conference Paper Brain Machine Interface

Work Experience

Director and President

Center for AI and Robot Autonomy (CARA) Mar 2021 - Present

  • Define updated mission and vision statements, seek alternative funding, and file paperwork to maintain 501(c)(3) status
Adjunct Professor

Santa Monica College June 2016 - Present

  • Teaching one to two 45-student classes per quarter: leading lectures, holding office hours, and creating course materials.
  • Instructed 25 courses: Internet Programming (HTML, CSS, JavaScript, MySQL, and PHP), Intro to C, and Intro to C++.
Robotics Research Engineer Intern

Center for AI and Robot Autonomy (CARA) June 2018 - Mar 2020

Teaching Assistant

Computer Science Department, UCLA September 2015 - June 2016

  • Teaching assistant for Introduction to C and Introduction to C++: led discussion sections (~50 students) and held office hours.
Teaching Assistant

Electric & Computer Engineering Department, University of Dayton January 2015 - May 2015

  • Teaching assistant for Electronic Devices Lab: aided in lab sessions (~40 students).
Software Engineering Intern

Garmin International May 2013 - August 2013

  • Automated the testing process for small craft airplane ACARS systems that send timed status messages to ground stations.
  • Reduced testing time by 40% and saved hundreds of vendor certification testing hours by optimizing simulation timing protocols and adhering to FAA safety standards.
Enrichment Workshop Tutor

School of Engineering, University of Dayton September 2012 - May 2015

  • Tutor for first-year engineering students covering calculus, chemistry, and physics.
  • Led a team of 8 tutors overseeing 40 students.
Summer School Teacher

Cristo Rey Kansas City May 2011 - August 2012

  • Taught four classes of junior and senior students at a prep school focused on college placement for underrepresented groups.


  • Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do. If you haven't found it yet, keep looking. Don't settle. As with all matters of the heart, you'll know when you find it.

    Steve Jobs
  • One finds limits by pushing them.

    Herbert Simon
  • The world needs dreamers and the world needs doers. But above all, the world needs dreamers who do.

    Sarah Ban Breathnach
  • People think that computer science is the art of geniuses but the actual reality is the opposite, just many people doing things that build on each other, like a wall of mini stones.

    Donald Knuth
  • I've always tried to go a step past wherever people expected me to end up.

    Beverly Sills

Get In Touch.

If you have any questions about me, my research interests, or my work, please reach out. Interesting thoughts from interesting people are always welcome. mark@mjedmonds.com