Autonomous Systems Lab

At the Autonomous Systems Lab, we pursue innovative research on cognitive robot motor skill learning and control based on human motion understanding. Our research has a two-fold focus: autonomous learning from observations of daily life and cognitive robot control. To realize intuitive robots that meet human expectations of a robotic companion, we study human beings and transfer the discovered mechanisms to robotic systems. In this way, robots can learn new skills without explicit programming by engineers and acquire complicated tasks incrementally within a generalized framework. In particular, by bridging learning from observation, robot motor control, and learning from self-practice, robots will become capable of performing complex tasks robustly under uncertainty.


News

  • Jun 2026: M2R2 has been accepted to ICRA 2026!
  • Jun 2026: ATLAS has been accepted to ARSO 2026!
  • Jun 2026: DexTwist has been accepted to ARSO 2026!
  • Feb 2026: Cross-Embodiment Imitation has been accepted to IEEE Robotics & Automation Magazine!
  • Nov 2025: Our “Constraint-Informed Temporal Action Segmentation” paper was presented at ICCAS 2025.
  • Nov 2025: Our “Robot Behavior Generation for Social Human-Robot Interaction” paper was published in the International Journal of Social Robotics!
  • Oct 2025: Prof. Dongheui Lee gave a keynote talk at IROS 2025.
  • Oct 2025: Our “Multimodal Anomaly Detection with a Mixture-of-Experts” paper was accepted at IROS 2025.
  • Sep 2025: Our “Personalized Motion Retargeting through Bidirectional Human-Robot Imitation” paper was accepted at ICDL 2025.
  • Sep 2025: Our “Computational models of the emergence of self-exploration in 2-month-old infants” paper was accepted at ICDL 2025.
  • Jul 2025: Our “Partner familiarity enhances performance in a manual precision task” paper was published in Scientific Reports!
  • Jun 2025: Our “Learning dexterous robot hand control by imitating human hands” paper was accepted at UR 2025.
  • Jun 2025: Our “The balance stabilising benefit of social touch” paper was published in PLOS ONE!
  • Mar 2025: Our paper on prioritized output tracking control was published in IEEE Transactions on Automatic Control!
  • Feb 2025: We are releasing the REASSEMBLE dataset.
  • Feb 2025: Our ConditionNET paper got accepted at Robotics and Automation Letters!
  • Jan 2025: I-CTRL has been accepted to RAM Journal!
  • Jan 2025: Our “Variable Stiffness for Robust Locomotion through Reinforcement Learning” paper was published in IFAC-PapersOnLine!
  • Dec 2024: Our “Multimodal Transformer Models for Human Action Classification” paper won the Best Intelligence Paper Award at RiTA.
  • Sep 2024: Our “Multimodal Transformer Models for Human Action Classification” paper was accepted at RiTA.
  • Sep 2024: Self-AWare has been accepted to Humanoids 2024 and HFR 2024.
  • Jun 2024: I-CTRL is out! Check it out to control any bipedal humanoid robot by imitating any human motion.
  • Dec 2023: SALADS has been accepted to ICRA 2024.
  • Dec 2023: ECHO has been accepted to ICRA 2024.
  • Dec 2023: UNIMASK-M has been accepted to AAAI 2024.
  • Sep 2023: ImitationNet has been accepted to Humanoids 2023.
  • Sep 2023: HOI4ABOT has been accepted to CoRL 2023.
  • Jun 2023: HOI-Gaze has been accepted to the CVIU journal (2023).
  • Jan 2023: DiffusionMotion has been accepted to ICRA 2023.
  • Nov 2022: I-CVAE has been accepted to WACV 2023.
  • Oct 2022: We won first place in the Ego4D Long-Term Action Anticipation Challenge at ECCV 2022 with I-CVAE.
  • Jun 2022: We won first place in the Ego4D Long-Term Action Anticipation Challenge at CVPR 2022 with I-CVAE.
  • Apr 2022: 2CHTR has been accepted to IROS 2022.

Publication Pages

M2R2: MultiModal Robotic Representation for Temporal Action Segmentation

IEEE International Conference on Robotics & Automation (ICRA 2026)

We propose a multimodal robotic representation for temporal action segmentation in long-horizon manipulation tasks.

ATLAS: An Annotation Tool for Long-horizon Robotic Action Segmentation

IEEE International Conference on Advanced Robotics and Its Social Impact (ARSO 2026)

We present an annotation tool designed for efficient labeling of long-horizon robotic action segmentation datasets.

DexTwist: Dexterous Hand Retargeting for Twist Motion via Mixed Reality-based Teleoperation

IEEE International Conference on Advanced Robotics and Its Social Impact (ARSO 2026)

We propose a dexterous hand retargeting method for twist motions using mixed reality-based teleoperation.

Cross-Embodiment Imitation: Learning a Unified Latent Space for Multirobot Control

IEEE Robotics & Automation Magazine Journal Paper

We learn a unified latent space enabling cross-embodiment imitation for multirobot control.

Constraint-Informed Temporal Action Segmentation

25th International Conference on Control, Automation and Systems (ICCAS 2025)

We propose constraint-informed temporal action segmentation leveraging robot kinematic and task constraints.

Robot Behavior Generation for Social Human-Robot Interaction

International Journal of Social Robotics (2025) Journal Paper

We propose a method for generating robot interaction behaviors conditioned on social context for human-robot interaction.

Multimodal Anomaly Detection with a Mixture-of-Experts

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2025)

We propose a mixture-of-experts approach for multimodal anomaly detection in robotic manipulation tasks.

Personalized Motion Retargeting through Bidirectional Human-Robot Imitation

IEEE International Conference on Development and Learning (ICDL 2025)

We propose personalized motion retargeting via bidirectional human-robot imitation learning.

Computational models of the emergence of self-exploration in 2-month-old infants

IEEE International Conference on Development and Learning (ICDL 2025)

We present computational models that account for the emergence of self-exploration behavior in early infancy.

Partner familiarity enhances performance in a manual precision task

Scientific Reports, 15(1), 23381 (2025) Journal Paper

We demonstrate that partner familiarity improves performance and coordination in dyadic manual precision tasks.

REASSEMBLE: A Multimodal Dataset for Contact-rich Robotic Assembly and Disassembly

Robotics: Science and Systems 2025 (RSS 2025)

We release a multimodal dataset for long-horizon, contact-rich assembly and disassembly tasks.

Learning dexterous robot hand control by imitating human hands

22nd International Conference on Ubiquitous Robots (UR 2025)

We learn dexterous robot hand control by directly imitating human hand demonstrations.

The balance stabilising benefit of social touch: influence of an individual's age and the partner's relative body characteristics

PLOS ONE, 20(6), e0314946 (2025) Journal Paper

We investigate how social touch contributes to balance stabilization and how age and partner characteristics influence this effect.

Ultimate Boundedness and Output Convergence of Prioritized Output Tracking Control Under Nonsmooth and Imperfect Feedback Linearization

IEEE Transactions on Automatic Control (2025) Journal Paper

We analyze the stability and convergence properties of prioritized output tracking control under imperfect feedback linearization.

I-CTRL: Imitation to Control Humanoid Robots Through Constrained Reinforcement Learning

IEEE Robotics and Automation Magazine Journal Paper

Control any bipedal humanoid robot by imitating human movements. Any motion, any robot, and in physics-based simulators!

Variable Stiffness for Robust Locomotion through Reinforcement Learning

IFAC-PapersOnLine, 59(18), 85-90 (2025)

We leverage variable stiffness control learned via reinforcement learning to achieve robust bipedal locomotion.

ConditionNET: Learning Preconditions and Effects for Anomaly Detection and Recovery

IEEE Robotics and Automation Letters

Learn the preconditions and effects of actions in a data-driven manner, and leverage the learned conditions for anomaly detection.

Multimodal Transformer Models for Human Action Classification

International Conference on Robot Intelligence Technology and Applications (RiTA 2024), Winner of the Best Intelligence Paper Award

We investigate how best to fuse multimodal information for the task of human action recognition.

Know your limits! Optimize the behavior of bipedal robots through self-awareness

Humanoids 2024 and HFR 2024

Enhance robot behavior to follow any textual or movement command by recognizing the robot's own limitations and expertise.

Robot Interaction Behavior Generation based on Social Motion Forecasting for Human-Robot Interaction

IEEE International Conference on Robotics and Automation (ICRA 2024)

Generating robot motions in social interactions conditioned on semantics, without any robot data!

Shared Autonomy via Variable Impedance Control and Virtual Potential Fields for Encoding Human Demonstrations

IEEE International Conference on Robotics and Automation (ICRA 2024)

We demonstrate efficient and safe human-robot collaboration through shared autonomy for industrial tasks.

A Unified Masked Autoencoder with Patchified Skeletons for Motion Synthesis

AAAI 2024

How can we tackle all variations of human motion synthesis with a single architecture? UNIMASK-M.

Unsupervised human-to-robot motion retargeting via expressive latent space

Humanoids 2023

Learn how real robots can imitate human movements from different modalities in an unsupervised manner.

HOI4ABOT: Human-Object Interaction Anticipation for Assistive roBOTs

Conference on Robot Learning (CoRL 2023)

Detect and anticipate human-object interactions for intention reading, which enables robots to assist humans.

Human–object interaction prediction in videos through gaze following

CVIU 2023 Journal Paper

Leveraging gaze provides essential cues for predicting human intention, which helps anticipate human-object interactions.

Can We Use Diffusion Probabilistic Models for 3D Motion Prediction?

IEEE International Conference on Robotics and Automation (ICRA 2023)

Diffusion models offer the right balance between likelihood and diversity when synthesizing human motions from past observations.

Intention-Conditioned Long-Term Human Egocentric Action Forecasting

IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2023) Winner of Ego4D LTA Challenge in CVPR2022 and ECCV2022

Understanding human intention is key to better predicting what a human will do in the long term.

Robust Human Motion Forecasting using Transformer-based Model

International Conference on Intelligent Robots and Systems (IROS 2022)

Decoupling space and time in human motion forecasting allows for more robust and efficient models.