Autonomous Systems Lab

At the Autonomous Systems Lab we pursue innovative research on cognitive robot motor-skill learning and control based on human motion understanding. The research has two main focuses: autonomous learning from observations of daily life, and cognitive robot control. To realize an intuitive robot that satisfies human expectations of a robotic companion, we study human beings and transfer the discovered mechanisms to robotic systems. In this way, a robot can learn new skills without explicit programming by engineers and can learn complicated tasks incrementally within a generalized framework. In particular, by bridging learning from observation, robot motor control, and learning from self-practice, robots become capable of performing complex tasks robustly under uncertainty.


News

  • Feb 2025: We are releasing the REASSEMBLE dataset.
  • Feb 2025: Our ConditionNET paper got accepted at Robotics and Automation Letters!
  • Jan 2025: I-CTRL has been accepted to RAM Journal!
  • Dec 2024: Our “Multimodal Transformer Models for Human Action Classification” paper won the Best Intelligence Paper Award at RiTA.
  • Sep 2024: Our “Multimodal Transformer Models for Human Action Classification” paper was accepted at RiTA.
  • Sep 2024: Self-AWare has been accepted to Humanoids 2024 and HFR 2024.
  • Jun 2024: I-CTRL is out! Take a look at how to control any bipedal humanoid robot by imitating human motion.
  • Dec 2023: SALADS has been accepted to ICRA 2024.
  • Dec 2023: ECHO has been accepted to ICRA 2024.
  • Dec 2023: UNIMASK-M has been accepted to AAAI 2024.
  • Sep 2023: ImitationNet has been accepted to Humanoids 2023.
  • Sep 2023: HOI4ABOT has been accepted to CoRL 2023.
  • Jun 2023: HOI-Gaze has been accepted to the CVIU Journal 2023.
  • Jan 2023: DiffusionMotion has been accepted to ICRA 2023.
  • Nov 2022: I-CVAE has been accepted to WACV 2023.
  • Oct 2022: We won the ECCV@2022 Ego4D Long-Term Action Anticipation Challenge: First Place Award with I-CVAE.
  • Jun 2022: We won the CVPR@2022 Ego4D Long-Term Action Anticipation Challenge: First Place Award with I-CVAE.
  • Apr 2022: 2CHTR has been accepted to IROS 2022.

Publication Pages

REASSEMBLE: A Multimodal Dataset for Contact-rich Robotic Assembly and Disassembly

Robotics: Science and Systems 2025 (RSS 2025)

We release a multimodal dataset for long-horizon, contact-rich assembly and disassembly tasks.

Multimodal Transformer Models for Human Action Classification

International Conference on Robot Intelligence Technology and Applications (RiTA 2024). Winner of the Best Intelligence Paper Award.

We investigate how best to fuse multimodal information for the task of human action recognition.

ConditionNET: Learning Preconditions and Effects for Anomaly Detection and Recovery

IEEE Robotics and Automation Letters

Learns the preconditions and effects of actions in a data-driven manner, and leverages the learned conditions for anomaly detection.

Know your limits! Optimize the behavior of bipedal robots through self-awareness

Humanoids 2024 and HFR 2024

Enhances robot behavior when following textual and movement commands by making the robot aware of its own limitations and expertise.

I-CTRL: Imitation to Control Humanoid Robots Through Constrained Reinforcement Learning

IEEE Robotics and Automation Magazine (journal paper)

Control any bipedal humanoid robot by imitating human movements. Any motion, any robot, and in physics-based simulators!

Robot Interaction Behavior Generation based on Social Motion Forecasting for Human-Robot Interaction

IEEE International Conference on Robotics and Automation (ICRA 2024)

Generating robot motions in social interactions conditioned on semantics, and without any robot data!

Shared Autonomy via Variable Impedance Control and Virtual Potential Fields for Encoding Human Demonstrations

IEEE International Conference on Robotics and Automation (ICRA 2024)

We demonstrate efficient and safe human-robot collaboration through shared autonomy for industrial tasks.

A Unified Masked Autoencoder with Patchified Skeletons for Motion Synthesis

AAAI 2024

How can we tackle all variations of human motion synthesis with a single architecture? UNIMASK-M.

Unsupervised human-to-robot motion retargeting via expressive latent space

Humanoids 2023

Learn how real robots can imitate human movements from different modalities in an unsupervised manner.

HOI4ABOT: Human-Object Interaction Anticipation for Assistive roBOTs

Conference on Robot Learning (CoRL 2023)

Detects and anticipates human-object interactions for intention reading, which enables robots to assist humans.

Human–object interaction prediction in videos through gaze following

CVIU Journal 2023 (journal paper)

Leveraging gaze provides essential cues for predicting human intention, which helps anticipate human-object interactions.

Can We Use Diffusion Probabilistic Models for 3D Motion Prediction?

IEEE International Conference on Robotics and Automation (ICRA 2023)

Diffusion models offer the right balance between likelihood and diversity when synthesizing human motions from past observations.

Intention-Conditioned Long-Term Human Egocentric Action Forecasting

IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2023) Winner of Ego4D LTA Challenge in CVPR2022 and ECCV2022

Understanding human intention is key to better predicting what a human will do in the long term.

Robust Human Motion Forecasting using Transformer-based Model

International Conference on Intelligent Robots and Systems (IROS 2022)

Decoupling space and time in human motion forecasting allows for more robust and efficient models.