Annotating long-horizon robotic demonstrations with precise temporal action boundaries is crucial for
training and evaluating action segmentation and manipulation policy learning methods. Existing annotation
tools, however, are often limited: they are designed primarily for vision-only data, do not natively
support synchronized visualization of robot-specific time-series signals (e.g., gripper state or
force/torque), or require substantial effort to adapt to different dataset formats. In this paper, we
introduce ATLAS, an annotation tool tailored for long-horizon robotic action
segmentation. ATLAS provides time-synchronized visualization of multi-modal robotic data, including
multi-view video and proprioceptive signals, and supports annotation of action boundaries, action labels,
and task outcomes. The tool natively handles widely used robotics dataset formats such as ROS bags and
the Reinforcement Learning Datasets (RLDS) format, and provides direct support for specific datasets such
as REASSEMBLE. ATLAS can be easily extended to new formats via a modular dataset abstraction layer (sketched below). Its
keyboard-centric interface minimizes annotation effort and improves efficiency. In experiments on a
contact-rich assembly task, ATLAS reduced the average per-action annotation time by at least
6% compared to ELAN, while the inclusion of time-series signals improved temporal
alignment with expert annotations by more than 2.8% and reduced boundary error
fivefold relative to vision-only annotation.
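As a rough illustration of the kind of interface a modular dataset abstraction layer implies, the sketch below defines a minimal adapter API in Python. All names here (`DatasetAdapter`, `SyncedSample`, `episode_ids`, `load_episode`, `RLDSAdapter`) are hypothetical assumptions for exposition, not the actual ATLAS API.

```python
# Illustrative sketch of a dataset abstraction layer for an annotation tool.
# All class and method names are hypothetical; the real ATLAS interface may differ.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Dict, List

import numpy as np


@dataclass
class SyncedSample:
    """One time-synchronized slice of a demonstration."""
    timestamp: float                  # seconds since episode start
    frames: Dict[str, np.ndarray]     # camera name -> HxWx3 image
    signals: Dict[str, float]         # e.g. "gripper_state", "force_z"


class DatasetAdapter(ABC):
    """Format-specific loaders subclass this; the UI only sees this API."""

    @abstractmethod
    def episode_ids(self) -> List[str]:
        """List the demonstrations available in the dataset."""

    @abstractmethod
    def load_episode(self, episode_id: str) -> List[SyncedSample]:
        """Return the episode as time-ordered, synchronized samples."""


class RLDSAdapter(DatasetAdapter):
    """Example adapter for RLDS-style episodic data (sketch only)."""

    def __init__(self, path: str):
        self.path = path  # location of the RLDS dataset on disk

    def episode_ids(self) -> List[str]:
        ...  # enumerate episodes, e.g. via a TFDS-style reader

    def load_episode(self, episode_id: str) -> List[SyncedSample]:
        ...  # map RLDS steps to SyncedSample records
```

Under this design, a ROS bag adapter would implement the same two methods, so the annotation interface needs no format-specific logic to support a new dataset.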