Sai Haneesh Allu

I'm a Ph.D. candidate in Computer Science at the University of Texas at Dallas, advised by Dr. Yu Xiang in the Intelligent Robotics and Vision Lab.

My research focuses on robot learning for mobile manipulation in unstructured real-world environments. I develop frameworks for learning human skills from videos and transferring them to robots via trajectory optimization. I also work on semantic scene representations for long-horizon planning, enabling robots to navigate and perform complex tasks autonomously. My work sits at the intersection of robot learning, mobile manipulation, and semantic exploration.

Previously, I completed my Master's in Control and Automation at IIT Delhi, advised by Dr. Shubhendu Bhasin, and my Bachelor's in Electrical and Electronics Engineering at NIT Warangal. I also co-founded VECROS Technologies, where I led the development of autonomous quadrotor systems.


Publications (* denotes equal contribution and joint lead authorship)

HRT1
HRT1: One-Shot Human-to-Robot Trajectory Transfer for Mobile Manipulation
Sai Haneesh Allu*, Jishnu Jaykumar P*, Ninad Khargonkar, Tyler Summers, Jian Yao, Yu Xiang
Science Robotics (Under Submission)
Website PDF Code
We introduce a novel system for human-to-robot trajectory transfer that enables robots to manipulate objects by learning from human demonstration videos. The system consists of four modules. The first is a data collection module that records human demonstration videos from the robot's point of view using an AR headset. The second is a video understanding module that detects objects and extracts 3D human-hand trajectories from the demonstration videos. The third transfers a human-hand trajectory into a reference trajectory for the robot end-effector in 3D space. The last module uses a trajectory optimization algorithm to solve for a trajectory in the robot configuration space that follows the end-effector trajectory transferred from the human demonstration. Together, these modules enable a robot to watch a human demonstration video once and then repeat the same mobile manipulation task in different environments, even when objects are placed differently than in the demonstration.
@article{2025hrt1, title = {HRT1: One-Shot Human-to-Robot Trajectory Transfer for Mobile Manipulation}, author = {Allu, Sai Haneesh and P, Jishnu Jaykumar and Khargonkar, Ninad and Summers, Tyler and Yao, Jian and Xiang, Yu}, journal = {arXiv}, year = {2025} }
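As a minimal sketch of the transfer step (an illustration, not the paper's implementation): given a 3D human-hand trajectory and a rigid transform that aligns the demonstration frame with the robot end-effector frame, the reference trajectory can be obtained by applying the transform to every waypoint. The transform `T` here is a placeholder input; in the actual system it would come from relating the demonstration to the robot's current observation.

```python
import numpy as np

def transfer_trajectory(hand_traj, T):
    """Map an (N, 3) human-hand trajectory into the robot end-effector
    frame by applying a 4x4 homogeneous transform T to each waypoint.

    T is assumed given here; in practice it would be estimated from the
    demonstration and the robot's current observation of the scene.
    """
    homo = np.hstack([hand_traj, np.ones((len(hand_traj), 1))])  # (N, 4)
    return (homo @ T.T)[:, :3]

# A pure translation shifts every waypoint by the same offset.
T = np.eye(4)
T[:3, 3] = [0.5, 0.0, 0.2]
hand_traj = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
robot_traj = transfer_trajectory(hand_traj, T)
```

The resulting end-effector waypoints would then be handed to the configuration-space trajectory optimizer described above.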
Semantic Mapping
A Modular Robotic System for Autonomous Exploration and Semantic Updating
Sai Haneesh Allu, Itay Kadosh, Tyler Summers, Yu Xiang
ICRA, 2026 (Under Review)
Website PDF Code
We present a modular robotic system for autonomous exploration and semantic updating of large-scale unknown environments. Our approach enables a mobile robot to build, revisit, and update a hybrid semantic map that integrates a 2D occupancy grid for geometry with a topological graph for object semantics. Unlike prior methods that rely on manual teleoperation or precollected datasets, our two-phase approach achieves end-to-end autonomy: first, a modified frontier-based exploration algorithm with dynamic search windows constructs a geometric map; second, the robot revisits the environment with a greedy trajectory planner and updates object semantics using open-vocabulary object detection and segmentation. This modular system, compatible with any metric SLAM framework, supports continuous operation by efficiently updating the semantic graph to reflect short-term and long-term changes such as object relocation, removal, or addition. We validate the approach on a Fetch robot in real-world indoor environments of approximately 8,500 m^2 and 117 m^2, demonstrating robust and scalable semantic mapping, continuous adaptation, and a fully autonomous integration of exploration, mapping, and semantic updating on a physical robot.
@article{allu2024modular, title={A Modular Robotic System for Autonomous Exploration and Semantic Updating}, author={Allu, Sai Haneesh and others}, year={2026} }
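A hedged sketch of the frontier-detection idea underlying the first exploration phase (the grid encoding and function names are illustrative, not the system's actual code): frontier cells are free cells bordering unknown space, and the exploration planner steers the robot toward them until none remain. The dynamic search windows mentioned above would restrict this scan to a region around the robot rather than the full grid.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1  # illustrative cell encoding

def find_frontiers(grid):
    """Return (row, col) indices of frontier cells: free cells with
    at least one unknown 4-neighbor in the occupancy grid."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers

# Tiny example: two free cells, one wall, the rest unexplored.
grid = np.full((3, 3), UNKNOWN)
grid[0, 0] = FREE
grid[0, 1] = FREE
grid[1, 0] = OCCUPIED
frontiers = find_frontiers(grid)
```

Here only the cell at (0, 1) is a frontier: (0, 0) is free but bordered by free and occupied cells, not unknown space.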
GraspTrajOpt
Grasping Trajectory Optimization with Point Clouds
Yu Xiang, Sai Haneesh Allu, Rohith Peddi, Tyler Summers, Vibhav Gogate
IROS, 2024
Oral Presentation
Website PDF Code
We introduce a new trajectory optimization method for robotic grasping based on a point-cloud representation of robots and task spaces. In our method, robots are represented by 3D points on their link surfaces. The task space of a robot is represented by a point cloud that can be obtained from depth sensors. Using the point-cloud representation, goal reaching in grasping can be formulated as point matching, while collision avoidance can be efficiently achieved by querying the signed distance values of the robot points in the signed distance field of the scene points. Consequently, a constrained nonlinear optimization problem is formulated to solve the joint motion and grasp planning problem. A key advantage of our method is that the point-cloud representation generalizes to any robot in any environment. We demonstrate the effectiveness of our method by performing experiments on a tabletop scene and a shelf scene for grasping with a Fetch mobile manipulator and a Franka Panda arm.
@inproceedings{xiang2024grasping, title={Grasping Trajectory Optimization with Point Clouds}, author={Xiang, Yu and Allu, Sai Haneesh and others}, booktitle={IROS}, year={2024} }
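A minimal sketch of the collision-avoidance term (illustrative only: the paper queries a precomputed signed distance field of the scene points, whereas this brute-force version computes unsigned nearest-neighbor distances): robot surface points that come closer to the scene than a safety margin incur a penalty that the optimizer drives to zero.

```python
import numpy as np

def collision_cost(robot_pts, scene_pts, margin=0.05):
    """Penalize robot surface points closer than `margin` to scene points.

    Brute-force nearest-neighbor distances stand in for the precomputed
    signed distance field used in the paper, so distances here are unsigned.
    robot_pts: (N, 3) points sampled on the robot link surfaces.
    scene_pts: (M, 3) point cloud of the task space from a depth sensor.
    """
    # Pairwise distances via broadcasting: shape (N, M).
    d = np.linalg.norm(robot_pts[:, None, :] - scene_pts[None, :, :], axis=-1)
    nearest = d.min(axis=1)  # distance from each robot point to the scene
    return np.maximum(0.0, margin - nearest).sum()

scene_pts = np.array([[1.0, 0.0, 0.0]])
safe = collision_cost(np.array([[0.0, 0.0, 0.0]]), scene_pts)   # far away: 0.0
close = collision_cost(np.array([[0.99, 0.0, 0.0]]), scene_pts)  # inside margin: ~0.04
```

In the full method this term appears as a constraint alongside the point-matching goal-reaching objective in the nonlinear program over joint trajectories.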
SceneReplica
SceneReplica: Benchmarking Real-World Robot Manipulation
Ninad Khargonkar*, Sai Haneesh Allu*, Yangxiao Lu, Jishnu Jaykumar P, Balakrishnan Prabhakaran, Yu Xiang
ICRA, 2024
Oral Presentation
Website PDF Code
We present a new reproducible benchmark for evaluating robot manipulation in the real world, specifically focusing on a pick-and-place task. Our benchmark uses the YCB object set, a commonly used dataset in the robotics community, to ensure that our results are comparable to other studies. Additionally, the benchmark is designed to be easily reproducible in the real world, making it accessible to researchers and practitioners. We also provide our experimental results and analyses for model-based and model-free 6D robotic grasping on the benchmark, where representative algorithms are evaluated for object perception, grasp planning, and motion planning. We believe that our benchmark will be a valuable tool for advancing the field of robot manipulation. By providing a standardized evaluation framework, researchers can more easily compare different techniques and algorithms, leading to faster progress in developing robot manipulation methods.
@inproceedings{khargonkar2024scenereplica, title={SceneReplica: Benchmarking Real-World Robot Manipulation}, author={Khargonkar, Ninad and Allu, Sai Haneesh and others}, booktitle={ICRA}, year={2024} }
Formation Control
Formation Control of Quadcopters
Sai Haneesh Allu
M.S. Thesis, IIT Delhi, 2020
Thesis Video Code
The primary purpose of this study is to investigate various formation control algorithms, implement them on an experimental platform, and ultimately achieve target interception using the best-suited algorithm. The open-source Crazyflie 2.0 nano-quadcopter was chosen for experimentation, and the ArduPilot flight stack with DroneKit software-in-the-loop was used for simulation. The first phase covered the study of virtual-structure, leader-follower, and graph-theoretic methods of formation control, along with the control architecture of the Crazyflie 2.0 and the setup and operation of the OptiTrack motion capture system, the Robot Operating System (ROS), and DroneKit software-in-the-loop. Controllers for these formation control algorithms were then designed and implemented on the chosen platforms, and their formation-maintenance performance was compared. The comparison shows that the graph-theoretic method is best suited for formation maintenance. Finally, target interception was simulated using the graph-theoretic method, and further exploration of velocity- and trajectory-based formation control through optimization techniques is proposed as future work.

Industry Experience

2020 – 2021
VECROS | Co-Founder and CTO
Developed an edge-processed Visual Inertial Odometry system and a mapless reactive planner for GPS-denied navigation. Led the team in building a web-based BVLOS control platform using AWS IoT.
2016 – 2017
Sterlite Tech | Operations Engineer
Investigated the optical fiber spooling process and implemented a grounding mechanism to reduce process failures.

Service & Leadership

Organizer
Workshop on Neural Representation Learning for Robot Manipulation (CoRL 2023)
Reviewer
IROS 2024, ICRA 2025, ICRA 2026
Teaching
UT Dallas: Computer Graphics, Human-Computer Interaction
IIT Delhi: Stochastic Filtering, Multi-Agent Control, Advanced Control Lab