UBC researchers develop a novel training framework that merges human demonstration with environmental feedback to boost the performance of construction robots on site
Researchers at the University of British Columbia have unveiled a groundbreaking method that enables construction robots to learn tasks more efficiently by mimicking human actions in virtual reality and refining their behaviour through environmental rewards.
Published in the journal Computer-Aided Civil and Infrastructure Engineering, the study by Kangkang Duan, Zhengbo Zou, and T.Y. Yang introduces a hybrid learning framework that combines imitation learning (IL) with reinforcement learning (RL) in a virtual setting — significantly advancing the training process for autonomous construction robots.
“Our approach trains robots not just to imitate human workers, but to improve upon them by learning from the environment,” said co-author Prof. T.Y. Yang. “This enables a balance between human expertise and machine-level optimisation.”
The Innovation
- Intuitive Gesture-Based VR Control: Instead of using controllers, experts demonstrate tasks via natural hand gestures captured in VR, which are translated directly into robotic commands.
- Hybrid Reward Framework: Robots learn from expert demonstrations (intrinsic rewards) and simultaneously receive environmental feedback (extrinsic rewards), accelerating policy learning.
- Simplified Reward Design: Unlike traditional RL, which needs complex reward functions, this method uses a simple, sparse reward system made feasible by IL pretraining.
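The hybrid reward idea above can be sketched as a weighted blend of an intrinsic imitation term and a sparse extrinsic term. The function names, the distance-based imitation reward, and the mixing weight `lam` below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def imitation_reward(state, action, expert_actions):
    """Intrinsic reward: how closely the action matches the expert's
    demonstrated action for this state (illustrative distance-based form)."""
    expert_action = expert_actions[state]  # nearest-demonstration lookup (assumed)
    return float(np.exp(-np.linalg.norm(np.asarray(action) - np.asarray(expert_action))))

def sparse_env_reward(task_done: bool) -> float:
    """Extrinsic reward: simple sparse signal -- 1 on task success, else 0."""
    return 1.0 if task_done else 0.0

def hybrid_reward(state, action, expert_actions, task_done, lam=0.5):
    """Blend the two terms; lam is a hypothetical mixing weight."""
    return (lam * imitation_reward(state, action, expert_actions)
            + (1.0 - lam) * sparse_env_reward(task_done))
```

In this sketch, imitation learning supplies a dense shaping signal early in training, so the environmental term can stay sparse, which is the simplification the study highlights.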
Experimental Breakthroughs
Using the widely adopted six-joint xArm robot, the team simulated a window installation task. Their method:
- Outperformed state-of-the-art RL (e.g., PPO) and imitation methods (e.g., GAIL) in task completion and stability.
- Required fewer demonstrations than baseline methods and remained robust even with small datasets.
- Achieved a 97% success rate in pick-and-place operations under real-world constraints.
- Enabled collision-free installation via trajectory smoothing and inverse kinematics.
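Trajectory smoothing of the kind mentioned above can be illustrated with a centred moving average over joint-space waypoints; this is a generic smoothing choice for illustration only, as the study's exact smoothing method is not detailed here:

```python
import numpy as np

def smooth_trajectory(waypoints, window=3):
    """Smooth a sequence of joint-space waypoints with a centred moving
    average. Endpoints are kept fixed so the start and goal poses of the
    installation motion are preserved."""
    pts = np.asarray(waypoints, dtype=float)
    smoothed = pts.copy()
    half = window // 2
    for i in range(1, len(pts) - 1):  # interior points only; endpoints stay fixed
        lo, hi = max(0, i - half), min(len(pts), i + half + 1)
        smoothed[i] = pts[lo:hi].mean(axis=0)
    return smoothed
```

A smoothed joint-space path like this would then be converted to end-effector poses via the robot's kinematics, which is where the inverse-kinematics step the authors mention would come in.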
“This hybrid approach dramatically improves learning efficiency and real-world applicability,” said lead author Kangkang Duan. “It reflects a paradigm shift in how we train machines for dynamic construction environments.”
Real-World Impact
This study addresses a growing need in construction: safe, efficient automation of repetitive and hazardous tasks. The framework can be extended to other robots and tasks, offering scalability and flexibility across the industry.
Key applications include:
- Structural assembly
- Tile and material handling
- Autonomous site navigation and teleoperation
Challenges & Next Steps
The research team acknowledged limitations, such as reliance on high-precision tracking systems and a predefined gesture library. Future work will include:
- Expanding to deformable object manipulation
- Enhancing generalization with self-supervised learning and dynamic ensemble methods
- Reducing hardware dependence via cloud-based simulation
The post Construction robots learn smarter and faster through virtual reality and imitation, study finds appeared first on Planning, Building & Construction Today.