Tsinghua Hephaestus



Software description


Please give a brief summary of your walking algorithm (max. 1000 characters).

To realize robust bipedal walking in RoboCup, we use the ZMP trajectory derived from the planned footprints as input to generate the Divergent Component of Motion (DCM) and the Center of Mass (CoM) trajectory. The motivation and further details can be found in [1]; we actually plan the DCM instead of the CoM. ZMP-based gait generation showed stable performance at RoboCup 2018 and 2019, but the walking speed is still too slow for the 9 m × 14 m field. We are working on replacing the DCM PD controller with a more accurate whole-body controller, and on momentum-based optimization to generate better high-kick and high-jump motions.
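The idea of planning the DCM from the footprints and letting the CoM follow can be illustrated with a minimal one-dimensional sketch, assuming a linear inverted pendulum with constant CoM height and a constant ZMP per step. All names, parameters, and the simple Euler integration are illustrative assumptions, not the team's actual implementation.

```python
import math

G = 9.81                     # gravity [m/s^2]
Z_C = 0.6                    # assumed constant CoM height [m]
OMEGA = math.sqrt(G / Z_C)   # natural frequency of the LIP model
DT = 0.005                   # control period [s]

def dcm_backward(zmp_steps, step_time, xi_final):
    """Compute the DCM at the start of each step by backward recursion.

    With a constant ZMP p during one step, the DCM evolves as
    xi(t) = p + exp(OMEGA * t) * (xi(0) - p), so the initial DCM that
    reaches a desired end-of-step DCM is found by inverting this.
    """
    xi = xi_final
    xi_starts = []
    for p in reversed(zmp_steps):
        xi = p + math.exp(-OMEGA * step_time) * (xi - p)
        xi_starts.append(xi)
    return list(reversed(xi_starts))

def com_forward(xi_traj, com0):
    """Integrate the stable CoM dynamics c_dot = OMEGA * (xi - c)."""
    com = com0
    out = []
    for xi in xi_traj:
        com += OMEGA * (xi - com) * DT
        out.append(com)
    return out
```

The key property exploited here is that the CoM dynamics converge toward the DCM, so once the (unstable) DCM is planned backward from the last footprint, the CoM trajectory can be integrated forward stably.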



Please give a brief summary of your vision algorithm, i.e. how your robots detect balls, field borders, field lines, goalposts and other robots (max. 1000 characters).

The visual perception system takes its input from a head-mounted StereoLabs ZED 2 stereo camera and uses deep neural networks for accurate and efficient object recognition on the competition field. Compared to last year's system, we revised the YOLOv3-tiny [1] model and now run the full detection pipeline on an NVIDIA Xavier processor. With the larger field, small and medium-sized objects at long distances appear more often. YOLOv3-tiny is a state-of-the-art real-time object detection system and shows promising results in many scenarios, but it is still somewhat weak at recognizing small objects. We therefore added more images collected at long distances to our training dataset and further adapted the model to the larger competition field.
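The post-processing stage of a YOLO-style detector can be sketched as confidence filtering followed by non-maximum suppression (NMS), which turns the raw network output into a clean list of field objects. This is a generic sketch of that standard step, not the team's code; thresholds and data layout are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def nms(detections, conf_thresh=0.4, iou_thresh=0.5):
    """Keep high-confidence boxes and drop overlapping duplicates.

    detections: list of (box, confidence, class_id) tuples,
    e.g. class_id 0 = ball, 1 = goalpost, 2 = robot.
    """
    dets = sorted((d for d in detections if d[1] >= conf_thresh),
                  key=lambda d: d[1], reverse=True)
    kept = []
    for d in dets:
        # Suppress a box only if a kept box of the SAME class overlaps it.
        if all(d[2] != k[2] or iou(d[0], k[0]) < iou_thresh for k in kept):
            kept.append(d)
    return kept
```

For small, distant objects the confidence threshold is the sensitive parameter: lowering it recovers more far-away balls at the cost of more false positives, which is one reason extra long-distance training images help.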



Please give a brief summary of how your robots localize themselves on the field (max. 1000 characters).

The localization system [1] consists of two parts: a global localization system and a local localization system, both based on AMCL (Adaptive Monte Carlo Localization). The global system aims to locate the robot without prior position information and helps solve the kidnapped-robot problem. It receives the type and position of the landmarks currently in sight together with a predefined landmark map, and finds the most likely position of the robot on the field. Landmark types include the X, T and L crossings, the penalty mark and the goalposts. Due to the symmetry of the field, we use fuzzy location information to eliminate the resulting ambiguity. Detected objects are transformed into 3D positions before being used as observations.
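The core of a landmark-based Monte Carlo update is the measurement likelihood: each particle is weighted by how well a range/bearing observation of a landmark type matches the mapped landmarks of that type. The sketch below, with maximum-likelihood data association and an assumed Gaussian noise model, is an illustration of that step only; the function names, the noise parameter, and the error metric are assumptions.

```python
import math

def landmark_likelihood(particle, landmark_map, obs, sigma=0.3):
    """Likelihood of one landmark observation given a particle pose.

    particle:     (x, y, theta) hypothesis of the robot pose
    landmark_map: list of (type, x, y) entries of the predefined map
    obs:          (landmark_type, range, bearing) measurement
    """
    px, py, pth = particle
    ltype, r_obs, b_obs = obs
    best = 1e-9  # floor so a particle never gets exactly zero weight
    for (mtype, mx, my) in landmark_map:
        if mtype != ltype:
            continue
        dx, dy = mx - px, my - py
        r_pred = math.hypot(dx, dy)
        b_pred = math.atan2(dy, dx) - pth
        # Wrap the bearing error into (-pi, pi].
        db = math.atan2(math.sin(b_obs - b_pred), math.cos(b_obs - b_pred))
        err = (r_obs - r_pred) ** 2 + (db * r_pred) ** 2
        best = max(best, math.exp(-err / (2 * sigma ** 2)))
    return best
```

Because crossings and goalposts occur at several symmetric positions, this per-type maximum is exactly where the field symmetry creates multi-modal weight distributions, which the fuzzy prior location information then disambiguates.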



Please give a brief summary of your behavioral architecture and the decision processes of your robots (max. 1000 characters).

On top of the ROS Kinetic robot software platform on Ubuntu 16.04, we wrote a decision node that determines the behavior the robot should take in each state. The decision node receives the positions of the ball, goal and obstacles from the vision node; the robot's position and orientation on the field from the localization node; the game status from the game controller node; the head angles (yaw, pitch) from the head node; and the robot's motion state from the gait node. Based on this information, the decision node computes the appropriate decision and issues movement commands to the head and gait nodes, so that the robot takes the corresponding action in the game.
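The fusion of those inputs into a single command can be sketched as a priority-ordered decision function over a world-state structure. This is a simplified, ROS-free illustration of the decision layer only; the state names, thresholds, and field layout of the struct are assumptions, not the team's actual message definitions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class WorldState:
    """Fused inputs the decision node receives from the other nodes."""
    game_state: str                          # from the game controller node
    ball: Optional[Tuple[float, float]]      # ball position in the robot frame, from vision
    robot_pose: Tuple[float, float, float]   # (x, y, theta) on the field, from localization
    is_fallen: bool                          # motion state, from the gait node

def decide(w: WorldState) -> str:
    """Map the fused world state to a high-level command for the head
    and gait nodes. Priorities: obey the game state first, then
    recover from falls, then handle the ball."""
    if w.game_state != "PLAYING":
        return "STAND"
    if w.is_fallen:
        return "GET_UP"
    if w.ball is None:
        return "SEARCH_BALL"          # scan with the head, rotate in place
    bx, by = w.ball
    if bx * bx + by * by < 0.25:      # ball closer than 0.5 m
        return "KICK"
    return "WALK_TO_BALL"
```

In a ROS setup each field of `WorldState` would be filled by a topic callback, and the returned command would be published to the head and gait nodes at a fixed rate.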



List your previous participation (including rank) and contribution to the RoboCup community (release of open-source software, datasets, tools etc.)

Tsinghua Hephaestus is a RoboCup Humanoid League team run by the Department of Automation, Tsinghua University, China, since July 2006. The team took part in RoboCup 2007 in both KidSize and TeenSize. Our TeenSize team won 2nd place at RoboCup 2008 and 3rd place in 2009 and 2010. Since 2011 we have competed in AdultSize, winning 2nd place at RoboCup 2012 and 3rd place in 2011, 2013, 2014, 2018 and 2019. We also won 2nd place in the AdultSize Technical Challenge in 2017, 2018 and 2019.


Please list RoboCup-related papers your team published in 2019.

Luocheng Zheng, Pengfei Rao, Yingke Li, Mingguo Zhao: "Admittance Control Based Humanoid Robot Standing Balance Control", IEEE International Conference on Advanced Robotics and its Social Impact (ARSO), 2019.