Hamburg Bit-Bots


Team website
Qualification video
Team short paper
Hardware specifications

Software description


Please give a brief summary of your walking algorithm (max. 1000 characters).

We use an open-loop walking algorithm based on Cartesian quintic splines, following the approach of Rhoban [Julien Allali et al., Rhoban Football Club: RoboCup Humanoid Kid-Size 2017 Champion Team Paper]. We improved the original approach by fusing multi-modal sensor information for higher stability and by using a general IK [Ruppel et al., "Cost Functions to Specify Full-Body Motion and Multi-Goal Manipulation Tasks", ICRA 2018], which enables a simple transfer of the walking to any bipedal platform. The sensed pressure on the feet is used to reset the walking phase when the robot touches the ground. The orientation of the torso, measured by an IMU, is controlled with a PID controller to prevent falling. The walking parameters can be tuned online in a few minutes using dynamic reconfigure. Path planning is done with ROS move_base using a dynamic window approach; the resulting velocity commands are transformed into foot positions by the walking algorithm.
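To illustrate the spline idea, here is a minimal sketch (not the team's actual engine, which supports arbitrary boundary conditions per spline segment): a quintic profile between two Cartesian foot positions with zero velocity and acceleration at both endpoints, the simplest case of such a trajectory.

```python
def quintic_step(x0, x1, t, duration):
    """Quintic (minimum-jerk) interpolation from x0 to x1 over
    `duration` seconds, with zero velocity and acceleration at
    both endpoints. Names are illustrative, not from the codebase."""
    tau = max(0.0, min(1.0, t / duration))  # normalized time in [0, 1]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5  # quintic blend
    return x0 + (x1 - x0) * s

# Example: one coordinate of a foot moving 0.0 -> 0.1 m in 0.5 s.
pos_mid = quintic_step(0.0, 0.1, 0.25, 0.5)
```

Because velocity and acceleration vanish at the endpoints, consecutive steps can be chained without jerk at the transitions.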


Please give a brief summary of your vision algorithm, i.e. how your robots detect balls, field borders, field lines, goalposts and other robots (max. 1000 characters).

For ball detection, we have two approaches. The first is based on fully convolutional neural networks, as presented in [Speck et al., Towards Real-Time Ball Localization using CNNs]. The second uses Tiny YOLO; in that case, goalposts are detected by the same network. The field border is detected using a color lookup table. The lookup table for the green field is defined manually before the game. During runtime, additional RGB values are added to it: a kernel is applied, and if a pixel is surrounded by values already in the mask, its color is added to the lookup table. Colors previously detected above the field border are blacklisted. This also helps with natural lighting. Robots and other obstacles are detected from differences between the convex hull of the field border and the actual detection. For line detection, we create a mask based on a white color lookup table in HSV color space. For more information, see the attached paper.
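The runtime expansion of the color lookup table can be sketched as follows. This is a simplified illustration under assumed parameters (a square kernel and a surround threshold), not the team's actual implementation:

```python
def expand_color_lut(image, lut, kernel=1, threshold=0.8):
    """Add a pixel's color to the lookup table if enough of its
    neighbors (within `kernel` pixels) already match the table.
    `image` is a 2D grid of hashable colors, e.g. RGB tuples.
    Parameter names and values are illustrative assumptions."""
    h, w = len(image), len(image[0])
    added = set()
    for y in range(h):
        for x in range(w):
            color = image[y][x]
            if color in lut:
                continue  # already known field color
            neighbors = [image[ny][nx]
                         for ny in range(max(0, y - kernel), min(h, y + kernel + 1))
                         for nx in range(max(0, x - kernel), min(w, x + kernel + 1))
                         if (ny, nx) != (y, x)]
            in_lut = sum(1 for c in neighbors if c in lut)
            if neighbors and in_lut / len(neighbors) >= threshold:
                added.add(color)
    lut |= added  # grow the table in place
    return added
```

A blacklist of colors seen above the field border would simply be checked before `added.add(color)`; it is omitted here for brevity.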

Attached file →


Please give a brief summary of how your robots localize themselves on the field (max. 1000 characters).

For localization, we use a particle filter-based approach fusing information on field markings, goal posts, the field boundary, and field marking features such as corners, T-crossings, and cross shapes. Furthermore, a localization handler initializes and reinitializes the localization with prior knowledge from the game state. It also checks the robot's current state: while the robot is falling, the particle filter calculation is paused, since no useful information can be extracted from a fallen robot. A full description of the system is provided in J. Hartfill, Feature-Based Monte Carlo Localization in the RoboCup Humanoid Soccer League, master's thesis, which can be found on our publications page.
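A core step of any such particle filter is resampling proportionally to the particle weights. As a minimal sketch (standard low-variance systematic resampling, with illustrative names; the actual filter is described in the thesis above):

```python
import random

def resample(particles, weights):
    """Low-variance (systematic) resampling: draw one random offset,
    then pick particles at evenly spaced points along the cumulative
    weight distribution. Heavily weighted particles are duplicated."""
    n = len(particles)
    total = sum(weights)
    step = total / n
    start = random.uniform(0.0, step)  # single random draw
    result, cumulative, i = [], weights[0], 0
    for k in range(n):
        target = start + k * step
        while cumulative < target:
            i += 1
            cumulative += weights[i]
        result.append(particles[i])
    return result
```

Systematic resampling is a common choice because it needs only one random number per iteration and has lower variance than independent multinomial draws.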


Please give a brief summary of your behavioral architecture and the decision processes of your robots (max. 1000 characters).

Our behavior is based on the dynamic stack decider, a lightweight open-source control architecture inspired by behavior trees and hierarchical state machines. For this purpose, a new domain-specific language (DSL) was developed. Decisions and actions are the modules that control the flow of the behavior. A decision can be as simple as an if-statement or as complex as the output of a neural network. An action is a task the robot executes; it can take multiple seconds, e.g. standing up. Actions and decisions are arranged on a stack. Decisions that were pushed onto the stack earlier can be constantly reevaluated; if a previous decision is invalidated, everything on the stack above it is cleared. This is useful, for example, for checking whether the robot has fallen: the robot instantly stops its current behavior when it detects a fall and handles standing up first.
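The stack mechanism can be sketched in a few lines of Python. This is an illustrative toy, not the actual DSL: a root decision is reevaluated each tick and pushes either a recovery action or normal play onto the stack.

```python
class Element:
    """Base class for both decisions and actions on the stack."""
    def perform(self, stack, robot):
        raise NotImplementedError

class StandUp(Element):
    pass  # action: get back on the feet (may take seconds)

class Play(Element):
    pass  # action: normal soccer behavior

class CheckFallen(Element):
    """Reevaluated decision: a fall invalidates everything above it."""
    def perform(self, stack, robot):
        del stack[1:]  # clear elements that depended on this decision
        stack.append(StandUp() if robot["fallen"] else Play())

def tick(robot):
    """One behavior tick: reevaluate the root decision, return the
    name of the element now on top of the stack."""
    stack = [CheckFallen()]
    stack[0].perform(stack, robot)
    return type(stack[-1]).__name__
```

In the real architecture, reevaluation happens continuously while actions run, so a fall interrupts any ongoing task immediately rather than waiting for it to finish.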


List your previous participation (including rank) and contribution to the RoboCup community (release of open-source software, datasets, tools etc.)

Our team has participated in the RoboCup competition every year since 2012. We have also participated in the German Open since 2012 and the Iran Open since 2014, and have regularly taken part since 2014 in RoHOW, an educational and networking workshop organized by the HULKs. Our codebase, as well as the CAD designs of the robots' hardware, is open source. We also provide an image tagging tool to generate training data for neural networks, currently providing 296,000 public images with 144,000 ball labels, 32,000 goal labels, and 77,000 robot labels.


Please list RoboCup-related papers your team published in 2019.

All publications are available on our publications page:

M. Bestmann et al., High-Frequency Multi Bus Servo and Sensor Communication Using the Dynamixel Protocol, RoboCup Symposium
N. Fiedler et al., An Open Source Vision Pipeline Approach for RoboCup Humanoid Soccer, RoboCup Symposium
N. Fiedler et al., Position Estimation on Image-Based Heat Map Input using Particle Filters in Cartesian Space, IEEE MFI
J. Hartfill, Feature-Based Monte Carlo Localization in the RoboCup Humanoid Soccer League, master's thesis
T. Flemming, Evaluating and Minimizing the Reality Gap in the Domain of RoboCup Humanoid Soccer, bachelor's thesis
J. Hagge, Using FCNNs for Goalpost Detection in the RoboCup Humanoid Soccer Domain, bachelor's thesis
N. Fiedler, Distributed Multi Object Tracking with Direct FCNN Inclusion in RoboCup Humanoid Soccer, bachelor's thesis
J. Güldenstein, Comparison of Measurement Systems for Kinematic Calibration of a Humanoid Robot, bachelor's thesis

Attached file →