NimbRo


Team website
Qualification video
Team short paper
Hardware specifications
Software description
Global description file

walking

Please give a brief summary of your walking algorithm (max. 1000 characters).

The gait of our robots is based on an open-loop pattern generator that computes joint states from a gait phase angle, whose rate is proportional to the desired step frequency. The phase angle drives arm and leg movements such as lifting and swinging. We have built on this approach and incorporated corrective actions based on fused angle feedback. We use Bayesian optimization to find suitable values for the fused angle feedback controller used by the gait. To minimize hardware wear, this optimization takes place not only in the real world but also heavily exploits information gained through the included Gazebo simulator. This approach was previously applied to the igus Humanoid Open Platform robot and is now used on the AdultSize platforms.
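
The following minimal Python sketch illustrates the phase-based idea: a single phase angle advances at a rate proportional to the commanded step frequency and drives cyclic limb motions. All names and waveforms here (GaitPhaseGenerator, leg_lift, arm_swing) are hypothetical, and the actual gait additionally applies fused angle feedback corrections on top of such open-loop patterns.

    import math

    class GaitPhaseGenerator:
        """Open-loop gait sketch: a phase angle drives cyclic limb motions."""

        def __init__(self, step_frequency_hz):
            self.step_frequency = step_frequency_hz  # commanded steps per second
            self.phase = 0.0                         # gait phase angle in [-pi, pi)

        def update(self, dt):
            # The phase rate is proportional to the desired step frequency.
            self.phase += 2.0 * math.pi * self.step_frequency * dt
            self.phase = (self.phase + math.pi) % (2.0 * math.pi) - math.pi

        def leg_lift(self, amplitude):
            # Lift the leg during one half of the gait cycle (illustrative waveform).
            return amplitude * max(0.0, math.sin(self.phase))

        def arm_swing(self, amplitude):
            # Swing the arms in counter-phase to the legs.
            return amplitude * math.sin(self.phase + math.pi)

    # One control cycle at an example rate of 125 Hz and 2 steps per second:
    gait = GaitPhaseGenerator(step_frequency_hz=2.0)
    gait.update(dt=0.008)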

vision

Please give a brief summary of your vision algorithm, i.e. how your robots detect balls, field borders, field lines, goalposts and other robots (max. 1000 characters).

Our visual perception pipeline has improved significantly. Thanks to our new unified perception convolutional neural network (NimbRoNet2), we can now reliably perceive the environment in extremely low and very bright lighting conditions. The visual perception system recognizes soccer-related objects, including the ball, field boundary, robots, line segments, and goalposts, using texture, shape, brightness, and color information. Our deep-learning-based visual perception is robust to variations in brightness, viewing angle, and lens distortion. To achieve this, we designed a unified deep convolutional neural network that performs object detection and pixel-wise classification in a single forward pass. After post-processing, it outperforms both our previous non-deep-learning approach to soccer vision and our previous deep-learning-based model. Our perception system is also able to track and identify our robots.
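
To make the single-forward-pass design concrete, here is a deliberately tiny PyTorch sketch, not the actual NimbRoNet2 architecture: a shared encoder feeds two heads, one producing object-detection heatmaps and one producing pixel-wise segmentation scores, so both outputs come from one pass over the image.

    import torch
    import torch.nn as nn

    class UnifiedPerceptionNet(nn.Module):
        # Illustrative stand-in for a unified perception network: one shared
        # encoder, two decoder heads evaluated in a single forward pass.
        def __init__(self, num_objects=3, num_classes=3):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Detection head: one heatmap per object class (e.g. ball, robot, goalpost).
            self.det_head = nn.ConvTranspose2d(64, num_objects, 4, stride=2, padding=1)
            # Segmentation head: pixel-wise classes (e.g. field, lines, background).
            self.seg_head = nn.ConvTranspose2d(64, num_classes, 4, stride=2, padding=1)

        def forward(self, x):
            features = self.encoder(x)  # shared computation for both heads
            return self.det_head(features), self.seg_head(features)

    # Both outputs (at reduced resolution) from a single pass over the image:
    heatmaps, segmentation = UnifiedPerceptionNet()(torch.randn(1, 3, 480, 640))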

localization

Please give a brief summary of how your robots localize themselves on the field (max. 1000 characters).

Localization of the robot on the soccer field, i.e., estimating its 2D pose (x, y, θ), is performed using detections of field lines, the center circle, and goalposts. Each component of the 2D pose is estimated independently. To estimate the θ component, we keep track of the initial orientation and maintain an internal correction term based on the angular deviation between the expected and detected orientations of the white lines. This approach does not rely on an accurate gyroscope output, and in experiments it was able to correct deviations of up to 10° coming from the gyroscope. Using the estimated θ, which is typically quite accurate, we rotate every vision detection to align with the global field coordinate system. The detected line segments can thereby be classified as either horizontal or vertical field lines. In each cycle of the localization node, we use the perception information and dead-reckoning walking data to update the previously estimated 2D location.
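
A compact Python sketch of these two steps, with hypothetical names and gains: the heading estimate accumulates a correction from line-orientation deviations, and rotated line segments are labeled horizontal or vertical because all field lines are axis-aligned.

    import math

    def wrap(angle):
        # Wrap an angle to [-pi, pi).
        return (angle + math.pi) % (2.0 * math.pi) - math.pi

    class HeadingEstimator:
        # Accumulate a correction from the deviation between expected and
        # detected white-line orientations, compensating gyro drift (sketch).
        def __init__(self, initial_theta):
            self.initial_theta = initial_theta
            self.correction = 0.0

        def update(self, gyro_yaw, expected_line_angle, detected_line_angle, gain=0.05):
            self.correction += gain * wrap(expected_line_angle - detected_line_angle)
            return wrap(self.initial_theta + gyro_yaw + self.correction)

    def classify_segment(p1, p2, theta):
        # Rotate a robot-frame line segment into field coordinates using the
        # estimated heading theta, then label it by its axis-aligned direction.
        angle = math.atan2(p2[1] - p1[1], p2[0] - p1[0]) + theta
        folded = angle % math.pi                 # a line's direction is unsigned
        if folded < math.pi / 4 or folded > 3 * math.pi / 4:
            return "horizontal"
        return "vertical"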

behavior

Please give a brief summary of your behavioral architecture and the decision processes of your robots (max. 1000 characters).

Teams participating in the AdultSize class at RoboCup 2019 consisted of a maximum of two robots. We define dynamic Player Tasks, which are frequently reassigned during the game. A task tells the robot what it is supposed to do, based on its own state on the field and the state of its teammate. In addition, we define a task manager that is in charge of safely assigning these tasks. A robot with the Attack task interacts actively with the ball. With this task, the robot is able to: i) block direct shots from the opponent, ii) be ready for one-vs-one fights, and iii) take possession of the ball in case the previous attacker is taken out of the match. In possession of the ball, the robot tries to score either by kicking directly or by dribbling to reach a better position for kicking the ball. If the robot does not possess the ball, it searches for the ball and approaches it.
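
The sketch below illustrates the task-manager idea in Python. Only the Attack task is named in the text above; the Robot class, the Support placeholder task, and the distance-based assignment rule are hypothetical simplifications.

    import math
    from dataclasses import dataclass
    from enum import Enum, auto

    class PlayerTask(Enum):
        ATTACK = auto()   # active interaction with the ball
        SUPPORT = auto()  # placeholder for the non-attacking robot

    @dataclass
    class Robot:
        name: str
        x: float
        y: float
        is_active: bool = True

        def distance_to(self, pos):
            return math.hypot(self.x - pos[0], self.y - pos[1])

    def assign_tasks(robots, ball_position):
        # Reassign tasks every decision cycle: the active robot best placed
        # to reach the ball attacks, so the Attack role migrates whenever
        # the current attacker is taken out of the match.
        active = [r for r in robots if r.is_active]
        if not active:
            return {}
        attacker = min(active, key=lambda r: r.distance_to(ball_position))
        return {r.name: PlayerTask.ATTACK if r is attacker else PlayerTask.SUPPORT
                for r in active}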

contributions

List your previous participation (including rank) and contribution to the RoboCup community (release of open-source software, datasets, tools etc.)

Team NimbRo has a long history in the Humanoid League. From 2009 to 2013, our TeenSize robots won the tournament every year, and they also won the technical challenges trophy in 2012 and 2014. In 2015, we won the RoboCup Design Award. The following year, our team was awarded the first International HARTING Open Source Prize and won the TeenSize league. In 2017, team NimbRo won the title in its last participation in the TeenSize category. In the same year, the AdultSize category introduced regular one-vs-one games, which encouraged team NimbRo to enter this category, where it won the AdultSize tournament, the drop-in games, and the technical challenges. Additionally, the newly designed NimbRo-OP2 won the RoboCup Design Award. In 2018, team NimbRo AdultSize received every possible award, winning the main tournament, the drop-in games, the technical challenges, and finally the Best Humanoid Award. At RoboCup 2019 in Sydney, we repeated this feat.

publications

Please list RoboCup-related papers your team published in 2019.

Anna Kukleva, Mohammad Asif Khan, Hafez Farazi, and Sven Behnke: Utilizing Temporal Information in Deep Convolutional Network for Efficient Soccer Ball Detection and Tracking. In Proceedings of the 23rd RoboCup International Symposium, Sydney, Australia, June 2019.

Hafez Farazi, Grzegorz Ficht, Philipp Allgeuer, Dmytro Pavlichenko, Diego Rodriguez, Andre Brandenburger, Mojtaba Hosseini, and Sven Behnke: NimbRo Robots Winning RoboCup 2018 Humanoid AdultSize Soccer Competitions. In: RoboCup 2018: Robot World Cup XXII, LNCS 11374, pp. 436-449. Springer, 2019.