Rhoban FC


Team website
Qualification video
Team short paper
Hardware specifications

Software description


Please give a brief summary of your walking algorithm (max. 1000 characters).

Our walk relies on an analytical (geometric) inverse kinematics solution for the 6-degrees-of-freedom leg, which yields joint angles from a target position/orientation of the foot. At each support swap, a new target step is computed from the current target speed (x, y, theta) and the acceleration limits, and Hermite cubic splines are used to interpolate the target positions of both the support and the flying foot for the next step. This means that any change in the target order is not applied until the next support swap occurs. This walk engine still exposes a few hand-tuned parameters (such as frequency, trunk height, lateral swing, etc.). We also monitor the force pressure sensors under the feet to check that they are consistent with what the walk engine expects, avoiding swapping the support foot when the robot's center of pressure is not consistent with the current state; this improves lateral stability. Code is available here: https://github.com/Rhoban/kid_size_public/blob/master/Motion/engines/walk_engine.cpp
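As a rough illustration of the interpolation step (not the actual engine code, which is in the linked C++ repository), the cubic Hermite basis used to move a foot coordinate from one step target to the next can be sketched as follows; the zero endpoint tangents shown here are an assumption for a smooth start/stop:

```python
def hermite(t, p0, p1, m0=0.0, m1=0.0):
    # Cubic Hermite interpolation on t in [0, 1]: goes from p0 to p1
    # with endpoint velocities m0 and m1 (zero tangents give a smooth
    # start and stop, as wanted for a foot trajectory between swaps).
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1
```

Evaluating such a spline per coordinate (x, y, theta, height) at each control tick gives continuous foot targets, while the spline endpoints are only recomputed at support swaps.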


Please give a brief summary of your vision algorithm, i.e. how your robots detect balls, field borders, field lines, goalposts and other robots (max. 1000 characters).

Our vision system runs at around 40 Hz on 644x482 images. It is separated into two steps: first, regions of interest (ROIs) are identified, then all ROIs are classified to analyze their content. The search for ROIs is based on integral images: it looks for holes in the field whose size is approximately that of the ball. The size of each ROI is computed from the estimated pose of the camera, considering the ball is on the ground. Conveniently, this approach also tends to detect regions of interest on penalty marks, the bases of goal posts, and line corners. For classification, ROIs are resized into small 16 by 16 RGB patches. We use a single-layer CNN to predict the class of each patch out of 7 possibilities: empty, ball, robot, post base, line corner, line/circle intersection (X), and line intersection (T). Despite the ridiculously small size of our neural network, we still achieve more than 90% accuracy.
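The integral-image trick that makes the ROI search cheap can be sketched as below (a minimal illustration, not the team's implementation): once the summed-area table is built, the sum over any rectangle costs only four lookups, so scanning the field for ball-sized "holes" is fast regardless of window size.

```python
def integral_image(img):
    # img: 2D list of pixel values. Returns summed-area table S where
    # S[y][x] = sum of img[0..y][0..x].
    h, w = len(img), len(img[0])
    S = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            S[y][x] = row + (S[y - 1][x] if y > 0 else 0)
    return S

def region_sum(S, x0, y0, x1, y1):
    # Sum over the rectangle [x0..x1] x [y0..y1] in O(1): 4 lookups.
    total = S[y1][x1]
    if x0 > 0:
        total -= S[y1][x0 - 1]
    if y0 > 0:
        total -= S[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        total += S[y0 - 1][x0 - 1]
    return total
```

Comparing the sum inside a candidate window against its surroundings (e.g. on a green-mask image) flags windows that are "not field", which is essentially what a hole detector needs.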


Please give a brief summary of how your robots localize themselves on the field (max. 1000 characters).

Our localization module is based on a particle filter using 3000 particles. It combines information from the referee, the vision module, and odometry to ensure satisfying accuracy for high-level decision making. The information provided by the referee gives a reasonable idea of the position of the robot at kick-offs, drop balls, or when a robot enters the field after a game stoppage. The pressure sensors give us a satisfying odometry, which reduces the exploration noise applied to the particles and improves accuracy. The visual features used to score the particles are: the bases of the goal posts, line corners, line intersections, and line/circle crossings.
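One predict/weight/resample cycle of such a filter can be sketched as follows; this is a generic illustration, assuming for simplicity a 2D position state, a single distance observation to a known landmark, and illustrative noise values (the actual module scores particles against several feature types):

```python
import math
import random

def particle_filter_step(particles, odom, z_dist, landmark, sigma=0.3):
    # particles: list of (x, y) pose hypotheses.
    # odom: (dx, dy) displacement measured since the last step.
    # z_dist: observed distance to a landmark at known position `landmark`.
    # 1) Predict: apply odometry plus a little exploration noise.
    moved = [(x + odom[0] + random.gauss(0, 0.02),
              y + odom[1] + random.gauss(0, 0.02)) for x, y in particles]
    # 2) Weight: likelihood of the observation under each hypothesis.
    weights = []
    for x, y in moved:
        d = math.hypot(landmark[0] - x, landmark[1] - y)
        weights.append(math.exp(-((d - z_dist) ** 2) / (2 * sigma ** 2)))
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # 3) Resample proportionally to the weights.
    return random.choices(moved, weights=weights, k=len(moved))
```

Better odometry lets the exploration noise in step 1 stay small, which is exactly why the pressure-sensor-based odometry mentioned above improves accuracy.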


Please give a brief summary of your behavioral architecture and the decision processes of your robots (max. 1000 characters).

The behaviour of the robot is designed using finite state machines. Transitions between states are based on various information such as the game status, the time spent in the current state, or information from the localization module. Since debugging complex state machines based on all the information received by the robot is a difficult and tedious task, we can run our strategy module on fake information. The choice of the kick direction is based on a Markov Decision Process and takes into account a stochastic model of the various available kicks. To obtain collaboration between robots, each of our robots shares basic information such as an estimate of its position on the field. The active robot with the lowest ID plans a high-level strategy including the positions of the other robots. This allows passing behaviours but also ensures the robots stay at a reasonable distance from each other.
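The core idea behind the stochastic kick choice can be illustrated with a one-step lookahead: sample outcomes of each (kick, direction) pair under an angular noise model and pick the pair with the best expected value. This is only a sketch of the principle, with made-up kick parameters and a simple distance-to-goal reward, not the team's MDP solver:

```python
import math
import random

def expected_value(ball, goal, direction, kick_dist, angle_std, n=200):
    # Average the (negated) post-kick distance to the goal over n
    # sampled outcomes of a kick with angular noise angle_std.
    total = 0.0
    for _ in range(n):
        a = direction + random.gauss(0, angle_std)
        bx = ball[0] + kick_dist * math.cos(a)
        by = ball[1] + kick_dist * math.sin(a)
        total += -math.hypot(goal[0] - bx, goal[1] - by)
    return total / n

def choose_kick(ball, goal, kicks, directions):
    # kicks: name -> (mean distance, angular std).  Greedy one-step
    # lookahead: return the (kick, direction) with best expected value.
    best = max((expected_value(ball, goal, d, dist, std), name, d)
               for name, (dist, std) in kicks.items() for d in directions)
    return best[1], best[2]
```

A full MDP additionally values the state reached after the kick (e.g. chances of the opponent intercepting, follow-up kicks), but the expectation over a stochastic kick model is the same building block.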


List your previous participation (including rank) and contribution to the RoboCup community (release of open-source software, datasets, tools etc.)

Rhoban FC participated in 8 RoboCup competitions in the KidSize league:
- 2019: 1st Regular tournament, 1st Drop-In, 1st Technical Challenges
- 2018: 1st Regular tournament, 1st Drop-In
- 2017: 1st Regular tournament, 1st Drop-In, Best Humanoid Award
- 2016: 1st Regular tournament
- 2015: 3rd place
- 2014: Quarter finals
- 2013:
- 2011: First participation
All our work from previous years is open source:
- Code: https://github.com/Rhoban/kid_size_public
- Design of the robot: https://cad.onshape.com/documents/f3bdef32bffd81536fce83d1/w/5d6db2c1a5e97eed28dbc14b/e/a530b1889ee09acb5e1d7ff9


Please list RoboCup-related papers your team published in 2019.

Loic Gondry, Ludovic Hofer, Patxi Laborde-Zubieta, Olivier Ly, Lucie Mathé, Grégoire Passault, Antoine Pirrone, and Antun Skuric. Rhoban Football Club: RoboCup Humanoid KidSize 2019 Champion Team Paper. October 2019.