
Please give a brief summary of your walking algorithm (max. 1000 characters).

Developing a stable walk engine for a humanoid soccer robot is one of the most challenging research areas. We model the robot in the single-support phase as a Linear Inverted Pendulum. Using this model, the trajectory of the CoM is generated with respect to a reference ZMP. The maximum walking speed of our robots is about 28 cm/s. Small disturbances are detected with the accelerometer and gyroscope, and a simple PD controller adjusts the hip, knee, and ankle angles to keep the projected center of mass within the support polygon and prevent a fall.
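The two ingredients above can be sketched in a few lines: the closed-form CoM trajectory of the Linear Inverted Pendulum Model for one single-support phase, plus a PD correction term. This is only an illustrative sketch; the CoM height, phase duration, and PD gains below are placeholder values, not the team's actual parameters.

```python
import math

def lipm_com_trajectory(x0, v0, zmp_ref, z_com=0.22, g=9.81,
                        duration=0.25, dt=0.01):
    """Closed-form CoM motion of the Linear Inverted Pendulum Model for
    one single-support phase with a constant reference ZMP.
    x0, v0: initial CoM position and velocity (sagittal axis, metres).
    Returns a list of (t, x, v) samples."""
    Tc = math.sqrt(z_com / g)  # pendulum time constant
    samples, t = [], 0.0
    while t <= duration + 1e-9:
        c, s = math.cosh(t / Tc), math.sinh(t / Tc)
        x = (x0 - zmp_ref) * c + Tc * v0 * s + zmp_ref
        v = (x0 - zmp_ref) / Tc * s + v0 * c
        samples.append((t, x, v))
        t += dt
    return samples

def pd_ankle_correction(pitch_err, pitch_rate, kp=0.3, kd=0.02):
    """PD correction (illustrative gains) added to the ankle/hip/knee
    pitch targets to keep the projected CoM inside the support polygon."""
    return kp * pitch_err + kd * pitch_rate
```

The trajectory follows from the LIPM dynamics x'' = (g / z_com)(x - p): with the CoM ahead of the ZMP, the solution diverges away from it, which is what makes timely foot placement and the PD correction necessary.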


Please give a brief summary of your vision algorithm, i.e. how your robots detect balls, field borders, field lines, goalposts and other robots (max. 1000 characters).

Like most participating teams, we use a monocular vision system. The captured image is first fed to a segmentation module that performs semantic segmentation. Based on the segmented image, the boundary of the field is determined, and all subsequent object detection algorithms are applied only to pixels inside that boundary. Field lines and their intersections are detected using the Hough Transform. For ball detection, coarse regions of interest that may contain a ball are first extracted using the segmentation map and the camera matrix. Each region is then fed to a deep convolutional neural network that predicts whether it contains a ball and, if so, estimates the ball's exact location within the region.
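The ROI-extraction step can be illustrated with a minimal sketch: given a 2-D segmentation map, connected components of ball-class pixels yield coarse bounding boxes to hand to the CNN. The class label and minimum-size threshold here are assumptions for the example, not the team's actual configuration.

```python
def extract_ball_rois(seg, ball_label=2, min_pixels=3):
    """Extract coarse regions of interest (bounding boxes) for the ball
    from a 2-D segmentation map via 4-connected component labelling.
    seg: list of lists of integer class labels (ball_label is assumed).
    Returns a list of (row_min, col_min, row_max, col_max) boxes."""
    h, w = len(seg), len(seg[0])
    seen = [[False] * w for _ in range(h)]
    rois = []
    for r in range(h):
        for c in range(w):
            if seg[r][c] == ball_label and not seen[r][c]:
                # Depth-first flood over the connected component.
                stack, comp = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and seg[ny][nx] == ball_label
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(comp) >= min_pixels:  # drop tiny noise blobs
                    ys = [p[0] for p in comp]
                    xs = [p[1] for p in comp]
                    rois.append((min(ys), min(xs), max(ys), max(xs)))
    return rois
```

Each surviving box would then be cropped, resized, and classified by the CNN, which keeps the expensive network evaluation restricted to a handful of candidate regions per frame.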


Please give a brief summary of how your robots localize themselves on the field (max. 1000 characters).

For self-localization, we use a population of Unscented Kalman Filters that employ a set of landmarks to estimate the robot's position. The landmarks are the field boundary and its intersections, all field lines and their intersections, and the center circle. In the initial state of play we generate 8 hypotheses for the robot's position, and the robot then scans its surroundings for landmarks. Based on these observations, every hypothesis is reweighted in an iterative process, and the less likely hypotheses are removed until only one remains. After successful initial localization, we continue estimating the robot's current position from landmark observations and odometry data.
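The reweight-and-prune loop over the 8 initial hypotheses can be sketched as below. The Gaussian observation model, noise scale, and pruning threshold are illustrative assumptions; the real system runs a full UKF update per hypothesis rather than this scalar likelihood.

```python
import math

def reweight_and_prune(hyps, z, sigma=0.3, keep_ratio=0.2):
    """One iteration of multi-hypothesis reweighting (illustrative).
    hyps: list of dicts {'pose': ..., 'weight': float, 'predicted': float},
    where 'predicted' is the landmark distance that hypothesis expects.
    z: measured landmark distance. Hypotheses whose weight falls far
    below the best one are dropped; survivors are renormalised."""
    for h in hyps:
        err = z - h['predicted']
        # Gaussian likelihood of the observation under this hypothesis.
        h['weight'] *= math.exp(-0.5 * (err / sigma) ** 2)
    w_max = max(h['weight'] for h in hyps)
    survivors = [h for h in hyps if h['weight'] >= keep_ratio * w_max]
    total = sum(h['weight'] for h in survivors)
    for h in survivors:
        h['weight'] /= total
    return survivors
```

Repeating this step over successive landmark observations concentrates the weight on a single surviving hypothesis, which then seeds the tracking phase driven by landmarks and odometry.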


Please give a brief summary of your behavioral architecture and the decision processes of your robots (max. 1000 characters).

Our behavior system has a finite-state-machine-based architecture with two major state machines, Body and Head. The Head state machine controls the head, and thus the camera through which most perception takes place; it includes states such as scanning for the ball and tracking the ball. The actions of the other actuators are controlled by the Body state machine, which makes decisions such as walking toward the ball, approaching the ball, kicking the ball, or taking a defensive position; these actions are executed by our motion module. Decisions draw on information from several modules, such as vision and communication with the other robots. In summary, two major finite state machines control the body and head of the robot in a collaborative manner.
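A minimal sketch of such a Body state machine is shown below. The state names and the world-state keys (`ball_seen`, `near_ball`) are hypothetical placeholders for illustration; the actual MRL state set and transition conditions are richer.

```python
class BodyStateMachine:
    """Toy FSM sketch of ball-oriented body behaviour (illustrative:
    states and transition conditions are assumptions, not the team's)."""

    def __init__(self):
        self.state = 'search_ball'

    def step(self, world):
        """Advance one tick given a dict of perception flags,
        returning the new state."""
        if self.state == 'search_ball':
            if world.get('ball_seen'):
                self.state = 'approach_ball'
        elif self.state == 'approach_ball':
            if not world.get('ball_seen'):
                self.state = 'search_ball'   # lost the ball, search again
            elif world.get('near_ball'):
                self.state = 'kick'
        elif self.state == 'kick':
            self.state = 'search_ball'       # after the kick, re-acquire
        return self.state
```

In the real system each state would issue commands to the motion module, and the Head machine would run concurrently to keep the relevant percepts (ball, landmarks) in view.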


List your previous participation (including rank) and contribution to the RoboCup community (release of open-source software, datasets, tools etc.)


Please list RoboCup-related papers your team published in 2019.

1) Gholami, A., Moradi, M., Majidi, M.: A simulation platform design and kinematics analysis of MRL-HSL humanoid robot. In: Chalup, S., Niemueller, T., Suthakorn, J., Williams, M.A. (eds.) RoboCup 2019: Robot World Cup XXIII. Lecture Notes in Computer Science, vol. 11531. Springer, Cham (2019)
2) Teimouri, M., Delavaran, M., Rezaei, M.: Real-time ball detection approach using convolutional neural networks. In: Chalup, S., Niemueller, T., Suthakorn, J., Williams, M.A. (eds.) RoboCup 2019: Robot World Cup XXIII. Lecture Notes in Computer Science, vol. 11531. Springer, Cham (2019)
3) Teimouri, M., Salehi, M.E., Meybodi, M.R.: A hybrid localization method for a soccer playing robot. In: 2016 Artificial Intelligence and Robotics (IRANOPEN), pp. 127-132. IEEE (2016)
4) Mahmudi, H. et al.: MRL champion team paper in Humanoid TeenSize League of RoboCup 2019. In: Chalup, S., Niemueller, T., Suthakorn, J., Williams, M.A. (eds.) RoboCup 2019: Robot World Cup XXIII.