Team website
Qualification video
Team short paper
Hardware specifications

Software description

Global description file


Please give a brief summary of your walking algorithm (max. 1000 characters).

The robot's walking algorithm is an open-loop system that we have used since RoboCup 2018 in Canada. We generate sinusoidal trajectories and compute all joints of the robot's leg using inverse kinematics. To stabilize the robot, we apply a PD controller to both arms based on the accelerometer and gyroscope values from the CM-740 sub-controller, and we additionally adjust the hip joints based on the robot's movement speed. To obtain a balanced gait, we tune several parameters of the walking algorithm by trial and error. This approach has drawbacks: it is time-consuming, hardware damage is unavoidable, and the system is less stable, so the robot often falls during a match. Therefore, we are trying a load-cell sensor and developing a simulation to make the tuning process easier and reduce hardware damage.
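As a minimal sketch of the scheme described above (the gains, amplitudes, and function names here are illustrative assumptions, not the team's actual values), the open-loop gait produces out-of-phase sinusoidal hip targets scaled by speed, while a PD term computed from IMU readings corrects the arms:

```python
import math

# Illustrative parameters only -- the real values are hand-tuned.
KP, KD = 0.4, 0.05      # PD gains for the arm correction (assumed)
STEP_PERIOD = 0.6       # seconds per full gait cycle (assumed)
HIP_SWING = 0.20        # hip swing amplitude in radians (assumed)

def open_loop_hip_trajectory(t, speed_scale=1.0):
    """Sinusoidal hip-pitch targets for the left/right legs at time t.

    The legs are 180 degrees out of phase, and the amplitude scales
    with walking speed, mirroring the hip-joint adjustment described
    in the answer above.
    """
    phase = 2.0 * math.pi * t / STEP_PERIOD
    amp = HIP_SWING * speed_scale
    return amp * math.sin(phase), amp * math.sin(phase + math.pi)

def pd_arm_correction(pitch_error, pitch_rate):
    """PD term added to the shoulder pitch, computed from the
    accelerometer-derived pitch error and the gyroscope rate."""
    return KP * pitch_error + KD * pitch_rate
```

In an open-loop gait only the arm correction reacts to the IMU; the leg trajectories themselves are replayed blindly, which is why tuning by trial and error is unavoidable.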

Attached file →


Please give a brief summary of your vision algorithm, i.e. how your robots detect balls, field borders, field lines, goalposts and other robots (max. 1000 characters).

Last year, our robot vision was accurate and worked well indoors; its weakness was that it did not run well outdoors. This year, we have two alternatives for handling this problem and improving the robot's vision. In the first option, we detect the ball with Local Binary Patterns (LBP) and the goalpost with the Hough transform. This method's weakness is that noise outside the field can be recognized as a ball. To ignore such noise, the field is segmented by color, the largest contour is selected from the detected contours, and its convex hull is computed; LBP is then applied only to objects inside the hull. In the second option, the MobileNetV1 architecture is used to detect objects such as balls, goals, and line features on the field. To implement a Convolutional Neural Network (CNN) on our robot, we use Google's TensorFlow and the Single Shot MultiBox Detector (SSD) algorithm with our trained model.
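The LBP texture descriptor used for ball candidates is simple enough to sketch directly. This is a generic 8-neighbour LBP on a grayscale image stored as a list of rows (a plain-Python illustration, not the team's implementation, which would operate on camera frames):

```python
def lbp_code(img, y, x):
    """8-neighbour Local Binary Pattern code for pixel (y, x).

    Each neighbour at least as bright as the centre contributes one
    bit, clockwise from the top-left, giving a value in 0..255 that
    describes the local texture around the pixel.
    """
    c = img[y][x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code
```

Histograms of these codes over a candidate region are what get compared against the ball's texture model; restricting candidates to the field's convex hull is what keeps off-field noise from reaching this stage.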

Attached file →


Please give a brief summary of how your robots localize themselves on the field (max. 1000 characters).

Our robot localizes itself by estimating its change in position over time. To do that, we first set a previously defined initial position. Once the robot knows its initial position on the field, it estimates its position from estimates of its steps. This method has a weakness: a small estimation error can accumulate into a larger one. To solve this problem, we propose two approaches for a better estimate of the robot's position. The first involves a forward-kinematics calculation of the position of the robot's foot end, while the second uses a particle filter based on features of the field.
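The particle-filter idea can be sketched in a few functions: propagate particles by the step (odometry) estimate with added noise, then reweight them by how well each explains a measured distance to a known field feature. Everything here, including the goalpost position and the noise levels, is an illustrative assumption:

```python
import math
import random

# Hypothetical landmark: a goalpost at a known field position (metres).
GOALPOST = (4.5, 0.8)

def predict(particles, d_fwd, d_turn, xy_noise=0.02, th_noise=0.01):
    """Apply the odometry (step) estimate to every (x, y, theta)
    particle, adding noise so accumulated step error stays covered."""
    out = []
    for x, y, th in particles:
        th2 = th + d_turn + random.gauss(0.0, th_noise)
        step = d_fwd + random.gauss(0.0, xy_noise)
        out.append((x + step * math.cos(th2),
                    y + step * math.sin(th2), th2))
    return out

def weigh(particles, measured_dist, sigma=0.3):
    """Weight particles by how well they explain the measured
    distance to the goalpost feature (Gaussian likelihood)."""
    ws = []
    for x, y, _ in particles:
        expected = math.hypot(GOALPOST[0] - x, GOALPOST[1] - y)
        err = measured_dist - expected
        ws.append(math.exp(-0.5 * (err / sigma) ** 2))
    total = sum(ws) or 1.0
    return [w / total for w in ws]
```

Resampling by weight and taking a weighted mean pose would complete the loop; the key point is that field-feature observations repeatedly correct the drift that pure step estimation accumulates.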

Attached file →


Please give a brief summary of your behavioral architecture and the decision processes of your robots (max. 1000 characters).

Throughout the last two years of our participation in RoboCup, we have succeeded in implementing teamwork between robots: robots share tasks based on their positions on the field. This division of tasks occurs when the game state is set, or when a robot enters or leaves the field. However, under this year's RoboCup regulations, striker and defender robots may no longer be assigned manually. Therefore, we no longer assign the striker directly at kickoff; instead, we divide the striker and defender roles by considering each robot's position relative to the two goalposts. These roles change automatically when a player leaves the field.
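A minimal version of that position-based role assignment could look like the following (the data layout and the closest-to-goal rule are our illustrative assumptions, not the team's exact decision logic):

```python
import math

def assign_roles(robot_poses, opp_goal):
    """Assign 'striker' to the active robot closest to the opponent
    goal and 'defender' to the rest, using only field positions.

    robot_poses maps robot id -> ((x, y), on_field). Off-field robots
    are skipped, so roles re-assign automatically when a player
    leaves the field.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    active = [rid for rid, (_, on_field) in robot_poses.items() if on_field]
    if not active:
        return {}
    striker = min(active, key=lambda rid: dist(robot_poses[rid][0], opp_goal))
    return {rid: ("striker" if rid == striker else "defender")
            for rid in active}
```

Because the assignment is recomputed from current positions each cycle, no robot is ever placed as striker manually, which is what the regulation change requires.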

Attached file →


List your previous participation (including rank) and contribution to the RoboCup community (release of open-source software, datasets, tools etc.)


Please list RoboCup-related papers your team published in 2019.