
Team website
Qualification video
Team short paper
Hardware specifications

Software description


Please give a brief summary of your walking algorithm (max. 1000 characters).

To generate a footstep pattern, walking parameters such as step length, step time, and the supporting leg of each step must first be determined; these parameters are specified by the behavior control algorithm. ALICE uses a ZMP-based preview control walking method. Preview control increases the stability of the controller by compensating with the reference values over the preview horizon, so the ZMP can be kept more stably inside the support polygon. The flow chart of the controller is shown in Figure 2. However, the standard square-shaped ZMP reference curve causes rapid movement that requires high torque, and the rapid displacement results in unstable walking. A fifth-order polynomial is therefore used to produce a smooth reference ZMP. The difference between the regular preview control reference and the fifth-order polynomial reference ZMP is shown in Figure 3. In addition, an inverted pendulum model is used as a simplified dynamic model for ALICE.
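The fifth-order polynomial reference can be sketched as a quintic blend between two consecutive ZMP positions with zero velocity and acceleration at both ends, which is what removes the rapid displacement of the square-shaped reference. This is a minimal illustration, not the team's actual implementation; the function name and sampling are assumptions.

```python
import numpy as np

def quintic_zmp_segment(p0, p1, T, n=51):
    """Fifth-order polynomial blend of the ZMP reference from p0 to p1
    over duration T, with zero velocity and acceleration at both ends.
    (Hypothetical helper for illustration.)"""
    t = np.linspace(0.0, T, n)
    s = t / T
    # Quintic "smoothstep": 10 s^3 - 15 s^4 + 6 s^5
    # (boundary conditions: blend(0)=0, blend(1)=1, zero 1st/2nd
    # derivatives at both ends)
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5
    return p0 + (p1 - p0) * blend
```

A square-shaped reference would jump from p0 to p1 instantly; the quintic segment reaches the same target while keeping the commanded ZMP (and hence the required torque profile) smooth.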



Please give a brief summary of your vision algorithm, i.e. how your robots detect balls, field borders, field lines, goalposts and other robots (max. 1000 characters).

Deep learning: For real-time processing we need more than 15 fps, so we selected the YOLOv3-tiny model for its fast inference. The model has four classes (robot, soccer ball, goalpost, center circle). About 10k images were collected on site and used after data augmentation. Fig. 1 shows the object detection results of the final trained model. ZED stereo camera: To get the RGB frame and depth data from the ZED camera, the open-source library zed-ros-wrapper is used. Fig. 2 shows the results of visual localization; the red dot on the right map represents the absolute coordinates calculated from the depth value of the center circle. Recognition: To use the depth of an object, the camera frame is transformed into the robot frame. Figs. 3 and 4 show how the field is recognized by converting RGB to HSV and then setting upper and lower bounds. Fig. 5 shows the yellow box used to calculate the depth of the goalpost. Fig. 6 is the vision software flow chart.
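The HSV thresholding step above can be sketched as a per-channel bounds check; the threshold values here are placeholders, since the actual bounds are tuned on site.

```python
import numpy as np

def field_mask(hsv_image, lower, upper):
    """Binary mask of pixels whose (H, S, V) values fall inside the
    per-channel [lower, upper] bounds. Equivalent in spirit to
    OpenCV's inRange on an HSV image. (Sketch; bounds are assumed.)"""
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    return np.all((hsv_image >= lower) & (hsv_image <= upper), axis=-1)
```

In practice the input would come from converting the ZED RGB frame to HSV (e.g. with cv2.cvtColor), and the resulting mask delimits the green field region used by the later recognition steps.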



Please give a brief summary of how your robots localize themselves on the field (max. 1000 characters).

Localization consists of Line Intersection Point Detection (LIPD) and a particle filter. LIPD finds the white-line intersection points of the soccer field: 1. The field is extracted from the camera image by HSV color detection. 2. Lines are extracted from the field region with the Hough transform; if the surrounding color of an extracted line is white, it is accepted as a field line. 3. The intersection points between the field lines are calculated. 4. Each point is classified using the current position and the intersection type. The particle filter estimates the robot position with the sensor resetting localization (SRL) algorithm, which relocates the particles based on sensor information when tracking fails. This algorithm is used especially to handle problems such as severe drift and the small number of landmarks; it increases the success rate of position tracking and enables fast relocalization.
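Step 3 above reduces to intersecting pairs of lines returned by the Hough transform. A minimal sketch, assuming the usual (rho, theta) parameterization where each line satisfies x·cos(theta) + y·sin(theta) = rho:

```python
import numpy as np

def line_intersection(rho1, theta1, rho2, theta2):
    """Intersection of two lines given in Hough (rho, theta) form.
    Returns (x, y), or None if the lines are (near-)parallel.
    (Illustrative sketch, not the team's actual code.)"""
    A = np.array([[np.cos(theta1), np.sin(theta1)],
                  [np.cos(theta2), np.sin(theta2)]])
    b = np.array([rho1, rho2])
    if abs(np.linalg.det(A)) < 1e-9:
        return None  # parallel field lines never intersect
    x, y = np.linalg.solve(A, b)
    return float(x), float(y)
```

Each intersection found this way is a candidate landmark (corner, T-junction, or crossing) that the particle filter can match against the known field geometry.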



Please give a brief summary of your behavioral architecture and the decision processes of your robots (max. 1000 characters).

ALICE's control repeats a three-step process: “Input”, “Think”, and “Act”. “Input” collects and stores all incoming data: referee box messages, vision data, and IMU data. “Think” decides what to do based on the “Input” data. When the referee box reports that the match is not running, or when the robot is penalized, the robot's goal is to stand still. When the state is Set and the time limit has not been exceeded, the robot moves to its destination. When the state is Ready, the robot stops moving and looks around. When the state is Play and the robot is a defender, it waits for 10 seconds or until the ball leaves its zone. During play the robot's top priority is to reach the ball; the next priority is to turn toward the goalpost. When the ball and goalpost are aligned and the ball is close enough, the robot kicks the ball. When a score is accepted, the robot returns to its position. “Act” sends the commands to every motor so that the robot can move.
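The “Think” step can be sketched as a small decision function over the game state, following the priorities described above. The state names and boolean inputs are illustrative assumptions, not the team's actual interface, and the defender's waiting behavior is omitted for brevity.

```python
def think(game_state, penalized, ball_close, aligned_to_goal):
    """Minimal sketch of the "Think" decision step.
    (Hypothetical state names and return values.)"""
    # Match not running, or robot penalized: stand still.
    if game_state not in ("READY", "SET", "PLAY") or penalized:
        return "stand_still"
    if game_state == "SET":
        return "move_to_destination"  # within the time limit
    if game_state == "READY":
        return "look_around"
    # PLAY: reach the ball first, then align to the goal, then kick.
    if ball_close and aligned_to_goal:
        return "kick"
    if ball_close:
        return "turn_to_goal"
    return "approach_ball"
```

The returned action would then be translated by “Act” into motor commands for the walking and kicking controllers.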



List your previous participation (including rank) and contribution to the RoboCup community (release of open-source software, datasets, tools etc.)

In 2018, 2nd place (out of 4) in round robin Group A; lost in the quarterfinals. In 2019, 5th place (out of 5) in the round robin.


Please list RoboCup-related papers your team published in 2019.