TKU


Team website
Qualification video
Team short paper
Hardware specifications

Software description

walking

Please give a brief summary of your walking algorithm (max. 1000 characters).

To simplify the design, a simple oscillator-based model is used to generate the biped walking gait as modulated sinusoidal patterns. The gait is composed of two kinds of oscillator groups: the balance oscillator group and the movement oscillator group. One balance oscillator group generates the oscillation of the waist, and two movement oscillator groups generate the oscillations of the two ankles. The balance oscillator group maintains the balance of the robot, while the two movement oscillator groups support and move the robot's body. To keep the biped robot from falling while walking, the ZMP must remain within the sole of the support foot. The gait trajectory is planned based on the linear inverted pendulum model: after each step, the center of mass moves away from the support foot, and this motion is modeled as a linear inverted pendulum.
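The oscillator scheme above can be sketched as coupled sinusoids; the amplitudes, step period, and phase relations below are illustrative assumptions, not the team's actual gait parameters.

```python
import math

def gait_trajectories(t, step_period=0.5, waist_amp=0.02, ankle_lift=0.03, stride=0.04):
    """Hypothetical oscillator-based gait generator.

    Returns the lateral waist sway (balance oscillator) and the forward/vertical
    ankle trajectories (movement oscillators) at time t, all in meters.
    """
    w = 2 * math.pi / step_period
    # Balance oscillator: the waist sways laterally at half the stepping
    # frequency, shifting the ZMP over the current support foot.
    waist_y = waist_amp * math.sin(w * t / 2)
    # Movement oscillators: each ankle lifts only during its swing half-cycle
    # (the rectified half of its sinusoid); the two ankles are in antiphase.
    left_z = ankle_lift * max(0.0, math.sin(w * t))
    right_z = ankle_lift * max(0.0, math.sin(w * t + math.pi))
    # Forward motion of each foot, also in antiphase.
    left_x = stride * math.sin(w * t / 2)
    right_x = stride * math.sin(w * t / 2 + math.pi)
    return waist_y, (left_x, left_z), (right_x, right_z)
```

Sampling this function at the servo control rate yields the modulated sinusoidal patterns described above; keeping the waist sway in phase with the support foot keeps the ZMP inside the support polygon.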

Attached file →

vision

Please give a brief summary of your vision algorithm, i.e. how your robots detect balls, field borders, field lines, goalposts and other robots (max. 1000 characters).

Since we are a new team at RoboCup, we have so far implemented only the basic recognition part. We use OpenCV's cascade classifier training to detect the soccer ball, trained with 3000 positive samples and 4000 negative samples. OpenCV offers two feature types for this training, LBP and Haar; we chose LBP because its recognition efficiency is better than Haar's.
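As a rough illustration of the LBP features that such a cascade is trained on, here is a minimal pure-Python computation of the basic 8-neighbor LBP code. OpenCV's trainer uses an optimized multi-scale variant; this sketch is only for intuition.

```python
def lbp_image(img):
    """Compute the basic 8-neighbor LBP code for each interior pixel.

    img is a 2D list of grayscale values. Each neighbor that is >= the
    center pixel contributes one bit, giving a code in 0..255.
    """
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    # Neighbors visited clockwise starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            out[y][x] = code
    return out
```

Because the code depends only on sign comparisons, LBP is computed with integer operations and no floating-point filtering, which is the main reason LBP cascades evaluate faster than Haar cascades.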

Attached file →

localization

Please give a brief summary of how your robots localize themselves on the field (max. 1000 characters).

For localization, we use Monte Carlo localization: a particle filter with 500 particles estimates the robot's position. The filter's inputs are the camera image and IMU data for the heading angle. The system is divided into two parts: the first part extracts field feature points from the robot's camera image, and the second part matches these feature points against the known positions of the field features. From the matches we obtain the robot's position.
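One predict-weight-resample cycle of such a particle filter can be sketched as follows, assuming for simplicity a single known field feature observed as a distance; the function name, noise parameters, and measurement model are illustrative assumptions, not the team's implementation.

```python
import math
import random

def particle_filter_step(particles, control, measurement, landmark,
                         noise_std=0.02, meas_std=0.2):
    """One cycle of a toy 2D Monte Carlo localization filter.

    particles: list of (x, y) hypotheses; control: odometry (dx, dy);
    measurement: observed distance to a known field feature at `landmark`.
    """
    # 1. Motion update: shift every particle by the odometry plus noise.
    moved = [(x + control[0] + random.gauss(0, noise_std),
              y + control[1] + random.gauss(0, noise_std))
             for x, y in particles]
    # 2. Measurement update: weight each particle by how well its predicted
    #    distance to the feature matches the observed distance (Gaussian).
    weights = []
    for x, y in moved:
        predicted = math.hypot(landmark[0] - x, landmark[1] - y)
        err = predicted - measurement
        weights.append(math.exp(-err * err / (2 * meas_std ** 2)))
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # 3. Resample: draw particles with probability proportional to weight.
    return random.choices(moved, weights=weights, k=len(moved))
```

A full system would track heading from the IMU and weight against many matched feature points per frame, but each frame still reduces to this same cycle; the pose estimate is the weighted mean of the surviving particles.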

Attached file →

behavior

Please give a brief summary of your behavioral architecture and the decision processes of your robots (max. 1000 characters).

The robot behavior is divided into an attacker strategy and a goalkeeper strategy. The attacker tries to determine the ball's position and kick the ball toward the opponent's goal. Its strategy is divided into three parts: the first part initializes the strategy information; the second part finds the position of the ball; the third part walks to the ball and kicks it. The goalkeeper strategy is divided into two cases: the position of the ball is known, or it is unknown.
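The three-part attacker process can be sketched as a small state machine; the state names and action strings below are hypothetical placeholders for the team's actual behaviors.

```python
from enum import Enum, auto

class AttackerState(Enum):
    INIT = auto()              # part 1: initialize strategy information
    SEARCH_BALL = auto()       # part 2: find the position of the ball
    APPROACH_AND_KICK = auto() # part 3: walk to the ball and kick it

def attacker_step(state, ball_visible, ball_reached):
    """One decision step: return (next_state, action_command)."""
    if state is AttackerState.INIT:
        return AttackerState.SEARCH_BALL, "init"
    if state is AttackerState.SEARCH_BALL:
        if ball_visible:
            return AttackerState.APPROACH_AND_KICK, "walk_to_ball"
        return AttackerState.SEARCH_BALL, "scan"
    # APPROACH_AND_KICK: fall back to searching if the ball is lost.
    if not ball_visible:
        return AttackerState.SEARCH_BALL, "scan"
    if ball_reached:
        return AttackerState.SEARCH_BALL, "kick"
    return AttackerState.APPROACH_AND_KICK, "walk_to_ball"
```

The goalkeeper's two cases (ball position known vs. unknown) would map onto the same pattern with a different state set and actions.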

Attached file →

contributions

List your previous participation (including rank) and contribution to the RoboCup community (release of open-source software, datasets, tools etc.)

publications

Please list RoboCup-related papers your team published in 2019.