I-KID


Team website
Qualification video
Team short paper
Hardware specifications

Software description

Global description file

walking

Please give a brief summary of your walking algorithm (max. 1000 characters).

At present, the gait parameters of the robot are divided into single-step parameters, inter-step connection parameters and leg structure parameters. The single-step parameters are the key frames that the robot must follow when executing a given gait. The inter-step connection parameters govern the transition (interpolation) between key frames, and are combined with the leg structure parameters to keep the robot stable while it moves between two key frames.
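The transition between two key frames can be sketched as a simple blend of joint angles. This is a hypothetical illustration of the idea, assuming key frames are stored as per-joint angles; the names and values are invented, not the team's actual gait code.

```python
# Hypothetical sketch of key-frame interpolation for a gait engine.
# Joint names and angle values (radians) are illustrative assumptions.

def interpolate_keyframes(frame_a, frame_b, alpha):
    """Linearly blend two key frames (dicts of joint angles).

    alpha is the normalized phase between the frames: 0.0 -> frame_a,
    1.0 -> frame_b.  The inter-step connection parameters would decide
    how alpha evolves over time (e.g. linearly or with easing).
    """
    return {joint: (1.0 - alpha) * frame_a[joint] + alpha * frame_b[joint]
            for joint in frame_a}

# Two illustrative key frames of a single step (pitch joints of one leg).
stance = {"hip_pitch": -0.20, "knee_pitch": 0.40, "ankle_pitch": -0.20}
swing  = {"hip_pitch": -0.50, "knee_pitch": 0.90, "ankle_pitch": -0.40}

midpoint = interpolate_keyframes(stance, swing, 0.5)
```

In a real gait engine the blend would not be purely linear; the connection parameters and a stabilizer using the leg structure would shape the trajectory between the two key frames.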

Attached file →

vision

Please give a brief summary of your vision algorithm, i.e. how your robots detect balls, field borders, field lines, goalposts and other robots (max. 1000 characters).

This year we use a new network for feature extraction called YOLOv3. YOLOv3 is the third object detection algorithm in the YOLO (You Only Look Once) family; it improves accuracy with many tricks and is better at detecting small objects. It has fifty-two convolutional layers and one connected layer, hence the name Darknet-53. At test time we use convolutional weights that are pre-trained on our own images.
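After a YOLO-style network produces candidate boxes, a confidence threshold and non-maximum suppression (NMS) are typically applied to keep one box per object. The following is a minimal pure-Python sketch of that post-processing step; the box format and thresholds are assumptions for illustration, not the team's actual pipeline.

```python
# Minimal sketch of confidence thresholding + NMS, as typically used
# after a YOLO-style detector.  Box format (x1, y1, x2, y2) and the
# threshold values are illustrative assumptions.

def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(detections, conf_thresh=0.5, iou_thresh=0.45):
    """detections: list of (box, confidence); returns kept detections."""
    dets = sorted((d for d in detections if d[1] >= conf_thresh),
                  key=lambda d: d[1], reverse=True)
    kept = []
    for box, conf in dets:
        # Keep a box only if it does not overlap a stronger kept box.
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, conf))
    return kept

# Two overlapping ball candidates and one weak detection.
candidates = [((10, 10, 50, 50), 0.9),
              ((12, 12, 52, 52), 0.8),      # suppressed: overlaps the first
              ((100, 100, 140, 140), 0.3)]  # dropped: below threshold
```

In practice the detector's output also carries class scores (ball, goalpost, robot), and NMS is run per class.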

Attached file →

localization

Please give a brief summary of how your robots localize themselves on the field (max. 1000 characters).

The robot localizes itself using the landmark information detected by its vision sensors. Global positioning requires the robot to estimate its own pose first; the relative position of the ball is then transformed into the global coordinate system according to the robot's pose. The landmarks mainly include the boundary lines of the pitch and its key points. We use a particle filter, which is widely used in robotics, to localize the robot. When landmarks cannot be detected reliably, we track the robot's pose instead, based on the odometry and the visual compass on the robot.
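The weight-and-resample cycle of such a particle filter can be sketched in a toy setting. This is a 1-D illustration under invented noise values; the real filter runs in (x, y, θ) on the field with multiple landmark types.

```python
# Toy 1-D particle-filter sketch of landmark-based localization.
# Field range, noise levels and particle count are illustrative
# assumptions; the real filter estimates (x, y, theta) on the pitch.
import math
import random

random.seed(0)

LANDMARK = 5.0   # known landmark position on a 1-D "field" [0, 5]
TRUE_POS = 2.0   # robot's actual position (unknown to the filter)

def measure(pos, noise=0.1):
    """Noisy range measurement from `pos` to the landmark."""
    return abs(LANDMARK - pos) + random.gauss(0.0, noise)

def likelihood(expected, observed, sigma=0.2):
    """Unnormalized Gaussian likelihood of an observation."""
    return math.exp(-((expected - observed) ** 2) / (2.0 * sigma ** 2))

# Start with particles spread uniformly over the field.
particles = [random.uniform(0.0, 5.0) for _ in range(500)]
for _ in range(30):
    z = measure(TRUE_POS)
    # Weight each particle by how well it explains the measurement.
    weights = [likelihood(abs(LANDMARK - p), z) for p in particles]
    # Resample with replacement, then add motion (odometry) noise.
    particles = random.choices(particles, weights, k=len(particles))
    particles = [p + random.gauss(0.0, 0.05) for p in particles]

estimate = sum(particles) / len(particles)  # close to TRUE_POS
```

The same predict/weight/resample loop carries over to the 2-D field; when no landmark is visible, only the prediction step (odometry plus visual compass) updates the particles.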

Attached file →

behavior

Please give a brief summary of your behavioral architecture and the decision processes of your robots (max. 1000 characters).

The robot's behavior is based on a hierarchical state machine (HSM) programmed in XABSL [1]. The whole behavior is composed of multiple options; the robot's option tree is shown in Fig. 2. In this structure, a decision by the robot means that a basic behavior is activated: after entering the root option, one of its sub-options is activated and given the corresponding parameters, and this recurses downward until a lowest-level option is activated. Each option contains an initial state, which is entered when the option is activated, and a number of other states. Transitions between states are made by a decision tree, whose conditions are evaluated from the parameters passed down by the parent option, sensor information, and so on.
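The option recursion described above can be sketched with a tiny two-level tree. The option and state names below are invented for illustration and do not come from the team's XABSL code.

```python
# Minimal sketch of an XABSL-style option tree: each option picks a
# state from a small decision tree and may recurse into a sub-option.
# Option/state names and the 0.3 m threshold are invented examples.

def play_soccer(world):
    """Root option: selects a sub-option from sensor information."""
    if world["ball_seen"]:
        return approach_ball(world)   # recurse into a sub-option
    return search_ball(world)

def approach_ball(world):
    """Sub-option: initial state plus a decision-tree transition."""
    state = "initial"                 # entered whenever activated
    if world["ball_distance"] < 0.3:  # condition from sensor data
        state = "kick"
    else:
        state = "walk_to_ball"
    return ("approach_ball", state)

def search_ball(world):
    return ("search_ball", "turn_head")

# The lowest activated option/state pair is the robot's current action.
action = play_soccer({"ball_seen": True, "ball_distance": 0.2})
```

In XABSL itself the options, states and decision trees are declared in the behavior language and executed by its engine; the sketch only mirrors the control flow.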

Attached file →

contributions

List your previous participation (including rank) and contribution to the RoboCup community (release of open-source software, datasets, tools etc.)

publications

Please list RoboCup-related papers your team published in 2019.