ZJUDancer


Team website
Qualification video
Team short paper
Hardware specifications

Software description

walking

Please give a brief summary of your walking algorithm (max. 1000 characters).

Our walking algorithm can be divided into four modules. The first is the kinematic calculation module, which computes the forward and inverse kinematics of the legs and body. The second is the interpolation module; we use cubic interpolation, which is different from cubic spline interpolation. The third is our kinematic model of the robot's whole body. We analyze the kinematics in the state where the robot stands on one leg, and we build a three-mass model that assumes the robot's mass is concentrated at three points: the centers of mass of the upper body, the left leg, and the right leg. Finally, the fourth module is the walking module, whose algorithm is divided into two parts: gait pattern generation and a walking stabilizer.
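
As an illustration only, here is a minimal Python sketch of two of the pieces described above: cubic interpolation between keyframes and the combined center of mass of the three-mass model. The mass values and function names are hypothetical placeholders, not our actual implementation.

    import numpy as np

    # Hypothetical masses (kg); real values would come from the robot's CAD model.
    M_BODY, M_LEG = 3.0, 1.0

    def com_three_mass(p_body, p_left, p_right):
        """Combined CoM of the three-mass model (upper body, left leg, right leg)."""
        total = M_BODY + 2.0 * M_LEG
        return (M_BODY * np.asarray(p_body)
                + M_LEG * np.asarray(p_left)
                + M_LEG * np.asarray(p_right)) / total

    def cubic_interp(q0, q1, v0, v1, t, T):
        """Cubic polynomial joining (q0, v0) at t=0 to (q1, v1) at t=T."""
        a2 = 3.0 * (q1 - q0) / T**2 - (2.0 * v0 + v1) / T
        a3 = -2.0 * (q1 - q0) / T**3 + (v0 + v1) / T**2
        return q0 + v0 * t + a2 * t**2 + a3 * t**3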

vision

Please give a brief summary of your vision algorithm, i.e. how your robots detect balls, field borders, field lines, goalposts and other robots (max. 1000 characters).

The first module is imaging. To make the vision system more robust under sunlight, we switch the exposure among several levels to obtain a better image; with the auto-exposure embedded in the camera driver, there is less over- or underexposure. In practice the lighting is relatively stable over seconds or minutes, so computing the histogram in every program cycle would cost too much time. To balance time cost and effectiveness, we first evaluate the cognition result and then decide whether to compute the histogram. The second module is cognition, which the robots need in order to perceive their surroundings. Our detection of the field and the lines is based on traditional computer vision techniques such as the Hough transform, Canny edge detection, or simple thresholding to extract regions. Deep learning methods are used for the ball, goalposts, and robots.
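
The exposure-evaluation step could look roughly like the sketch below. The histogram thresholds and the camera.step_exposure() call are illustrative assumptions, not our driver's actual API.

    import cv2

    # Hypothetical thresholds; real values would be tuned on the field.
    DARK_FRACTION_MAX = 0.4
    BRIGHT_FRACTION_MAX = 0.4

    def exposure_ok(gray):
        """Check a grayscale frame for over-/under-exposure via its histogram."""
        hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
        hist /= hist.sum()
        too_dark = hist[:32].sum() > DARK_FRACTION_MAX      # mass in darkest bins
        too_bright = hist[224:].sum() > BRIGHT_FRACTION_MAX  # mass in brightest bins
        return not (too_dark or too_bright)

    def maybe_adjust_exposure(frame, cognition_confident, camera):
        """Only pay for the histogram when cognition already struggles."""
        if cognition_confident:
            return
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if not exposure_ok(gray):
            camera.step_exposure()  # hypothetical driver call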

localization

Please give a brief summary of how your robots localize themselves on the field (max. 1000 characters).

Our localization method is based on Adaptive Monte Carlo Localization (AMCL), with landmarks including the center circle, lines, goals, and corners. We are still trying to recognize more features on the field to enrich the markers available to AMCL. The robot's pose is unstable while walking, which causes large noise in the orientation estimate and makes the particles hard to converge. The heading is therefore provided by the IMU, and our algorithm estimates only the robot's position (x, y). The particles are updated with information from odometry and redistributed according to the landmarks obtained from cognition. As our odometry is more reliable this year, we reduced the noise to make the localization result more stable, and this has proved effective.
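
A minimal sketch of the (x, y)-only particle update, assuming the heading is fixed by the IMU; the particle count, noise levels, and Gaussian weighting below are illustrative assumptions rather than our tuned implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    particles = rng.uniform(-4.5, 4.5, size=(200, 2))  # hypothetical field bounds (m)

    def predict(particles, d_xy, noise=0.02):
        """Shift particles by the odometry increment plus Gaussian noise."""
        return particles + d_xy + rng.normal(0.0, noise, particles.shape)

    def update(particles, expected_fn, observed, sigma=0.3):
        """Reweight particles by landmark agreement, then resample.
        expected_fn(p) maps each particle to the landmark position it predicts;
        observed is the landmark position reported by cognition."""
        err = np.linalg.norm(expected_fn(particles) - observed, axis=1)
        w = np.exp(-0.5 * (err / sigma) ** 2)
        w /= w.sum()
        idx = rng.choice(len(particles), size=len(particles), p=w)
        return particles[idx]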

behavior

Please give a brief summary of your behavioral architecture and the decision processes of your robots (max. 1000 characters).

Currently, our behavior module is based on a mixture of a behavior tree and a finite state machine, implemented in Python. It reads information from the localization and GameController modules and plans an action command in each iteration of the main loop. As the localization is not accurate enough for complicated movement strategies, we chose decentralized decision making: our robots make decisions from relative positions, which avoids the impact of wrong self-localization. The most important rule is that one robot controls the ball (i.e., goes to kick it) until it kicks or falls, which avoids continual status changes caused by uncertain information. While one robot is controlling the ball (the kicker), the other attacker assists it (the assistant). When the kicker kicks or falls, the assistant moves up to control the ball and becomes the kicker.
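
The kicker/assistant switching rule can be expressed as a small state machine, sketched below; the event flags are hypothetical inputs that would come from team communication in practice.

    from enum import Enum, auto

    class Role(Enum):
        KICKER = auto()
        ASSISTANT = auto()

    def update_role(role, i_kicked, i_fell, mate_kicked, mate_fell):
        """The kicker keeps the ball until it kicks or falls; this hysteresis
        suppresses role oscillation caused by uncertain information."""
        if role is Role.KICKER and (i_kicked or i_fell):
            return Role.ASSISTANT
        if role is Role.ASSISTANT and (mate_kicked or mate_fell):
            return Role.KICKER
        return role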

contributions

List your previous participation (including rank) and contribution to the RoboCup community (release of open-source software, datasets, tools etc.)

In recent years we placed second at RoboCup 2015 in Hefei, China; second at RoboCup 2016 in Leipzig, Germany; second at RoboCup 2017 in Nagoya, Japan; fourth at RoboCup 2018 in Montreal, Canada; and second at RoboCup 2019 in Sydney, Australia.

publications

Please list RoboCup-related papers your team published in 2019.

ZJUDancer Team Description Paper for Humanoid Kid-Size League of RoboCup 2019