Our walking algorithm can be divided into four modules. The first is the kinematic calculation module, which computes the forward and inverse kinematics of the legs and body. The second is the interpolation module; we use cubic polynomial interpolation, which differs from cubic spline interpolation. The third is the kinematic model of the whole body of the robot. We chose one robot state, the state in which the robot stands on one leg, to analyze the kinematics, and we built a three-mass model that assumes the mass of the robot is concentrated at three points: the center of mass of the upper body, of the left leg, and of the right leg. Finally, the fourth is the walking module. The walking algorithm is divided into two parts: gait pattern generation and the walking stabilizer.
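The cubic interpolation mentioned above can be illustrated with a minimal sketch: unlike a spline, each segment is solved independently from its own boundary positions and velocities. The function names are ours for illustration, not the team's code.

```python
def cubic_coeffs(p0, v0, p1, v1, T):
    """Coefficients of p(t) = a0 + a1*t + a2*t^2 + a3*t^3 on [0, T],
    matching position and velocity at both segment endpoints."""
    a0 = p0
    a1 = v0
    a2 = 3.0 * (p1 - p0) / T**2 - (2.0 * v0 + v1) / T
    a3 = 2.0 * (p0 - p1) / T**3 + (v0 + v1) / T**2
    return a0, a1, a2, a3

def cubic_eval(coeffs, t):
    """Evaluate the cubic polynomial at time t."""
    a0, a1, a2, a3 = coeffs
    return a0 + a1 * t + a2 * t**2 + a3 * t**3
```

With zero boundary velocities this yields a smooth start and stop of, say, a foot trajectory within one step period.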
The first module is imaging. To make the vision system more robust under sunlight, we try to obtain a better image by switching the exposure among several levels, which yields less overexposure or underexposure than the embedded auto-exposure in the camera driver. In practice the lighting is relatively stable over seconds or minutes, so computing the histogram in every program cycle would cost too much time. To balance time cost against effect, we evaluate the cognition result first and then decide whether to compute the histogram. The second module is cognition, a necessary part of the robots' perception of their surroundings. Our detection of the field and the lines is based on traditional computer vision techniques such as the Hough transform, Canny edge detection, or simple thresholding to extract regions; a deep learning method is used for the ball, goalposts, and robots.
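A histogram-based exposure decision of the kind described above might look like the following sketch. The thresholds and the stepping policy are our assumptions for illustration; the team's actual criteria are not specified in the text.

```python
import numpy as np

# Hypothetical thresholds, not from the team's code.
OVER_FRAC = 0.25   # fraction of near-white pixels that flags overexposure
UNDER_FRAC = 0.25  # fraction of near-black pixels that flags underexposure

def next_exposure_level(gray, current, n_levels=5):
    """Pick the next exposure-level index from a grayscale frame's histogram.

    gray: uint8 image of shape (H, W); current: index into the discrete
    exposure levels the camera cycles among.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    hist /= hist.sum()
    if hist[240:].sum() > OVER_FRAC and current > 0:
        return current - 1              # too bright: step exposure down
    if hist[:16].sum() > UNDER_FRAC and current < n_levels - 1:
        return current + 1              # too dark: step exposure up
    return current                      # histogram looks balanced
```

As the text notes, this computation would be skipped on most cycles and run only when the cognition result degrades.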
Our localization method is based on Adaptive Monte Carlo Localization (AMCL), with landmarks including the circle, lines, goals, and corners. We are still trying to recognize more features in the field to enrich the markers for AMCL. The robot's unstable pose while walking causes large noise in orientation, which makes the particles hard to converge. Thus the direction information is provided by the IMU, and our algorithm only estimates (x, y), the position of the robot. The particles are updated with information from odometry and are redistributed according to the landmarks obtained from cognition. As our odometry is more reliable this year, we reduced the noise to make the localization result more stable, and it does have an effect.
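The position-only particle filter described above can be sketched as follows: odometry drives the prediction, a landmark observation reweights the particles, and systematic resampling redistributes them. This is a generic Monte Carlo localization skeleton under our own simplifying assumptions (a single range-only landmark observation, hand-picked noise levels), not the team's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, odom_dxy, noise_std=0.02):
    """Move each (x, y) particle by the odometry delta plus Gaussian noise.
    Heading comes from the IMU, so odom_dxy is already in the field frame."""
    return particles + odom_dxy + rng.normal(0.0, noise_std, particles.shape)

def update_weights(particles, observed_range, landmark_xy, sigma=0.3):
    """Weight particles by how well the measured distance to a known
    landmark matches each particle's predicted distance."""
    predicted = np.linalg.norm(particles - landmark_xy, axis=1)
    w = np.exp(-0.5 * ((predicted - observed_range) / sigma) ** 2)
    return w / w.sum()

def resample(particles, weights):
    """Low-variance (systematic) resampling."""
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx]
```

Because orientation is fixed by the IMU, the state per particle is just two numbers, which keeps the filter cheap enough for the main loop.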
Currently, our behavior module is based on a mixture of a behavior tree and a finite state machine, implemented in Python. It reads information from the localization and gamecontroller modules and plans an action command in each main loop. As localization is not reliable enough for complicated moving strategies, we chose decentralized decision making: our robots make decisions from relative positions, which avoids the impact of wrong self-localization. The most important rule is that one robot controls the ball (that is, goes to kick it) until it kicks or falls, to avoid continual status changes caused by uncertain information. When one robot is controlling the ball (we call it the kicker), the other attacker assists it (we call it the assistant). When the kicker kicks or falls, the assistant soon goes ahead to control the ball and becomes the kicker.
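The kicker/assistant hand-off rule can be captured by a tiny state machine. The class and event names below are ours for illustration, assuming only the two triggering events the text names (a kick or a fall).

```python
from enum import Enum

class Role(Enum):
    KICKER = "kicker"
    ASSISTANT = "assistant"

class RoleStateMachine:
    """Minimal sketch of the hand-off rule: the kicker keeps control of the
    ball until it kicks or falls; only then do the two attackers swap roles."""

    def __init__(self, role):
        self.role = role

    def step(self, kicker_kicked, kicker_fell):
        # The role changes only on the two events named in the rule, so
        # noisy perception cannot cause continual role flapping.
        if kicker_kicked or kicker_fell:
            self.role = (Role.ASSISTANT if self.role is Role.KICKER
                         else Role.KICKER)
        return self.role
```

Latching the role on explicit events rather than on continuously re-evaluated positions is what makes the decision stable under uncertain information.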
In recent years, we won second place in Hefei, China (2015), Leipzig, Germany (2016), Nagoya, Japan (2017), and Sydney, Australia (2019), and fourth place in Montreal, Canada (2018).
ZJUDancer Team Description Paper for the Humanoid Kid-Size League of RoboCup 2019