The motion control package is modified from the version we used for the TeenSize League in 2019. Because of the change of group, the original motion modules such as walking, turning, and kicking are retained, while the module for getting up after a fall is disabled. Specifically, the walking gait is designed on the three-dimensional linear inverted pendulum model, and multi-directional, multi-speed walking gait planning follows the algorithm of Kajita et al. For walking control, foot pressure sensing is used to identify the support state of the feet so that the switching time of the next step can be determined, which improves the adaptability of walking control to the environment. Further, we will try to feed IMU data back into real-time walking gait planning and control, and to apply posture estimation and gait planning based on the capture point [2,3].
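As a rough illustration of the three-dimensional linear inverted pendulum dynamics underlying the gait, the sketch below integrates the sagittal equation x'' = (g/z_c)(x - p) over one support phase with a fixed zero-moment point. The function name, the step duration, and the initial state are illustrative assumptions, not values from our controller.

```python
import math

G = 9.81  # gravity [m/s^2]

def lipm_step(x0, v0, zc, px, duration, dt=0.005):
    """Integrate the sagittal LIPM dynamics x'' = (g/zc) * (x - px)
    over one support phase with the ZMP held at px (semi-implicit Euler).
    Returns the CoM position and velocity at the end of the phase."""
    omega2 = G / zc
    x, v = x0, v0
    t = 0.0
    while t < duration:
        a = omega2 * (x - px)  # restoring term of the inverted pendulum
        v += a * dt
        x += v * dt
        t += dt
    return x, v

# Illustrative values: CoM 2 cm behind the support foot, pendulum height 0.6 m
x_end, v_end = lipm_step(x0=-0.02, v0=0.25, zc=0.60, px=0.0, duration=0.3)
```

Chaining such phases with the planned footstep (ZMP) sequence yields the multi-directional CoM trajectories described above; the actual planner follows Kajita's preview-control formulation rather than this forward simulation.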
Our vision perception system retains the main structure of the one we used for the TeenSize League 2019. Compared with last year's system, an MYNTEYE D1000 stereo camera is used to obtain a real-time video stream from which key objects are detected. We mainly use a color-filtering method for the detection tasks, based on the prior knowledge that different objects have different colors. To make the vision algorithm robust to outdoor scenes with varying lighting conditions, a preprocessing module is added in front of the algorithm. This self-adaptive module adjusts the filter thresholds according to the ambient brightness. The field image is then passed through the Canny edge detector to obtain a binary edge map, and the classical Hough Line Transform and Hough Circle Transform are applied to this binary image to detect the field lines and the center circle. Furthermore, a YOLO-like deep learning model is used to detect the goalposts and the ball.
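One simple way the self-adaptive preprocessing could work is to shift the lower brightness (HSV value) bound of the color filter in proportion to the scene's mean brightness. The function below is a minimal sketch of that idea; the function name, the gain of 0.5, and the default bounds are illustrative assumptions, not our actual parameters.

```python
def adaptive_value_threshold(value_pixels, base_lo=60, base_hi=255, target_mean=128):
    """Adapt the lower V (brightness) bound of an HSV color filter to the
    ambient brightness, estimated as the mean of the value channel.
    Brighter scenes raise the bound, darker scenes lower it."""
    mean_v = sum(value_pixels) / len(value_pixels)
    shift = int(0.5 * (mean_v - target_mean))  # illustrative gain
    lo = max(0, min(255, base_lo + shift))
    return lo, base_hi
```

The adapted bounds would then feed the per-color filters (e.g. an `inRange`-style mask) before edge detection and the Hough transforms.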
We maintain the current global position of the robot using the localization model shown in Fig. 2. The robot's current location is obtained through an odometry algorithm, which estimates the distance moved between two frames via the VINS method using the stereo camera and the IMU. After a period of motion, however, the estimated position becomes inaccurate because of accumulated error. To improve positioning accuracy, the robot exploits the fact that the positions of the center circle and the goals are fixed in the field frame. In addition, particle filtering is used to correct large errors: the particle with the highest score is the most likely pose of the robot. The global position of the robot can then be obtained and updated.
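The particle-scoring step can be sketched as follows: each particle (a candidate pose) is weighted by how well the ranges and bearings it predicts to the fixed landmarks match the observed ones, and the best-scoring particle is taken as the pose estimate. The landmark coordinates, the noise scale `sigma`, and all function names here are illustrative assumptions; our filter also includes resampling, which is omitted for brevity.

```python
import math

# Illustrative field-frame landmark positions [m], not our real field model
FIELD_LANDMARKS = {"center_circle": (0.0, 0.0), "goal": (4.5, 0.0)}

def landmark_score(particle, observations, sigma=0.3):
    """Weight a particle (x, y, theta) by the agreement between predicted
    and observed landmark ranges/bearings (Gaussian likelihood)."""
    x, y, th = particle
    w = 1.0
    for name, (obs_r, obs_b) in observations.items():
        lx, ly = FIELD_LANDMARKS[name]
        dx, dy = lx - x, ly - y
        pred_r = math.hypot(dx, dy)
        pred_b = math.atan2(dy, dx) - th
        db = math.atan2(math.sin(obs_b - pred_b), math.cos(obs_b - pred_b))  # wrap angle
        err = math.hypot(obs_r - pred_r, db)
        w *= math.exp(-(err * err) / (2 * sigma * sigma))
    return w

def best_particle(particles, observations):
    """Return the particle with the highest landmark score."""
    return max(particles, key=lambda p: landmark_score(p, observations))
```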
Based on a behavior tree, this package processes information from the Localization package, the Team Transmitter, and the Game Receiver, and then outputs action commands to the Motion package. Path planning algorithm: we measure the maximum turning rate of the robot at different forward speeds and build a walking-ability table. We then transform the path planning problem into a walking mode composed of an arc, a straight line, and omni-directional walking. By solving under the motion constraints with the straight-line segment set to zero, the maximum arc radius Rmax from the starting point to the target point can be obtained. The appropriate forward and turning speeds during the competition can then be chosen according to the radius of the path arc.
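The arc geometry and the table lookup above can be sketched as follows. With the robot at the origin heading along +x, the arc tangent to the heading that passes through a target (dx, dy) has radius R = (dx² + dy²) / (2·dy); the turning rate on that arc is v / R, which is checked against a walking-ability table. The table values and function names are illustrative assumptions, not our measured capabilities.

```python
import math

def arc_radius(dx, dy):
    """Radius of the circular arc that starts at the robot (origin, heading +x),
    is tangent to the current heading, and passes through the target (dx, dy)
    in the robot frame. dy == 0 degenerates to a straight line (infinite radius);
    a negative result means a right-hand turn."""
    if abs(dy) < 1e-9:
        return math.inf
    return (dx * dx + dy * dy) / (2.0 * dy)

def speeds_for_arc(radius, ability):
    """Pick the fastest feasible (forward speed, turn rate) pair from a
    walking-ability table given as (v, max_turn_rate) entries (hypothetical
    values). On an arc of radius R the required turn rate is v / R."""
    for v, max_omega in sorted(ability, reverse=True):  # try fastest first
        omega = 0.0 if math.isinf(radius) else v / radius
        if abs(omega) <= max_omega:
            return v, omega
    v, max_omega = min(ability)  # fall back to the slowest entry, turning at its limit
    return v, math.copysign(max_omega, radius)
```

During a match the behavior layer would call `arc_radius` on the target in the robot frame and send the resulting speed pair to the Motion package.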