Walking motions use a predefined walking pattern that follows a trajectory generated by path planning. The pattern specifies individual trajectories for the hip center and both feet: the hip center moves along a sinusoidal trajectory in the XY plane, while each foot follows a parabolic trajectory perpendicular to the ground, so that one foot is touching the ground at all times. Inverse kinematics then determines each motor angle at a given time. IMU measurements are used only to balance the robot's pose in the pitch direction; no other active control has been implemented at the moment.
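The hip and foot trajectories described above can be sketched as follows. This is a minimal illustration, not the team's actual implementation; the function names, step period, sway amplitude, step length, and step height are all assumed values chosen for clarity.

```python
import math

def hip_trajectory(t, period=1.0, sway=0.03):
    """Lateral hip sway in the XY plane: a sinusoid with the step period.
    (period and sway amplitude are illustrative, not the real gait values.)"""
    return sway * math.sin(2.0 * math.pi * t / period)

def foot_trajectory(phase, step_length=0.10, step_height=0.04):
    """Swing-foot path over one step: linear forward motion with a
    parabolic height profile that is zero at lift-off (phase=0) and
    touch-down (phase=1), peaking mid-swing."""
    x = step_length * phase
    z = step_height * 4.0 * phase * (1.0 - phase)  # parabola, peak at phase=0.5
    return x, z

# Mid-swing the foot is at half the step length and full step height.
x_mid, z_mid = foot_trajectory(0.5)
```

In a real gait generator these Cartesian targets would then be fed to the inverse-kinematics solver to obtain the joint angles for each motor.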
Our team uses the Darknet ROS package (https://github.com/leggedrobotics/darknet_ros), currently running YOLOv3 out of the box without any additional training. We plan either to train YOLOv3 on the publicly available Humanoid League dataset or to train a custom architecture. Field lines and goal posts are detected with a simple Canny edge detector followed by a Hough line transform.
Our robot localizes based on the field lines only. A Canny edge detector highlights the white lines, and a Hough transform extracts line segments. Using the camera calibration, the segments are projected from the 2D image into 3D and converted into points that are passed to the AMCL algorithm to localize the robot. IMU measurements are passed through a complementary filter to aid the robot's odometry.
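A complementary filter like the one aiding the odometry can be sketched as follows. This is a generic single-axis formulation, not the team's code; the blend factor alpha and the update rate are assumed values.

```python
class ComplementaryFilter:
    """Fuse a gyro rate (fast but drifting) with an accelerometer tilt
    estimate (noisy but drift-free). alpha close to 1 trusts the
    integrated gyro; alpha here is an illustrative assumption."""

    def __init__(self, alpha=0.98):
        self.alpha = alpha
        self.pitch = 0.0  # filtered pitch estimate, radians

    def update(self, gyro_rate, accel_pitch, dt):
        # Integrate the gyro rate, then pull the estimate toward the
        # accelerometer's absolute (but noisy) pitch reading.
        integrated = self.pitch + gyro_rate * dt
        self.pitch = self.alpha * integrated + (1.0 - self.alpha) * accel_pitch
        return self.pitch

# With a stationary gyro and a constant accelerometer pitch of 0.1 rad,
# the estimate converges toward 0.1.
filt = ComplementaryFilter(alpha=0.9)
for _ in range(200):
    pitch = filt.update(gyro_rate=0.0, accel_pitch=0.1, dt=0.01)
```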
Currently, our robot is programmed to search for the ball, walk towards it, and kick it towards the goal.
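The search/walk/kick behavior can be summarized as a small state machine. This is a hypothetical sketch of the behavior loop, not the actual code; the state names and transition conditions are assumptions.

```python
from enum import Enum, auto

class State(Enum):
    SEARCH = auto()        # scan for the ball
    WALK_TO_BALL = auto()  # approach the ball
    KICK = auto()          # kick towards the goal

def next_state(state, ball_visible, ball_in_range):
    """One transition step: search until the ball is seen, walk until it
    is within kicking range, kick, then resume searching."""
    if state is State.SEARCH:
        return State.WALK_TO_BALL if ball_visible else State.SEARCH
    if state is State.WALK_TO_BALL:
        if not ball_visible:
            return State.SEARCH
        return State.KICK if ball_in_range else State.WALK_TO_BALL
    return State.SEARCH  # after a kick, go back to searching
```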
Our open-source repository can be found at https://github.com/utra-robosoccer/soccer_ws.