
Team website
Qualification video
Team short paper
Hardware specifications

Software description

Global description file


Please give a brief summary of your walking algorithm (max. 1000 characters).

For walking and kicking, we use the ZMP-based algorithms described in [1]. We augment these algorithms with gravity compensation and a torso stabilization controller. Since these techniques rely heavily on dynamics models, we developed a system identification technique that obtains a better mass distribution model experimentally through foot pressure sensor measurements, instead of using data from CAD files. However, we have not yet executed this process on the Chape robot. The main innovation regarding motion control in 2019 was the development of a new spline-based kicking algorithm. In our previous motion, the kicking foot was constrained to remain parallel to the ground, which resulted in weak kicks. For more information, please refer to the Global system description.

[1] Marcos R. O. A. Maximo. Omnidirectional ZMP-Based Walking for a Humanoid Robot. Master's thesis, Aeronautics Institute of Technology, 2015.
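As an illustration of the spline idea for kicking, the sketch below interpolates the kicking foot's pitch with cubic Hermite segments instead of holding it parallel to the ground. The keyframe values and the two-phase split are hypothetical placeholders, not our actual parameters.

```python
def hermite(p0, p1, m0, m1, t):
    # Cubic Hermite basis: interpolates p0 -> p1 over t in [0, 1]
    # with end slopes m0 and m1.
    t2, t3 = t * t, t * t * t
    return ((2 * t3 - 3 * t2 + 1) * p0 + (t3 - 2 * t2 + t) * m0
            + (-2 * t3 + 3 * t2) * p1 + (t3 - t2) * m1)

def kick_foot_pitch(t, retract_pitch=-0.3, strike_pitch=0.4):
    """Foot pitch (rad) over normalized kick time t in [0, 1].

    Unlike a ground-parallel constraint (pitch = 0 throughout), the
    spline lets the foot tilt back during retraction and swing forward
    at impact. Keyframe angles here are illustrative, not tuned values.
    """
    if t < 0.5:  # retraction phase: tilt the foot back
        return hermite(0.0, retract_pitch, 0.0, 0.0, t / 0.5)
    # strike phase: swing through to a forward pitch at contact
    return hermite(retract_pitch, strike_pitch, 0.0, 0.0, (t - 0.5) / 0.5)
```

Zero end slopes make consecutive segments join with continuous velocity, so the trajectory remains smooth at the keyframes.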


Please give a brief summary of your vision algorithm, i.e. how your robots detect balls, field borders, field lines, goalposts and other robots (max. 1000 characters).

Our current approach to computer vision is based on the NimbRo team's 2015 paper [1]. This approach uses a convex hull algorithm to detect the field and a Hough line detector for the field lines. We do not detect the penalty cross due to the amount of false positives encountered. The white ball and the goalposts are detected using a convolutional neural network based on the model described in the YOLO papers [2]. Starting from the original FastYOLO architecture, we decreased the number of filters in each layer, since our robot has limited processing power.

[1] H. Farazi et al. A Monocular Vision System for Playing Soccer in Low Color Information Environments.
[2] Joseph Redmon, Santosh Kumar Divvala, Ross B. Girshick, and Ali Farhadi. You Only Look Once: Unified, Real-Time Object Detection.
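For concreteness, here is a pure-Python sketch of the convex hull step used in field detection. It uses Andrew's monotone chain algorithm, which is one common choice; our actual implementation may differ, and in practice the input points come from color classification of the camera image.

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull; points are (x, y) tuples.

    In a field detector, `points` would be image coordinates of
    field-colored (green) pixels; the hull approximates the field boundary.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # Cross product of vectors o->a and o->b; > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Concatenate halves, dropping the duplicated endpoints.
    return lower[:-1] + upper[:-1]
```

Pixels inside the hull are then treated as the playing field, which restricts where the line, ball, and goalpost detectors need to look.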


Please give a brief summary of how your robots localize themselves on the field (max. 1000 characters).

To solve the global localization problem, we use a standard particle filter (Monte Carlo Localization), as described in [1, 2]. The landmark ambiguity makes initializing the filter with a uniform distribution risky, since the filter may converge to the wrong side of the field. Therefore, at the beginning, our algorithm distributes the particles among the 4 possible starting poses on the soccer field, as stated in the rules. Then, resampling is disabled while the head performs a 180° scan, accumulating information from the whole scan in the particles' weights before the first resampling. For more information, please refer to the Global system description.

[1] S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. MIT Press, 2005.
[2] Alexandre Muzio et al. Monte Carlo Localization with Field Lines Observations for Simulated Humanoid Robotic Soccer. In Proceedings of the 2016 Latin American Robotics Symposium (LARS), October 2016.
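A minimal sketch of this initialization and gated resampling follows. The noise standard deviations, pose format, and the `obs_weight_fn` observation model are assumed placeholders, not our actual implementation.

```python
import random

def init_particles(n, starting_poses):
    """Spread n particles evenly over the legal starting poses.

    Starting from the rule-defined poses (rather than a uniform
    field-wide distribution) avoids converging to the mirrored side
    of the symmetric field. Gaussian noise models placement
    uncertainty (std devs here are assumed values).
    """
    particles = []
    for i in range(n):
        x, y, theta = starting_poses[i % len(starting_poses)]
        particles.append((x + random.gauss(0, 0.1),
                          y + random.gauss(0, 0.1),
                          theta + random.gauss(0, 0.05)))
    return particles

def update(particles, weights, obs_weight_fn, scan_done):
    """One filter step: reweight on observations, but resample only
    after the initial 180-degree head scan has accumulated evidence
    in the weights."""
    weights = [w * obs_weight_fn(p) for p, w in zip(particles, weights)]
    total = sum(weights)
    weights = [w / total for w in weights]
    if not scan_done:  # keep accumulating; skip resampling
        return particles, weights
    # Weighted resampling once the full scan is available.
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)
```

Deferring the first resampling keeps low-weight hypotheses alive until observations from the entire scan can disambiguate the symmetric landmarks.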


Please give a brief summary of your behavioral architecture and the decision processes of your robots (max. 1000 characters).

Our decision making is based on a tree of behaviors. Execution begins at a high-level behavior that depends on the robot's role, such as "Attack", and descends through lower-level behaviors, such as "Position to Kick", until arriving at the behaviors that actually request actions from the control module. Some behaviors are modeled as finite-state machines. Our head policy switches between scanning the field for localization features and tracking the ball (when it has been seen). For navigation, we use potential fields [1]. Since we have been focusing on developing the low-level skills of our robots in the last years, we still lack some basic mechanisms, such as positioning and marking. However, we expect to transfer some of these techniques from our Soccer 2D and Soccer 3D code bases. Our robots lack cooperation for now, but we are working on implementing a simple centralized cooperative decision-making scheme.

[1] T. Hellstrom. Robot Navigation with Potential Fields.
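To illustrate the potential-fields navigation mentioned above, the sketch below combines an attractive force toward the goal with repulsive forces from nearby obstacles, following the classical formulation. The gains and the repulsion range are assumed example values, not our tuned parameters.

```python
import math

def potential_field_step(robot, goal, obstacles,
                         k_att=1.0, k_rep=0.5, rep_range=1.0):
    """Motion direction from an attractive force toward `goal` and
    repulsive forces from `obstacles` within `rep_range` meters.

    Returns a unit vector (dx, dy) the robot should walk along,
    or (0, 0) when the net force vanishes (e.g. at the goal).
    """
    # Attractive component: proportional to the vector toward the goal.
    fx = k_att * (goal[0] - robot[0])
    fy = k_att * (goal[1] - robot[1])
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy)
        if 1e-6 < d < rep_range:
            # Repulsion grows sharply as the obstacle gets closer.
            mag = k_rep * (1.0 / d - 1.0 / rep_range) / (d * d)
            fx += mag * dx
            fy += mag * dy
    norm = math.hypot(fx, fy)
    if norm < 1e-6:
        return (0.0, 0.0)
    return (fx / norm, fy / norm)
```

In a behavior such as "Position to Kick", the returned direction would be fed to the omnidirectional walk as a velocity command, recomputed every control cycle.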


List your previous participation (including rank) and contribution to the RoboCup community (release of open-source software, datasets, tools etc.)

- Top 8 - RoboCup 2019
- 1st place - Latin America Robotics Competition (LARC) 2019
- Top 8 - RoboCup 2018
- 1st place - Latin America Robotics Competition (LARC) 2018
- Top 16 - RoboCup 2017


Please list RoboCup-related papers your team published in 2019.

[1] Francisco Azevedo, Daniela Vacarini, Marcos Maximo, and Maurício Donadon. Innovative Foot Pressure Sensor Design for Humanoid Robots. In Proceedings of the 2019 International Congress of Mechanical Engineering (COBEM). Uberlândia, Brazil.
[2] Caroline Silva, Daniela Vacarini, Davi Barroso, Marcos Maximo, and Luiz Góes. Three-Dimensional Identification of a Humanoid Robot. In Proceedings of the 2019 International Congress of Mechanical Engineering (COBEM). Uberlândia, Brazil.
[3] Thiago Tonaco, Daniela Vacarini, Caroline Silva, Marcos Maximo, and Mariano Arbel. Humanoid Robot Leg Design. In Proceedings of the 2019 International Congress of Mechanical Engineering (COBEM). Uberlândia, Brazil.
[4] Caroline Silva, Marcos Maximo, and Luiz Góes. Height Varying Humanoid Robot Walking through Model Predictive Control. In Proceedings of the 2019 Latin American Robotics Symposium (LARS). Rio Grande, Brazil.