Team RoMeLa


Team website
Qualification video
Team short paper
Hardware specifications

Software description


Please give a brief summary of your walking algorithm (max. 1000 characters).

While THOR-RD retains the hybrid locomotion controller from our last RoboCup, ARTEMIS will primarily use Model Predictive Control (MPC), which optimizes the ground reaction force (GRF) profiles that the legs apply to the ground to achieve dynamic motions, including walking and running. Based on the current state of the robot, an optimization over a pre-defined horizon is run on a simplified dynamic model of the robot to output the desired ground reaction forces to execute. This yields a robust and dynamic policy on hardware, as demonstrated on recent quadrupedal robots. The approach also allows ARTEMIS to have a flight phase, so the robot can locomote considerably faster (i.e. run) than walking, which is the predominant form of locomotion in current RoboCup play.
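The receding-horizon idea above can be sketched in a few lines. This is a minimal illustration only, assuming a one-axis point-mass model and an unconstrained tracking objective (so the "QP" collapses to least squares); the mass, horizon, and reference values are invented, and the team's actual model and solver are not shown here.

```python
import numpy as np

m, g = 40.0, 9.81        # assumed robot mass (kg); gravity
dt, N = 0.05, 20         # control period (s) and prediction horizon length

# Discrete point-mass dynamics along one axis: state x = [position, velocity],
# input u = ground reaction force.  x_{k+1} = A x_k + B u_k
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt / m]])

def mpc_grf(x0, x_ref):
    """Finite-horizon tracking: stack the dynamics into prediction matrices
    and solve for the GRF sequence by least squares (no force constraints)."""
    # X = Sx @ x0 + Su @ U, where X stacks the predicted states x_1 .. x_N
    Sx = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    Su = np.zeros((2 * N, N))
    for k in range(N):
        for j in range(k + 1):
            Su[2 * k:2 * k + 2, j] = (np.linalg.matrix_power(A, k - j) @ B).ravel()
    # GRF profile that best tracks the reference trajectory over the horizon
    U, *_ = np.linalg.lstsq(Su, x_ref.ravel() - Sx @ x0, rcond=None)
    return U[0]  # receding horizon: apply only the first input, then re-solve

x0 = np.array([0.0, 0.0])                       # CoM at rest at the origin
x_ref = np.tile(np.array([0.5, 0.0]), (N, 1))   # reference: 0.5 m ahead, at rest
f = mpc_grf(x0, x_ref)                          # first GRF to apply (N)
```

A real controller would add friction-cone and contact-schedule constraints, which turns this into a proper QP; the receding-horizon structure stays the same.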


Please give a brief summary of your vision algorithm, i.e. how your robots detect balls, field borders, field lines, goalposts and other robots (max. 1000 characters).

THOR-RD uses monocular vision (Logitech C920 HD) while ARTEMIS uses stereo vision (Intel RealSense D435 in passive stereo mode) for its vision input. Both robots deploy the same setup as in our previous participation, where a look-up table (LUT) classifies each pixel into a pre-defined label (e.g. ball, field line, goal post, obstacle). This allows a supervised learning approach to be adopted for deciding which pixels belong to which label. The ground-truth labels are generated by a user who records ample data from the field and assigns a label to each pixel of the collected snapshots. This process is currently being modified so that the dataset can be generated efficiently in a semantic-segmentation style using the COCO dataset format. We expect the generated data to be useful to the greater RoboCup community.
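A LUT classifier of the kind described above can be sketched as follows. This is an illustrative assumption of the mechanism, not the team's code: the label set, the 64-bin colour quantization, and the helper names are all invented for the example.

```python
import numpy as np

# Hypothetical label IDs; the team's actual label set may differ.
LABELS = {0: "unknown", 1: "ball", 2: "field_line", 3: "goal_post", 4: "obstacle"}

# LUT indexed by quantized colour: 64 bins per channel -> 64^3 entries.
BINS = 64
lut = np.zeros(BINS ** 3, dtype=np.uint8)  # everything starts as "unknown"

def colour_index(img):
    """Map 8-bit 3-channel pixels to LUT indices by dropping the low 2 bits."""
    q = (img >> 2).astype(np.int64)        # 256 values -> 64 bins per channel
    return (q[..., 0] * BINS + q[..., 1]) * BINS + q[..., 2]

def train_pixel(colour, label):
    """Supervised step: a user-annotated pixel writes its label into the LUT."""
    lut[colour_index(colour[np.newaxis, :])[0]] = label

def classify(img):
    """Label every pixel of an HxWx3 image with a single LUT gather."""
    return lut[colour_index(img)]

# Teach the LUT that a bright orange colour is "ball", then classify an image.
train_pixel(np.array([255, 128, 0], dtype=np.uint8), 1)
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = [255, 128, 0]
labels = classify(img)   # labels[0, 0] == 1 ("ball"), the rest "unknown"
```

The appeal of this scheme is speed: classification is one vectorized table gather per frame, and the supervised annotation step only ever writes entries into the table.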


Please give a brief summary of how your robots localize themselves on the field (max. 1000 characters).

A conventional particle filter is used to track the robot's pose and heading direction (x, y, theta). Stationary landmarks (e.g. goal posts, lines, corners) detected by the cameras are used along with the robot's odometry and readings from the inertial measurement unit to probabilistically update the particles. We are currently extending this with sample-resetting localization, an approach adopted in the Standard Platform League, to localize more quickly and robustly in the presence of the large drift that may accumulate over an extended period of play.
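The predict/reweight/resample loop of such a filter can be sketched as below. This is a generic illustration, not the team's implementation: the particle count, noise levels, and the range-only landmark model are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = np.zeros((N, 3))          # (x, y, theta) hypothesis per particle
weights = np.full(N, 1.0 / N)

def motion_update(odom, sigma=(0.02, 0.02, 0.01)):
    """Propagate particles by odometry (dx, dy, dtheta) plus Gaussian noise."""
    global particles
    particles += odom + rng.normal(0.0, sigma, size=particles.shape)

def measurement_update(landmark_xy, measured_range, sigma_r=0.2):
    """Reweight particles by how well they predict the range to a known landmark."""
    global weights
    pred = np.linalg.norm(particles[:, :2] - landmark_xy, axis=1)
    weights *= np.exp(-0.5 * ((pred - measured_range) / sigma_r) ** 2)
    weights /= weights.sum()

def resample():
    """Systematic resampling: concentrate particles on the likely poses."""
    global particles, weights
    positions = (np.arange(N) + rng.random()) / N
    idx = np.searchsorted(np.cumsum(weights), positions)
    particles = particles[idx]
    weights = np.full(N, 1.0 / N)

motion_update(np.array([0.1, 0.0, 0.0]))               # walked ~10 cm forward
measurement_update(np.array([2.0, 0.0]), measured_range=1.9)  # saw a goal post
resample()
estimate = weights @ particles                          # weighted mean pose
```

Sample-resetting localization, mentioned above, would add a step that re-injects particles drawn directly from landmark observations whenever the filter's confidence collapses.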


Please give a brief summary of your behavioral architecture and the decision processes of your robots (max. 1000 characters).

The robot's behavior is centered around localization and a Boolean indicating whether the ball is in possession. Based on these, a finite state machine selects among 'attack' (dribble forward), 'shoot', 'approach', and 'contain'. While 'shoot' is a single motion, the other states use a path planner that generates a path to be followed for a pre-defined time period. The path planning includes obstacle (i.e. other robot) avoidance with a sufficient safety margin to prevent collisions, which also serves to evade opposition defenders when in 'attack'. As in the past, kick paths are generated using a discrete search tree to find paths that avoid opposition robots. Implementation-wise, the high-level behavior takes vision and localization data as inputs to compute plans, and passes velocity commands or kicking-motion triggers to the locomotion controller, which runs in a separate process.
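The state selection described above can be sketched as a small decision function. The four state names follow the text, but the possession/distance inputs and the numeric thresholds are illustrative assumptions, not the team's actual transition conditions.

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    ball_in_possession: bool   # the Boolean driving the top-level branch
    dist_to_goal: float        # metres from robot to opponent goal (assumed input)
    dist_to_ball: float        # metres from robot to ball (assumed input)

SHOOT_RANGE = 2.0              # assumed shooting-distance threshold
APPROACH_RANGE = 4.0           # assumed approach/contain boundary

def select_behaviour(w: WorldState) -> str:
    """Pick one of 'attack', 'shoot', 'approach', 'contain'."""
    if w.ball_in_possession:
        # With the ball: shoot when close enough, otherwise dribble forward.
        return "shoot" if w.dist_to_goal < SHOOT_RANGE else "attack"
    # Without the ball: chase it when near, otherwise fall back and contain.
    return "approach" if w.dist_to_ball < APPROACH_RANGE else "contain"

select_behaviour(WorldState(True, 1.5, 0.0))    # -> "shoot"
select_behaviour(WorldState(False, 6.0, 5.0))   # -> "contain"
```

In the architecture described, the selected state would then either trigger a kicking motion directly ('shoot') or hand a goal pose to the path planner, whose output becomes velocity commands for the separate locomotion process.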


List your previous participation (including rank) and contribution to the RoboCup community (release of open-source software, datasets, tools etc.)

Some members of Team RoMeLa have experience working in championship and Louis Vuitton Cup-winning teams with Team THORwIn, Team DARwIn, and Team CHARLI. In RoboCup 2014 and 2015, Team THORwIn won the AdultSize league. In 2011, 2012, and 2013, Team DARwIn won the KidSize league. In 2011 and 2012, Team CHARLI won the AdultSize competition. Dr. Dennis Hong is the PI of Team RoMeLa and the creator of DARwIn-OP, an open-source humanoid platform that is used by many teams and has inspired many new platforms in the competition. We hope that by participating with ARTEMIS, Team RoMeLa will again inspire future teams with our new approach based on proprioceptive actuators.


Please list RoboCup-related papers your team published in 2019.