The DARwIn-OP humanoid soccer-playing robot may look like a toy, but is a platform for groundbreaking artificial intelligence research. David Budden

Robots will be FIFA champions – if they keep their eyes on the ball

We already know robots can manufacture cars, work in factories and even vacuum our homes - but could they form a world-beating soccer team?

The question seems like ripe pickings for a movie mogul, given Hollywood’s long-standing fascination with robots. Some movies portray a utopian picture of the future, where robots and humans live in perfect symbiosis; robots selflessly perform the mundane tasks required by their human masters, leaving society free to indulge in more rewarding activities. Others portray quite the opposite.

But in the real world it is undeniable that we are in the middle of a robotics revolution.

Soccer robots?

In case you haven’t heard of it, RoboCup is an international robotics competition formed in 1997, with the official aim that:

by mid-21st century, a team of fully autonomous humanoid soccer players shall win the soccer game, complying with the official rule of the FIFA, against the winner of the most recent World Cup.

Although this aim remains unchanged, RoboCup now includes a wider variety of robotics challenges.

Each of these challenges consists of a number of individual competitions addressing different aspects of the overarching RoboCup goals. These include both low-level hardware issues (such as creating life-size robots capable of walking or running like a human), and high-level behaviour issues (how to make a robot strategise and cooperate in a team environment).

Some RoboCup competitions remove the restrictions of physical hardware (such as cameras, sensors and motors), allowing complex team strategies to be developed and tested via simulation (this is the focus of my current research with the CSIRO ICT Centre).

Others encourage the complete development of complex humanoid robots, ranging anywhere from 30cm in height to the size of an adult human.

The RoboCup soccer games work in much the same way as a regular kickabout, except the human players are replaced with robots. Teams of four from different nations around the world compete to reach the finals and become the champion.

These robots are completely autonomous - not controlled by humans at all. This means they have to be programmed to carry out the many different functions needed to be successful on the field, including movement, kicking, recovering from a fall, and recognising the ball and other players.

Different size leagues exist, with the eventual goal that these robots will become technologically advanced enough to face humans.

But during my three years with the University of Newcastle’s NUbots RoboCup team, my focus was something different – the development of systems and algorithms for computer vision.

Computer vision

A computer vision system (at least in the context of RoboCup) involves two main steps: object detection, and working out where the detected object sits in the environment. An example could be a robot seeing a soccer ball (object detection), then determining exactly where it is in relation to the field of play (object localisation).

As the name suggests, object detection involves the processing of the robot’s vision stream (a set of images arriving from the camera at 30 frames a second), and searching every frame for the presence of any salient features.

In a typical RoboCup scenario, these salient features may include: the ball, goal posts, landmark beacons, field lines, penalty marks, the centre circle, other robots (both teammates and competition), and any miscellaneous obstacles (such as the legs of a referee).

The robot then knows a number of objects are around it - specifically their pixel coordinates, and any information specifying their orientation and size.
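
As an illustration only, the output of the detection stage can be thought of as a list of records like the following. The field names here are hypothetical, not those of any particular RoboCup codebase - the point is simply that everything at this stage is still expressed in image (pixel) space:

```python
from typing import NamedTuple, Optional

class Detection(NamedTuple):
    """What the detection stage reports for one salient feature.
    All values are in image (pixel) space - no 3D information yet."""
    label: str                  # e.g. "ball", "goal_post", "field_line"
    px: float                   # pixel column of the object's centre
    py: float                   # pixel row of the object's centre
    width_px: float             # apparent size in pixels
    orientation: Optional[float] = None  # radians, where meaningful (e.g. lines)

# One hypothetical frame's worth of detections
ball = Detection("ball", 412.0, 305.5, 38.0)
```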

How a robot sees the world. Correctly detected balls are indicated by a blue circle, for both the simple unobstructed case (left), and the obscured case - something is in front of the ball (right). David Budden

In terms of “object localisation”, it may be all well and good for a robot to know the pixel coordinates of an object – but unfortunately, the robot doesn’t actually reside in a 2D image plane.

In order to interact with its environment, the vision system must therefore contain methods of object localisation – the ability to project the pixel coordinates produced by object detection into field coordinates - where the object exists in 3D space.

In this stage, the robot learns the physical position and orientation of a set of objects relative to itself. With this information, the robot can choose the correct action - whether that be to kick the ball, dive, or take up a defensive position.
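
Under simplifying assumptions - a pinhole camera at a known height and pitch, and an object lying on a flat ground plane - the projection from pixel coordinates to robot-relative field coordinates reduces to basic trigonometry. The sketch below is illustrative only; a real system would also account for the robot's full kinematic chain and lens distortion:

```python
import math

def pixel_to_field(px, py, img_w, img_h, fov_h, fov_v, cam_height, cam_pitch):
    """Project a pixel (assumed to lie on the flat ground plane) into
    robot-relative field coordinates (forward, left), in metres.
    fov_h/fov_v are the camera's fields of view; all angles in radians."""
    # Angular offset of the pixel from the optical axis
    yaw = (0.5 - px / img_w) * fov_h                 # left of centre is positive
    pitch = cam_pitch + (py / img_h - 0.5) * fov_v   # below the horizon is positive
    if pitch <= 0:
        return None  # ray points at or above the horizon: never hits the ground
    distance = cam_height / math.tan(pitch)          # flat-ground intersection
    return (distance * math.cos(yaw), distance * math.sin(yaw))
```

For example, a camera half a metre off the ground, pitched so its optical axis meets the ground one metre ahead, should localise an object at the image centre one metre in front of the robot.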

These two steps may seem straightforward, but there are a lot of complications.

One issue is computational efficiency. A single image may contain as many as two million pixels, and must be searched for every possible object in roughly 33 milliseconds (to maintain a frame rate of 30 frames per second, allowing the robot to remain responsive to quick soccer events).
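
One common way to stay inside that budget is to avoid touching every pixel at all: scan the frame on a coarse grid, then examine dense neighbourhoods only around promising hits. A minimal sketch of the subsampling idea (the step size is an arbitrary choice for illustration):

```python
def scan_pixels(img_h, img_w, step=4):
    """Visit pixels on a coarse grid. At step=4 this inspects only 1/16 of
    the frame, trading spatial resolution for a 16x cut in per-frame work."""
    for y in range(0, img_h, step):
        for x in range(0, img_w, step):
            yield y, x
```

A detector built on this idea follows each coarse-grid hit with a fine-grained search of its immediate neighbourhood, so small objects are still found despite the sparse initial pass.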

Another issue is how to write algorithms to deal with the notion of colour. As humans, we’re used to dealing with high-level concepts such as “red” or “green” in our everyday lives.

A robot just sees a pixel as a set of numbers, with each pixel taking one of 16.8 million possible colour values. How can we convert easily (and efficiently) between these two models? This is an especially important question in RoboCup, where features are traditionally colour-coded.
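
A standard answer is a pre-computed lookup table: quantise the colour space once, label each cell with a colour class, and per-pixel classification at runtime becomes a single array lookup. The class names and colour-region bounds below are invented for illustration, not taken from any real RoboCup system:

```python
import numpy as np

# Hypothetical colour classes, for illustration only
UNCLASSIFIED, BALL_ORANGE, FIELD_GREEN = 0, 1, 2

# Quantising each 8-bit channel to 5 bits shrinks the 16.8 million possible
# colours to a 32 x 32 x 32 table of just 32,768 entries.
lut = np.zeros((32, 32, 32), dtype=np.uint8)
lut[25:32, 8:20, 0:10] = BALL_ORANGE   # rough "orange" region (assumed bounds)
lut[0:12, 16:32, 0:12] = FIELD_GREEN   # rough "green" region (assumed bounds)

def classify(image):
    """Map an (H, W, 3) uint8 RGB image to per-pixel colour classes with
    three bit-shifts and one table lookup - no per-pixel branching."""
    r = image[..., 0] >> 3
    g = image[..., 1] >> 3
    b = image[..., 2] >> 3
    return lut[r, g, b]
```

In practice the table is filled from hand-labelled training images rather than hard-coded ranges, but the runtime cost is the same either way.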

Robotics takes you places. The University of Newcastle’s world champion RoboCup team - the NUbots - at Teotihuacan, Mexico. David Budden

Taking a step back

In my recent paper (awarded “best student paper” at the 25th Australasian Joint Conference on Artificial Intelligence), I address a number of these issues – specifically in the context of ball detection.

The system takes a step back from common ball detection methods, which use the knowledge that a ball will appear circular in the image a robot sees.

This makes sense conceptually, but assuming a ball will appear circular in an image is error-prone once noise creeps into the signal the robot receives (from lens distortion or motion blur, for example). These sorts of algorithms are also relatively inefficient.

Instead, domain specific knowledge (or the known features of the environment the robot will operate in - in this case, things like the colour of the ball and goal posts) is integrated into a straightforward series of steps: the application of machine learning algorithms to locate candidates for balls, followed by refinement via basic trigonometric operations. In other words, if an object on the field is the size and colour of a ball, it must be a ball - independent of shape.
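
The size check in that argument can be sketched with basic trigonometry: given an estimated distance, a pinhole-camera model predicts how large a real ball should appear on screen, and a ball-coloured candidate is accepted if its apparent size matches. All parameter values here (ball radius, field of view, tolerance) are illustrative assumptions, not the paper's actual figures:

```python
import math

def expected_pixel_radius(distance, ball_radius, img_w, fov_h):
    """Expected on-screen radius (pixels) of a ball at a given distance (m),
    for a pinhole camera with horizontal field of view fov_h (radians)."""
    angular_radius = math.atan(ball_radius / distance)
    return angular_radius / fov_h * img_w

def is_ball(candidate_radius_px, distance, ball_radius=0.065,
            img_w=640, fov_h=1.0, tolerance=0.35):
    """Accept a ball-coloured candidate if its apparent size is close to the
    size a real ball would have at that distance - shape is never tested,
    so partial occlusion does not break the check."""
    expected = expected_pixel_radius(distance, ball_radius, img_w, fov_h)
    return abs(candidate_radius_px - expected) <= tolerance * expected
```

Because the test never asks "is this region circular?", an orange blob half-hidden behind a goalkeeper's leg passes just as readily as a fully visible ball.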

How does this system perform? Quite well! In addition to tolerating the majority of the ball being hidden behind other objects, the algorithm is twice as accurate at detecting the ball (even when it is partially hidden), and yields a 300-fold decrease in execution time relative to comparable methods.

This is just one example of science favouring simplicity, and the benefits of “thinking outside the square” (or circle, in this case), rather than accepting the common textbook methodology for solving a well-known problem.

In their own way, advances in robotics are contributing to the enhancement of the beautiful game.
