An evolutionary robotics approach for robot swarm cooperation

One of the three considered learning environments, namely locomotion. In locomotion, agents learn how to navigate in the environment avoiding obstacles (dark rectangles) and other agents. Credit: Amine Boumaza.

Recombination, the rearrangement of genetic material as a result of mating or of combining segments of DNA from different organisms, has numerous evolutionary advantages. For instance, it allows organisms to purge deleterious mutations from their genomes and combine beneficial ones.

Amine Boumaza, a researcher at Université de Lorraine, has recently tried to apply this process to online embodied evolutionary robotics, an area of robotics that focuses on replicating theories of evolution in robots. In his paper, published in the Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '19), he developed a recombination operator inspired by this process and evaluated it on three tasks that require collaboration between multiple robots.

"My research falls in the broader subject of AI, and more specifically, understanding how we can design agents that can learn to complete interesting tasks," Boumaza said. "This research topic is not new, but rather old, and it got a lot of attention lately because of the impressive results of deep learning. In my case, I am more interested in swarm robotics, where the goal is to make a large number of small robots cooperate to solve a task and adapt to changes in their environment."

Fascinated by the evolutionary strategies, particularly recombination, that better equip living organisms to face the challenges of life, Boumaza set out to investigate whether similar mechanisms could be applied to robots. His hypothesis was that, if successfully replicated in robots, recombination would increase their performance and efficiency.

"When we talk about robotic agents, we generally assume a physical entity embodied into an environment (a vacuum cleaning in a room for example)," Boumaza said. "This agent perceives its surroundings using a set of sensors (obstacle sensors, camera, etc.), which can give it some kind of representation of its environment. The agent can also act in the environment using effectors (motors, arms, cleaning brush, etc.). These actions are the results of a computation that is the output of what we commonly call a (i.e. some kind of a decision program)."

One of the three considered learning environments, namely item collection. In item collection, agents must collect as many items (red dots) as possible. Credit: Amine Boumaza.

A controller is essentially a program that processes the perceptions acquired by a robot via its sensors and outputs commands to its effectors. In the case of a robotic vacuum cleaner, for instance, a controller would process information about its surroundings, detect whether there is dust in front of it, then produce outputs that will make the robot activate the vacuum and advance to hoover the dust.
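As a rough illustration, a controller can be thought of as a function from sensor readings to effector commands. The Python sketch below is purely illustrative; the sensor names, thresholds and commands are hypothetical and not taken from Boumaza's work.

```python
# A minimal, hypothetical controller sketch: sensor readings in, effector commands out.
# The sensor names and thresholds are illustrative, not from the paper.

def controller(front_distance: float, dust_detected: bool) -> dict:
    """Map perceptions to effector commands for a simple vacuum robot."""
    commands = {"left_motor": 0.0, "right_motor": 0.0, "vacuum_on": False}
    if front_distance < 0.2:          # obstacle ahead: turn in place
        commands["left_motor"] = -0.5
        commands["right_motor"] = 0.5
    else:                             # clear path: move forward and clean if dust is seen
        commands["left_motor"] = 1.0
        commands["right_motor"] = 1.0
        commands["vacuum_on"] = dust_detected
    return commands
```

In an evolutionary setting, the fixed rules above would typically be replaced by a parameterised function (for example a small neural network), and it is those parameters that the algorithm adjusts over time.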

"Taking a further step, we can also consider multiple agents that can evolve in the same environment," Boumaza said. "Designing controllers for each agent in such settings is very difficult problem for which no efficient technique exists yet. In this case, we can have few (e.g., 10 to 100) complex robots, or many very simple robots (e.g., hundreds) that interact in ways that are usually inspired from the behavior of insects; that is what we call ."

When developing a robot that can effectively complete a particular task, researchers need to design a controller tailored to that specific task. If the environment the robot is meant to operate in is simple, designing this controller can be fairly easy; most of the time, however, this is not the case.

This becomes even harder, if not impossible, when considering multiple robots interacting in a given environment. The main reason for this is that a human developer cannot possibly predict all the situations that each robot will encounter, as well as the most effective actions for tackling each of these situations. Fortunately, in recent years, advancements in machine learning have opened up interesting new possibilities for robotics research, allowing developers to incorporate tools that enable continuous learning, essentially training the controller to deal with numerous situations over time.

"One way to design a controller in such a fashion is to use evolutionary algorithms, which, loosely speaking, try to mimic the natural evolution of species to evolve robotic agent controllers," Boumaza said. "It is an iterative process where, as animals get better adapted to their environments, the controller gets better at solving a task. The goal is not to simulate natural evolution, but rather take some inspiration from it."

One of the three considered learning environments, namely foraging. In foraging, agents must collect items and carry them back to a nest (one of the two black circles). The green coloured floor is a pheromone trail that adds a sense of direction: it is highly concentrated at the nest locations and less concentrated farther away. Credit: Amine Boumaza.

Evolutionary robotics is merely one of the many techniques that researchers can use to design robot controllers. In recent years, however, evolutionary approaches have gained popularity, with a growing number of studies aimed at replicating evolutionary strategies observed in animals and humans.

"Evolutionary robotics has some advantages, such as that fact that we don't need to specify how to solve the task (it is discovered/learned by the algorithm), but merely need to specify a way to measure how well the is performed," Boumaza said. It also has some drawbacks, as it is a very slow and computationally intensive process, that can be very difficult to perform on real robots. In addition, these approaches are typically very sensitive to performance measures, as they condition the behavior learned by the agents."

Boumaza, like other researchers in the field, has been trying to develop new approaches to overcome the shortcomings of existing evolutionary robotics techniques. In his recent study, he specifically proposed the use of a new "mating operator" inspired by recombination, which can improve the convergence speed in robot simulations. This is a remarkable achievement, as it could ultimately reduce the time necessary to transfer an approach from simulations to real robots.

He applied his recombination operator to three collective robotics tasks: locomotion, item collection and item foraging. He then compared the performance achieved using a purely mutative version of his algorithm with that of different recombination operators. The results gathered in his experiments suggest that, when correctly designed, recombination strategies can in fact improve the adaptation of a swarm of robots in all of the tasks he considered.
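The paper's specific mating operator is not detailed here, but the kind of comparison involved can be pictured with generic, textbook operators over controller parameter vectors, as in the sketch below: two standard crossover variants against a mutation-only baseline.

```python
import random

# Illustrative recombination operators over controller parameter vectors.
# These are generic textbook operators, not necessarily the one proposed in the paper.

def uniform_crossover(parent_a, parent_b):
    """Pick each parameter from one of the two parents at random."""
    return [a if random.random() < 0.5 else b for a, b in zip(parent_a, parent_b)]

def intermediate_crossover(parent_a, parent_b):
    """Blend the two parents; the offspring lies between them in parameter space."""
    return [0.5 * (a + b) for a, b in zip(parent_a, parent_b)]

def mutation_only(parent, sigma=0.1):
    """Baseline with no mating: just Gaussian mutation of a single parent."""
    return [p + random.gauss(0, sigma) for p in parent]
```

In an online, embodied setting such as the one studied here, the "parents" would typically be genomes exchanged between nearby robots rather than individuals drawn from a central population.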

In the future, the new evolutionary robotics approach he proposed could be used to enhance the performance and adaptability of robots in tasks that require collaboration between multiple agents. In the meantime, however, Boumaza plans to test his algorithm on new tasks, to determine whether the improvement he observed in the three tasks he focused on still holds.

"It would also be interesting to check if my approach can be implemented on on real robots," Boumaza said. "Theoretically nothing prevents that, except having a great number of physical robots and accepting to deal with the 'reality gap' (i.e. what we see in simulation is usually not what would happen in reality, due to the simulation simplifications. Swarm robotics is all about numbers and a single robot's failures should not hinder the swarm. Ultimately, therefore, to ascertain the validity of this approach it has to be tested in reality, on physical robots."

More information: Amine Boumaza. When mating improves on-line collective robotics, Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '19), 2019. DOI: 10.1145/3321707.3321856

© 2019 Science X Network

Citation: An evolutionary robotics approach for robot swarm cooperation (2019, August 15) retrieved 29 March 2024 from https://techxplore.com/news/2019-08-evolutionary-robotics-approach-robot-swarm.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
