This paper proposes a strategy for transporting a large object to a goal using a large number of mobile robots that are significantly smaller than the object. The robots only push the object at positions where the direct line of sight to the goal is occluded by the object. This strategy is fully decentralized and requires neither explicit communication nor specific manipulation mechanisms. We prove that it can transport any convex object in a planar environment. We implement this strategy on the e-puck robotic platform and present systematic experiments with a group of 20 e-pucks transporting three objects of different shapes. The objects were successfully transported to the goal in 43 out of 45 trials. When using a mobile goal, teleoperated by a human, the object could be navigated through an environment with obstacles. We also tested the strategy in a 3-D environment using physics-based computer simulation. Due to its simplicity, the transport strategy is particularly suited for implementation on microscale robotic systems.
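The occlusion-based rule described above can be sketched as a per-robot decision. This is a minimal sketch; the function and state names are illustrative, not taken from the paper's implementation:

```python
def occlusion_transport_step(sees_goal, touching_object):
    """One control step of the occlusion-based transport rule.

    A robot pushes the object only when the object occludes its direct
    line of sight to the goal. Names and return values are illustrative.
    """
    if sees_goal:
        # Goal visible: this robot is not behind the object relative to
        # the goal, so it moves around the object's perimeter instead.
        return "move_around_object"
    elif touching_object:
        # Goal occluded and in contact: push the object towards the goal.
        return "push"
    else:
        # Goal occluded but not yet in contact: approach the object.
        return "approach_object"
```

Because each robot needs only its own line-of-sight reading, the rule is fully decentralized and needs no communication.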
We propose a coevolutionary approach for learning the behavior of animals, or agents, in collective groups. The approach requires a replica that resembles the animal under investigation in terms of appearance and behavioral capabilities. It is able to identify the rules that govern the animals in an autonomous manner. A population of candidate models, to be executed on the replica, competes against a population of classifiers. The replica is mixed into the group of animals, and all individuals are observed. The fitness of the classifiers depends solely on their ability to discriminate between the replica and the animals based on their motion over time. Conversely, the fitness of the models depends solely on their ability to ‘trick’ the classifiers into categorizing them as an animal. Our approach is metric-free in that it autonomously learns how to judge the resemblance of the models to the animals. It is shown in computer simulation that the system successfully learns the collective behaviors of aggregation and of object clustering. A quantitative analysis reveals that the evolved rules approximate those of the animals with good precision.
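The two-population fitness scheme can be sketched as a generation loop. All function and variable names here are illustrative placeholders for the components the abstract describes (a simulator producing motion tracks, classifiers judging tracks, and an evolutionary update), not the paper's implementation:

```python
def coevolve(models, classifiers, animal_tracks,
             simulate, classify, evolve, generations=100):
    """Schematic coevolution of models against classifiers.

    `simulate(m)` produces a motion track for model m; `classify(c, t)`
    is True if classifier c judges track t to come from an animal;
    `evolve(pop, fitness)` returns the next population. All illustrative.
    """
    for _ in range(generations):
        model_tracks = [simulate(m) for m in models]
        # Classifier fitness: ability to discriminate animals from models.
        c_fit = []
        for c in classifiers:
            true_pos = sum(classify(c, t) for t in animal_tracks) / len(animal_tracks)
            true_neg = sum(not classify(c, t) for t in model_tracks) / len(model_tracks)
            c_fit.append((true_pos + true_neg) / 2.0)
        # Model fitness: ability to be categorized as an animal.
        m_fit = [sum(classify(c, t) for c in classifiers) / len(classifiers)
                 for t in model_tracks]
        models = evolve(models, m_fit)
        classifiers = evolve(classifiers, c_fit)
    return models, classifiers
```

Note that neither fitness function refers to a hand-crafted similarity metric; resemblance is judged only through the classifiers, which is what makes the approach metric-free.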
This paper investigates a non-traditional sensing trade-off in swarm robotics: one in which each robot has a relatively long sensing range, but processes a minimal amount of information. Aggregation is used as a case study, where randomly placed robots are required to meet at a common location without using environmental cues. The binary sensor used only lets a robot know whether or not there is another robot in its direct line of sight. Simulation results with both a memoryless controller (reactive) and a controller with memory (recurrent) show that this sensor is sufficient to achieve error-free aggregation, as long as a sufficiently long sensing range is provided. The recurrent controller gave better results in simulation, and a post-evaluation with it shows that it is able to aggregate at least 1000 robots into a single cluster consistently. Simulation results also show that, with the recurrent controller, false negative noise on the sensor can speed up the aggregation process. The system has been implemented on 20 physical e-puck robots, and systematic experiments have been performed with both controllers: on average, 86–89% of the robots aggregated into a single cluster within 10 minutes.
This paper presents a solution to the problem of self-organized aggregation of embodied robots that requires no arithmetic computation. The robots have no memory and are equipped with one binary sensor, which informs them whether or not there is another robot in their line of sight. It is proven that the sensor needs to have a sufficiently long range; otherwise aggregation cannot be guaranteed, irrespective of the controller used. The optimal controller is found by performing a grid search over the space of all possible controllers. With this controller, robots rotate on the spot when they perceive another robot, and move backwards along a circular trajectory otherwise. This controller is proven to always aggregate two simultaneously moving robots in finite time, an upper bound for which is provided. Simulations show that the controller also aggregates at least 1000 robots into a single cluster consistently. Moreover, in 30 experiments with 40 physical e-puck robots, 98.6% of the robots aggregated into one cluster. The results obtained have profound implications for the implementation of multi-robot systems at scales where conventional approaches to sensing and information processing are no longer applicable.
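The controller described above maps the binary sensor reading directly to wheel speeds, so it can be written without memory or arithmetic. The behavior below follows the abstract's description; the specific speed values are illustrative, not the optimized parameters found by the grid search:

```python
def aggregation_controller(sees_robot):
    """Memoryless binary-sensor controller for self-organized aggregation.

    Returns (left_wheel, right_wheel) speeds, normalized to [-1, 1].
    Speed values are illustrative, chosen only to match the described
    behavior, not the paper's optimal controller.
    """
    if sees_robot:
        # Another robot in the line of sight: rotate on the spot
        # (wheels driven at equal and opposite speeds).
        return (1.0, -1.0)
    else:
        # Nothing in sight: move backwards along a circular trajectory
        # (unequal negative wheel speeds produce a backward arc).
        return (-0.7, -1.0)
```

Since the mapping is a fixed lookup from one bit of input to a pair of constants, its entire controller space is small enough to search exhaustively at a finite resolution, as the abstract notes.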
This thesis is concerned with the design and analysis of behaviors in swarm robotic systems using minimal information acquisition and processing. The motivation for this work is to help pave the way for the implementation of swarm robotic systems at physically small scales, which will open up new application domains for their operation. At these scales, the space and energy available for the integration of sensors and computational hardware within the individual robots are at a premium. As a result, trade-offs in performance can be justified if a task can be achieved in a more parsimonious way. A framework is developed whereby meaningful collective behaviors in swarms of robots can be shown to emerge without the robots, in principle, possessing any run-time memory or performing any arithmetic computations. This is achieved by the robots having only discrete-valued sensors and purely reactive controllers. Black-box search methods are used to automatically synthesize these controllers for desired collective behaviors. This framework is successfully applied to two canonical tasks in swarm robotics: self-organized aggregation of robots, and self-organized clustering of objects by robots. In the case of aggregation, the robots are equipped with one binary sensor, which informs them whether or not there is another robot in their line of sight. This makes the structure of the robots’ controller simple enough that its entire space can be systematically searched to locate the optimal controller (within a finite resolution). In the case of object clustering, the robots’ sensor is extended to have three states, distinguishing between robots, objects, and the background. This still requires no run-time memory or arithmetic computations on the part of the robots.
It is statistically shown that extending the sensor to three states leads to better performance than using a binary sensor that cannot distinguish between robots and objects, or between robots and the background.
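A purely reactive controller of the kind described is simply a lookup table from the discrete sensor state to a pair of wheel speeds, so no run-time memory or arithmetic is needed. The sketch below uses the three sensor states from the clustering task; the speed values are illustrative placeholders:

```python
# Lookup table from the ternary sensor state to (left, right) wheel speeds.
# The three states are those described in the thesis abstract; the speed
# values are illustrative, not the synthesized controllers.
CONTROLLER = {
    "background": (-0.5, -1.0),  # nothing in the line of sight
    "object":     (1.0,  0.5),   # an object is seen
    "robot":      (1.0, -1.0),   # another robot is seen
}

def react(sensor_state):
    """Map the current sensor reading directly to wheel speeds."""
    return CONTROLLER[sensor_state]
```

Because the controller is a finite table of constants, black-box search over desired collective behaviors reduces to searching over the table entries.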
This paper proposes a strategy for transporting a tall, and potentially heavy, object to a goal using a large number of miniature mobile robots. The robots move the object by pushing it. The direction in which the object moves is controlled by the way in which the robots distribute themselves around its perimeter: if the robots dynamically reallocate themselves around the section of the object's perimeter that occludes their view of the goal, the object will eventually be transported to the goal. This strategy is fully distributed, and makes no use of communication between the robots. A controller based on this strategy was implemented on a swarm of 12 physical e-puck robots, and a systematic experiment with 30 randomized trials was performed. The object was successfully transported to the goal in all the trials. On average, the path traced by the object was about 8.4% longer than the shortest possible path.
When a mixture of particles with different attributes undergoes vibration, a segregation pattern is often observed. For example, in muesli cereal packs, the largest particles—the Brazil nuts—tend to end up at the top. For this reason, the phenomenon is known as the Brazil nut effect. In previous research, an algorithm inspired by this effect was designed to produce segregation patterns in swarms of simulated agents that move on a horizontal plane. In this paper, we adapt this algorithm for implementation on robots with directional vision. We use the e-puck robot as a platform to test our implementation. In a swarm of e-pucks, different robots mimic disks of different sizes (larger than their physical dimensions). The motion of every robot is governed by a combination of three components: (i) attraction towards a point, which emulates the effect of a gravitational pull, (ii) random motion, which emulates the effect of vibration, and (iii) repulsion from nearby robots, which emulates the effect of collisions between disks. The algorithm does not require robots to discriminate between other robots; yet, it is capable of forming annular structures where the robots in each annulus represent disks of identical size. We report on a set of experiments performed with a group of 20 physical e-pucks. The results obtained in 100 trials of 20 minutes each show that the percentage of incorrectly ordered pairs of disks from different groups decreases as the size ratio of disks in different groups is increased. In our experiments, this percentage was, on average, below 0.5% for size ratios from 3.0 to 5.0. Moreover, for these size ratios, all segregation errors observed were due to mechanical failures that caused robots to stop moving.
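The three motion components can be sketched as a single velocity computation. The weights and the exact combination rule below are illustrative assumptions, not the paper's parameters:

```python
import math
import random

def brazil_nut_velocity(pos, center, neighbors, disk_radius,
                        w_attract=1.0, w_random=1.0, w_repel=2.0):
    """Combine the three motion components from the abstract.

    pos, center and neighbors are (x, y) tuples; disk_radius is the
    virtual disk size mimicked by this robot. The weights and combination
    rule are illustrative, not the paper's parameters.
    """
    # (i) attraction towards a point, emulating a gravitational pull
    ax, ay = center[0] - pos[0], center[1] - pos[1]
    norm = math.hypot(ax, ay) or 1.0
    vx, vy = w_attract * ax / norm, w_attract * ay / norm

    # (ii) random motion, emulating vibration
    theta = random.uniform(0.0, 2.0 * math.pi)
    vx += w_random * math.cos(theta)
    vy += w_random * math.sin(theta)

    # (iii) repulsion from nearby robots, emulating disk collisions
    for nx, ny in neighbors:
        dx, dy = pos[0] - nx, pos[1] - ny
        d = math.hypot(dx, dy)
        if 0.0 < d < disk_radius:
            vx += w_repel * dx / d
            vy += w_repel * dy / d
    return vx, vy
```

Because the repulsion term depends only on neighbor positions within the robot's own virtual disk radius, no robot needs to discriminate between other robots, consistent with the abstract.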
This letter highlights a fundamental inconsistency in the formulation of the Gravitational search algorithm (GSA) (Rashedi et al., Inf Sci 2232–48, 2009). GSA is said to be based on the law of gravity, that is, candidate solutions attract each other in the search space based on their relative distances and ‘masses’ (qualities). We show that, contrary to what is claimed, GSA does not take the distances between solutions into account, and therefore cannot be considered to be based on the law of gravity.
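The argument can be made concrete from GSA's force equation. This is a sketch of the letter's reasoning using the force law as commonly stated in the GSA literature, not a quotation:

```latex
% Force on solution i from solution j along dimension d:
F_{ij}^{d}(t) = G(t)\,\frac{M_{pi}(t)\,M_{aj}(t)}{R_{ij}(t)+\varepsilon}\,
\bigl(x_{j}^{d}(t)-x_{i}^{d}(t)\bigr)
% Since the magnitude of (x_j - x_i) equals the Euclidean distance R_ij,
% the force magnitude is
\lVert F_{ij}(t)\rVert = G(t)\,M_{pi}(t)\,M_{aj}(t)\,
\frac{R_{ij}(t)}{R_{ij}(t)+\varepsilon}
\approx G(t)\,M_{pi}(t)\,M_{aj}(t)
\quad \text{for } R_{ij}(t) \gg \varepsilon
```

The distance in the numerator effectively cancels the distance in the denominator, so the force magnitude is essentially independent of how far apart the two solutions are, which is the inconsistency with the law of gravity.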