
Swarm-like Methodologies for Executing Tasks with Deadlines

2012, Journal of Intelligent & Robotic Systems: Theory and Applications


J Intell Robot Syst (2012) 68:3–19
DOI 10.1007/s10846-012-9666-9

Swarm-like Methodologies for Executing Tasks with Deadlines

José Guerrero · Gabriel Oliver

Received: 8 March 2011 / Accepted: 1 March 2012 / Published online: 15 March 2012
© Springer Science+Business Media B.V. 2012

Abstract Very few studies have been carried out to test multi-robot task allocation swarm algorithms in real-time systems, where each task must be executed before a deadline. This paper presents a comparative study of several swarm-like algorithms and auction-based methods for this kind of scenario. Moreover, a new paradigm, called pseudo-probabilistic swarm-like, is proposed, which merges characteristics of deterministic and probabilistic classical swarm approaches. Although this new paradigm cannot be classified as swarming, it is closely related to swarm methods. Pseudo-probabilistic swarm-like algorithms can reduce the interference between robots and are particularly suitable for real-time environments. This work presents two pseudo-probabilistic swarm-like algorithms: distance pseudo-probabilistic and robot pseudo-probabilistic. The experimental results show that the pseudo-probabilistic swarm-like methods significantly improve the number of tasks finished before a deadline compared to classical swarm algorithms. Furthermore, a very simple but effective learning algorithm has been implemented to fit the parameters of these new methods. To verify the results, a foraging task has been used under different configurations.

This work has been partially supported by project DPI2008-06548-C03-02 and FEDER funding.

J. Guerrero (B) · G. Oliver
Departament de Matemàtiques i Informàtica, Universitat de les Illes Baleares, Cra. de Valldemossa, Km. 7.5, 07122, Palma (Balears), Spain
e-mail: jose.guerrero@uib.es
e-mail: goliver@uib.es
Keywords Multi-robot · Task allocation · Swarm-like · Pseudo-random swarm · Learning

1 Introduction

Multi-robot systems can provide several advantages compared to single-robot systems, for example robustness, flexibility and efficiency. To make the most of these potential benefits, some problems have to be solved, especially in scenarios with real-time constraints. Of all the issues reported in the specialized literature, this paper focuses on the methods to select the best robot to execute a task, which is commonly referred to as the 'Multi-Robot Task Allocation' (MRTA) problem. Moreover, special attention is paid to tasks that have to be fulfilled before a deadline. Two main paradigms have been proposed in recent years to manage task allocation: swarm and auction methods. On the one hand, swarm systems are inspired by the behavior of insect colonies, such as bees or ants, where a global action emerges from the interaction between very simple entities. On the other hand, auction methods are based on negotiation processes between robots that, in most cases, require complex communication skills. It is generally accepted that auction systems provide better results than swarm methods in terms of the number of tasks executed, as has been shown in [16]. Little work has been done to analyze how well current swarm systems fit real-time scenarios, that is, missions where the tasks must be executed before a deadline. The interference between robots, produced when two or more of them select the same task to execute, is one of the main problems of swarm systems. This paper proposes a new approach closely related to swarm solutions, called pseudo-probabilistic swarm-like (PSW), to reduce this interference. First, in the PSW approach each robot probabilistically selects a set of tasks of interest; then it selects the best of these tasks.
Hence, this is neither a classical probabilistic response threshold [3] nor a deterministic algorithm [1], where each robot always selects the best task from its point of view. This is the first hybrid methodology that combines characteristics from both current swarm approaches. As will be explained later, PSW requires a very simple central agent to carry out some tasks, so it cannot be classified as swarming; but, as the classical swarm approaches do, PSW works in a distributed way and each robot selects the next task to execute by itself. Hence, the PSW algorithms are not called swarm but swarm-like approaches. Two PSW algorithms have been implemented: distance PSW (PSW-D) and robot PSW (PSW-R). With the PSW-D algorithm, each robot uses the classical response threshold probability, already implemented in other swarm methods, to preselect the tasks. Thus, it does not require any communication between robots. By contrast, the probabilistic initial selection in PSW-R needs the positions of the robots, so each robot has to broadcast its position to the others. Then, both pseudo-random swarm-like methods choose the best task among the pre-selected ones as the next one to execute. Finally, to fit the parameters of the PSW-R algorithm, a new and very simple learning algorithm is presented. The experiments and simulations carried out show that the new methods proposed here increase the number of tasks that meet their deadline compared to the classical probabilistic and deterministic swarm algorithms. This paper also compares the swarm-like algorithms' results to the performance of three auction strategies: Sequential Unordered Auction (SUA), Earliest Deadline First Auction (EDFA) and Sequential Best Pair Auction (SBPA). The auction algorithms require complex communication mechanisms and need greater computational capabilities than the swarm methods.
The experimental results prove that the auction approaches, compared to the pseudo-swarm algorithms, reduce the total length traveled by the robots, but there are no significant differences in the total number of finished tasks when no deadline is used. In scenarios with time restrictions, the benefits provided by the auctions are greater, but the pseudo-swarm methods are still better than classical swarm algorithms. Thus, this study builds on both our previous work [14] and other authors' work [16] with new scenarios and MRTA approaches. A classical foraging task has been used to verify our methods. In this mission, each object has to be gathered before a specific deadline. The performance measures used to compare the systems are the number of tasks finished before their deadline and the total path traveled by the robots. A simulator developed by the authors and the well-known and widely accepted Player/Stage framework have been used to execute all the simulations. Most of the experiments involving five or fewer robots have been partially reproduced with real mobile robots to guarantee the reliability of the simulations. The rest of this paper is organized as follows: Section 2 reviews the relevant work in MRTA, with special attention paid to swarm and auction methods; Section 3 presents a formal definition of the problem to solve and details the real-time foraging task; Sections 4 and 5 explain the MRTA algorithms implemented, with special attention paid to the pseudo-swarm approaches. The experimental results are shown and analyzed in Section 6 and, finally, the conclusions and future work are presented in Section 7.

2 Previous Work

Many approaches have been proposed for solving the multi-robot task allocation problem. Some of them are based on centralized paradigms, that is, all the information is sent to a central agent which also makes all the decisions by executing optimization algorithms [17, 21, 23, 26].
The solution given by centralized methods can be near the optimal assignment, but they have several problems: a single point of failure, a very high computational complexity that makes them unsuitable for dynamic environments, and heavy communication requirements. Thus, nowadays, swarm intelligence and auction-based methods are the most widely used MRTA methodologies. Swarm methods are inspired by the behaviour of insect colonies, such as bees or ants, where a global action emerges from the interaction between very simple entities. In general, swarm systems do not need communication protocols to coordinate the robots, but the complexity of the tasks they can carry out is strongly limited. One of the most commonly used swarm approaches is the response threshold, where each robot has a stimulus associated with each task it has to execute. Some response threshold systems, such as [3, 25], use the stimulus and the threshold value to calculate the probability of executing a task; that is, they are probabilistic or non-deterministic algorithms. Other authors, such as [1, 19], apply the response threshold concepts together with a deterministic selection of the task, that is, when the stimulus exceeds the threshold, the robot immediately starts the execution of that task. As far as we know, there is no mechanism that combines both approaches, deterministic and probabilistic response threshold. Besides, very little work has been done to test swarm systems in real-time scenarios where the tasks must be executed before a deadline. An exception is the study by Oliveira et al. [8], where a set of agents must execute tasks with deadlines. This approach has not been tested on robots and requires setting a lot of parameters. In [9], Acebo and de la Rosa allow tasks with real-time restrictions, but their system needs some very complex communication mechanisms, much more complex than the approaches proposed in our study.
The auction algorithms [10, 11] are based on an explicit communication protocol to coordinate the robots' actions. In this kind of system, the robots act as selfish agents bidding for the tasks. The bids are adjusted to the robots' capacity to carry out the goal. Then, the robot with the highest bid, that is, the best robot, wins the auction process and gets the task. Thus, the auction systems provide better solutions than swarm approaches, but also require higher communication and computational capabilities. As with the swarm systems, few auction approaches, such as [5, 15, 18, 20], can carry out tasks with deadlines, and none of them compare their results to swarm approaches. Such a comparative study is made in [16] by Kalra and Martinoli, where the deterministic response threshold methods always outperformed the probabilistic swarm approaches. This paper will show that a combination of probabilistic and deterministic selection improves both established methods.

3 General Task Description

In this section, we formalize the task allocation problem sketched earlier and explain the main difficulties it presents. We have a set of tasks T = {t1, t2, ..., tm} and a set of robots R = {r1, r2, ..., rn}, where in general n ≠ m. An allocation can be represented by a set TA = {C0, ..., Cn′}, where Ci = {ri, T^i}, T^i ⊆ T is the set of tasks assigned to robot ri, and T^0 is the subset of tasks without an assigned robot. The robots without any assigned task do not appear in TA, therefore n′ ≤ n. Moreover, each task ti has to be finished before a specific time, DLi, called its deadline. A valid assignment TA has to verify the following conditions:

– T^0 ∪ T^1 ∪ ... ∪ T^n′ = T.
– T^i ∩ T^j = ∅ ∀ i ≠ j. That is, each task can only be assigned to one robot.
– If j > 0, all tasks in T^j have to meet their deadlines.

Besides, in this study each robot can only execute one task, therefore |T^i| = 1 ∀ i > 0.
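The validity conditions above can be checked mechanically. The following is a minimal sketch (our illustration, not code from the paper), representing each T^i as a Python set, with index 0 holding the unassigned subset T^0; all names and the finish_times structure are illustrative assumptions:

```python
def is_valid_allocation(task_sets, all_tasks, deadlines, finish_times):
    """task_sets: list of task-id sets [T^0, T^1, ..., T^n']; index 0 is unassigned.
    finish_times[t]: predicted completion time of an assigned task t."""
    # Condition 1: the union of all T^i must cover T exactly.
    union = set().union(*task_sets)
    if union != set(all_tasks):
        return False
    # Condition 2: the T^i must be pairwise disjoint (one robot per task).
    if sum(len(s) for s in task_sets) != len(union):
        return False
    # Condition 3: every assigned task must meet its deadline; here each
    # robot executes exactly one task, so |T^i| = 1 for i > 0.
    for s in task_sets[1:]:
        if len(s) != 1:
            return False
        (t,) = s
        if finish_times[t] > deadlines[t]:
            return False
    return True
```

The pairwise-disjointness test exploits that, if no task is repeated, the sizes of the subsets must sum to the size of their union.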
In our previous paper [14], this restriction was not stated, but PSW methods were not studied there. To validate the algorithms, the classical foraging-like task has been used. This task is defined as follows: some randomly placed robots have to pick up some objects randomly placed in the environment. For all the strategies, there is a central agent that acts as the user and receives the information about all the tasks in the environment. For the auction approaches, this central agent also acts as the auctioneer and decides which robot has to execute each task. For the swarm approaches, the central agent only broadcasts the information about the tasks to all the robots. Thus, each robot has a list with all the information (position and deadline) about all the available tasks. A task is only added to the list if the robot can carry it out before its deadline; otherwise, the task is rejected. To know the time required to execute a task, the robots use the distance to the task and their kinematic characteristics. The central agent also advises all the robots when a task has been finished by any of them, and therefore when a robot has to delete a task from its list. After finishing a task, or when a new task is received, the robots without an assigned task select the next one to execute from their list. This selection depends on the specific swarm-like task allocation method used. Hence, with the swarm methods the central agent does not make any decision about the allocation and, after the central agent sends the information about the tasks, the process is decentralized. Note that, despite the central agent, the PSW methods still meet the distributed characteristic of swarm approaches. The central agent must be seen as a user who decides when a task is finished or provides new tasks to the system, but it does not make any decision about how to assign the tasks.
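The per-robot filtering described above (accept a task only if it can be finished before its deadline) can be sketched as follows. This is our illustration, not the paper's code; a constant speed stands in for the robot's "kinematic characteristics", and all names are assumptions:

```python
import math

def can_meet_deadline(robot_pos, task_pos, deadline, now, speed=1.0):
    """True if the robot could reach the task before its deadline,
    estimating travel time from distance and a constant speed."""
    dist = math.hypot(task_pos[0] - robot_pos[0], task_pos[1] - robot_pos[1])
    return now + dist / speed <= deadline

def accept_tasks(robot_pos, announced, now, speed=1.0):
    """announced: iterable of (task_id, position, deadline) tuples broadcast
    by the central agent. Returns the robot's local task list."""
    return [t for t in announced
            if can_meet_deadline(robot_pos, t[1], t[2], now, speed)]
```

A robot at (0, 0) with speed 1 would keep a task at (3, 4) with deadline 10, but reject one at (30, 40) with the same deadline.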
In a more realistic scenario, this agent could also be a sensor which detects the objects to gather. Moreover, even if the central agent has all the information about the robots' positions and tasks, the decision-making process would still be distributed and would run in parallel on all robots. Other swarm methods, such as [2], also assume the existence of some degree of centralized knowledge about the tasks to do. Finally, the communication protocol is simplified thanks to the User Datagram Protocol (UDP), which allows a robot to start the process without needing to establish communication in the first place.

4 Swarm Task Allocation Methods

In this section we explain the swarm-like task allocation methods used to carry out the mission described earlier. As a baseline, two classical swarm algorithms will be presented: nearest first swarm (NFS) and probabilistic response threshold (RTH). Then, we explain the new pseudo-random swarm-like methods, PSW-D and PSW-R, together with an adaptive/learning algorithm to fit their parameters.

4.1 Nearest First Swarm

With the nearest first swarm strategy (NFS), each robot selects the nearest available task, that is, the available task that minimizes the distance to the robot; therefore NFS is a deterministic algorithm. To select a task, the distance between the robot and the task must be lower than a value D, in order to avoid assigning a robot to a very distant task. This is the same parameter already used by other authors in [16] without deadlines. This is a very simple method that does not require great communication capabilities from the robots, because they do not have to send any message and only need to receive the information from the central agent. This simplicity causes several problems, the main one being the interference between robots. In this context, we state that two or more robots interfere with each other if they select the same task to execute.
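The NFS rule can be sketched in a few lines (an illustration under the same assumptions as before, not the authors' code): pick the nearest task, but only if it lies within distance D.

```python
import math

def nfs_select(robot_pos, tasks, D):
    """tasks: list of (task_id, (x, y)). Returns the nearest task within
    distance D, or None if no task is close enough."""
    best, best_dist = None, D
    for task_id, (x, y) in tasks:
        d = math.hypot(x - robot_pos[0], y - robot_pos[1])
        if d <= best_dist:        # within D and nearer than the best so far
            best, best_dist = task_id, d
    return best
```

Because every robot runs the same deterministic rule on the same task list, nearby robots pick the same task, which is exactly the interference discussed above.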
Thus, the more interference there is, the more time is needed to execute the tasks. The complexity of this selection algorithm is O(m), where m is the number of tasks. The complexity of the communication system is also O(m), because the central agent has to send the information of the m tasks to all the robots.

4.2 Probabilistic Response Threshold

This probabilistic method is based on the classical response threshold algorithms [1, 25] (RTH). In this method, given a robot r and a task t, a stimulus sr,t is defined that represents how suitable t is for r. When sr,t exceeds a given threshold (θr), the robot r starts to execute the task t. To avoid relying on the threshold value to an excessive degree, the task selection is usually non-deterministic [1, 3]. Thus, a robot will select a task to execute with a probability Pr,t equal to:

Pr,t = sr,t^n / (sr,t^n + θr^n)    (1)

As can be seen, if sr,t = θr, the probability of executing a task is equal to 0.5. Figure 1 shows the values of Eq. 1 as a function of sr,t for several values of the exponent n and θr = 50. In the experimental phase of this work, and to reproduce the conditions used by other authors [16, 25], n will always be equal to 2. Algorithm 1 shows the implementation of the RTH used in this paper, where random(0..1) is a function that returns a random value between 0 and 1. The computational complexity of the RTH algorithm is O(m), where m is the number of tasks. Although it has a complexity similar to that of the NFS algorithm, if a robot eventually selects a task, the number of iterations can be much lower.

Algorithm 1 Probabilistic response threshold (RTH) for the robot r
Require: T = list of unassigned tasks
1: for all t ∈ T do
2:   if Pr,t > random(0..1) then
3:     return t {Start to execute task t}
4:   end if
5: end for
6: return null

[Fig. 1: Pr,t values (Eq. 1) as a function of the stimulus sr,t, for n = 1, 2, 3, 4 and 8, with θr = 50.]

4.3 Pseudo-random Swarm-like Algorithms

This section explains the new pseudo-random swarm-like approach proposed in this paper. As already stated, PSW algorithms combine the characteristics of the deterministic swarm algorithm with the features of the probabilistic response threshold. Hence, PSW combines both the NFS and the RTH algorithms explained above. In this paper, two PSW methods have been implemented: distance PSW (PSW-D) and robot PSW (PSW-R). These new methods have been introduced because NFS produces a lot of interference between robots, since all of them execute the same task allocation algorithm and do not have any information about the decisions made by the other robots. The classical RTH algorithm reduces the NFS interference level thanks to its pseudo-random probability Pr,t (see Eq. 1). For example, if two robots (r1, r2) are located at the same point (or very near each other) and execute NFS, both will try to carry out the same task. On the other hand, if RTH is used, the probability of both selecting the same task will be Pr,t^2, and therefore the conflicts will be reduced. The use of this random approach also means that there is a non-null likelihood of selecting a very bad task from the robot's point of view. The PSW algorithm solves this problem by discarding very distant tasks thanks to lines 4–6 of Algorithm 2. Thus, PSW can solve the conflicts due to the execution of NFS and ensures that the selected task is good enough for the robot.

In all PSW methods, each robot r selects, in a pseudo-random way, a subset of tasks. Then, among the pre-selected tasks, the nearest one to the robot is finally selected. Algorithm 2 shows the generic PSW process, where line 3 is similar to the RTH selection and lines 4 to 6 are the deterministic task selection. Thus, if m is the number of tasks, the computational complexity of this algorithm is equal to O(m).
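Before detailing the two PSW variants, the RTH selection (Eq. 1 with Algorithm 1) can be sketched as follows. This is an illustrative rendering, not the paper's code; the stimulus function is left to the caller, and the rng parameter is an assumption added to make the sketch testable:

```python
import random

def p_response(stimulus, theta, n=2):
    """Eq. 1: P = s^n / (s^n + theta^n); equals 0.5 when s == theta."""
    return stimulus**n / (stimulus**n + theta**n)

def rth_select(tasks, stimulus_of, theta, n=2, rng=random):
    """Algorithm 1: offer each task with probability P_{r,t}; start the
    first accepted task. tasks: task ids; stimulus_of(t) -> s_{r,t}."""
    for t in tasks:
        if p_response(stimulus_of(t), theta, n) > rng.random():
            return t            # start to execute task t
    return None                 # no task accepted in this round
```

Note that the loop can return before scanning all m tasks, which is why the expected number of iterations can be much lower than for NFS.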
Depending on how the probability Pr,t is calculated, the algorithm implements PSW-D or PSW-R.

Algorithm 2 PSW algorithm for the robot r
Require: T = list of unassigned tasks
1: tbest ← null
2: for all t ∈ T do
3:   if Pr,t > random(0..1) then
4:     if t is nearer than tbest then
5:       tbest ← t
6:     end if
7:   end if
8: end for
9: return tbest

On the one hand, the PSW-D method calculates the Pr,t value following the response threshold (see Eq. 1). In this case, the stimulus sr,t is equal to the inverse of the distance between the task t and the robot r. On the other hand, the PSW-R method uses the number of robots to make this decision. Thus, the probability of selecting a task is:

Pr,t = 1 / Nar    (2)

where Nar is the number of robots around robot r; the limit of the area within which a robot is considered 'around' is the parameter A. Therefore, Pr,t represents the probability of interference between two or more robots. The lower the value of A, the closer the PSW-R behaviour is to NFS, so PSW-R and NFS give the same results when A = 0 and NFS's parameter D = ∞. In PSW-R, unlike the PSW-D method, the robots have to broadcast their locations and identifiers. Despite this, the communication mechanism used is very simple and does not need complex protocols. Moreover, as has been proved during the experiments (see Section 6), the A parameter shows a very stable behavior over all tested configurations. To fit the A value to the specific characteristics of the tasks and the environment, the robots can execute Algorithm 3, where α is the learning factor, with a value between 0 and 1. As can be seen, when a robot is idle and receives a message from the central agent that a task has not been executed before its deadline, the new area value Ai+1 is decreased to (1 − α)Ai. Thus, the likelihood of selecting a task will be greater in the future. If the robot is executing a task and detects an interference with another robot, the area A is increased, and the probability of executing a new task decreases.
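The generic PSW selection and the adaptive A update just described can be sketched as below (our illustration, not the paper's code). The prob callback decides the variant: with Eq. 1 on the inverse distance it behaves like PSW-D; with 1/Nar (Eq. 2) it behaves like PSW-R. The rng parameter and all names are assumptions:

```python
import math
import random

def psw_select(robot_pos, tasks, prob, rng=random):
    """Algorithm 2: probabilistic pre-selection (line 3), then keep the
    nearest pre-selected task (lines 4-6). tasks: (task_id, (x, y))."""
    best, best_dist = None, math.inf
    for task_id, (x, y) in tasks:
        if prob(task_id) > rng.random():          # pre-selection, as in RTH
            d = math.hypot(x - robot_pos[0], y - robot_pos[1])
            if d < best_dist:                     # deterministic: keep nearest
                best, best_dist = task_id, d
    return best

def update_area(A, alpha, idle_missed_deadline, interference_detected):
    """One step of the A-update rule for PSW-R (0 < alpha < 1)."""
    if idle_missed_deadline:
        return (1 - alpha) * A    # shrink A: select tasks more eagerly
    if interference_detected:
        return (1 + alpha) * A    # grow A: lower the selection probability
    return A
```

Unlike Algorithm 1, this loop always scans all m tasks, since the nearest pre-selected task is only known at the end.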
When a task finishes before its deadline, the central agent also notifies this event to all the other robots. The way a robot detects an interference depends on the task's characteristics. For example, in a foraging mission, if a robot has not yet reached its assigned object but the central agent notifies that this object has already been gathered, then this robot knows that another robot was assigned to the same task, and therefore it detects the interference.

Algorithm 3 Algorithm to fit the A value
1: if robot idle and deadline not met then
2:   Ai+1 ← (1 − α)Ai
3: else
4:   if interference detected then
5:     Ai+1 ← (1 + α)Ai
6:   else
7:     Ai+1 ← Ai
8:   end if
9: end if

5 Auction Task Allocation Methods

As an adequate baseline to compare with, three recent and commonly used auction methods have been chosen. Thus, the pseudo-random swarm-like approach presented here is compared to some of the most representative state-of-the-art methods: SBPA, EDFA and SUA. In addition, it is worth mentioning that EDFA provides a quite simple method to deal with deadlines (EDF), SUA is a very simple algorithm, and SBPA is a very well-known auction method that provides a 2-competitive solution. As a matter of fact, there exist other auction methods, like the Consensus-Based Bundle Algorithm (CBBA) [6], that also provide 2-competitive solutions, but they are not as widely used. Because the main goal of this study is not to propose new auction approaches but to compare their results with the most well-known swarm methods, we have tested neither CBBA nor more complex auction algorithms.
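As the following subsections detail, all three auctions rely on the same bid: a robot's expected time to finish the task, submitted only if the robot can meet the deadline. A hedged sketch of this common bid (not the authors' code; a constant-speed model stands in for the robot's kinematics, and the names are illustrative):

```python
import math

def make_bid(robot_pos, task_pos, deadline, now, speed=1.0):
    """Returns the expected completion time used as the bid, or None if
    the robot cannot finish the task before its deadline."""
    dist = math.hypot(task_pos[0] - robot_pos[0], task_pos[1] - robot_pos[1])
    finish = now + dist / speed
    return finish if finish <= deadline else None
```

Since lower expected completion time means a better robot, the auctioneer in each strategy below simply picks the minimum among the non-None bids.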
Algorithm 4 SBPA algorithm
Require: T = list of unassigned tasks
1: for all t ∈ T do
2:   Ask for a bid from all idle robots
3: end for
4: repeat
5:   Select the best robot–task pair (best bid)
6:   Send an award message to the selected robot
7:   Remove the task and the robot from the list
8: until there are no more unassigned robots or tasks

5.1 Sequential Best Pair Auction

The most complex auction mechanism implemented to compare its performance to the swarm algorithms is the Sequential Best Pair Auction method (SBPA). SBPA is based on the classical best pair selection approach and is very similar to the selection method used in the renowned Broadcast of Local Eligibility (BLE) [24] and studied in detail in [4]. Each time a new task appears in the environment, or when a robot finishes its execution, a central auctioneer starts a new auction round. The process followed by the auctioneer can be seen in Algorithm 4. Firstly, it requests a bid for each task from all the idle robots (lines 1–3). The idle robots bid using their expected time to finish the task, provided they are able to execute the task before its deadline. Each robot uses its kinematic characteristics to know how long it will take to finish a task. Then the auctioneer selects, in each iteration, the robot–task pair with the shortest execution time and notifies this choice to the robot. Like other auction methods, this one has high communication and computational requirements but, as will be shown later, it gives the best results in the present study. The detailed analysis of the algorithmic complexity of the SBPA algorithm is as follows: let m be the number of tasks (objects) and n the number of robots; then the complexity of the loop in lines 1–3 is O(m). To find the best robot–task pair (line 5), each robot and each task must be tested, but in each iteration one robot and one task are removed; therefore the complexity of lines 4–8 is O(Σ_{i=0}^{min(m,n)} (m − i)(n − i)). Thus, the total algorithmic complexity of the SBPA algorithm is O(m + Σ_{i=0}^{min(m,n)} (m − i)(n − i)) ⊂ O(Σ_{i=0}^{min(m,n)} (m − i)(n − i)). As can also be seen, in each iteration the auctioneer sends an award message to a robot; therefore the cost of the communication system is equal to O(n + min(n, m)) ⊂ O(n).

5.2 Earliest Deadline First Auction

The Earliest Deadline First (EDF) is a very well-known and widely used method both for processor scheduling in real-time scenarios and, more recently, for addressing the MRTA problem [7, 12, 21]. In these cases, the tasks (processes) are sorted by deadline in such a way that the tasks with the nearest deadline are processed first. The same concept has been used in the present paper to implement the Earliest Deadline First Auction (EDFA) strategy, as can be seen in Algorithm 5. When a new task appears, or when a robot becomes idle, the central auctioneer orders all the available tasks by deadline and sends a request to all the robots for the first task, the one with the earliest deadline. Then, the robots that can finish the task before the deadline bid for it using their expected execution time. Finally, the central auctioneer selects the best robot (the robot that can finish the task first). If there are more tasks, the task with the next earliest deadline is selected and the process starts again.

Algorithm 5 EDFA algorithm for the ST approach
1: sort the list of tasks T by deadline (EDF)
2: for all tasks t in T do
3:   Ask for a bid from all idle robots
4:   Select the best bid
5:   Send an award message to the assigned robot
6: end for

Following the same reasoning as for the SBPA, the analysis of the EDFA complexity is as follows: let m be the number of tasks and n the number of robots; then the complexity of the sorting algorithm (line 1), using the merge sort or the binary tree sort method, is O(m log(m)).
Then, the auctioneer must check each robot's bid to get the best robot for a task, but we have to take into account that the robots already assigned to earlier objects do not need to be considered. Thus, in each iteration a robot is assigned to a task, and the complexity is equal to O(m log(m) + Σ_{i=0}^{min(n,m)} (n − i)). The cost of the communication is similar to that already explained for the SBPA algorithm.

5.3 Sequential Unordered Auction

This is the simplest auction strategy implemented in this paper. As the tasks reach the central auctioneer, it starts a new auction round for each one of them. That is, the tasks are processed in a sequential way, as in a First In First Out (FIFO) queue. When an auction round is started, each robot bids for the task using its expected execution time, provided that the robot is able to finish the task before the deadline. Then, the robots send their bids to the auctioneer, which selects the robot with the shortest execution time as the auction winner. Finally, if there are more tasks, a new auction round is started, until all the objects have been processed. Thus, the central auctioneer does not have to make any decision about the order in which the tasks must be offered to the robots. The computational complexity analysis of this algorithm is as follows: let m be the number of tasks and n the number of robots; then, for each task, the auctioneer has to check all the bids, which in the worst case will be equal to the number of robots. Following the same reasoning as with the EDFA algorithm, we can see that the complexity of this algorithm is O(Σ_{i=0}^{min(n,m)} (n − i)). The communication complexity is the same as with the EDFA strategy.

6 Experimental Results

In this section we explain the experiments, both with simulators and real robots, carried out to validate our approaches.
We will see how the PSW strategy improves the system performance compared to any other swarm method. Two simulators have been used to execute the experiments: RobSim and Player/Stage, the former for simulations with a large number of robots and the latter for more accurate experiments but with fewer robots and tasks. Most of the experiments have also been partially executed with up to four real robots (Pioneer 3DX).

6.1 Experiment Design: RobSim Simulator

The RobSim simulator has been developed in our university to execute most of the experiments related to this research project. RobSim is a simulator that allows us to emulate the behavior of a very large population of robots and a huge number of tasks. After each time period, the simulator updates the robots' positions and processes all the events that have happened in that period: a new object in the environment, a task executed successfully, an expired task deadline, etc. To speed up the simulation, we assume that a robot cannot collide with any other object in the environment and, therefore, an obstacle avoidance algorithm is not needed. Thus, RobSim cannot be considered a fully realistic simulator, but it is a very useful tool to compare the global performance of the different tested strategies. The foraging task explained in Section 3 has been executed on RobSim. New objects to pick up appear in the environment following a Poisson process with parameter λ, where λ is the average number of new tasks that appear in the environment per time unit. This paper shows the experiments' results for three different values of λ: 0.05, 0.1 and 0.3. For higher λ values, also tested but not shown in this paper, the number of tasks was so high that the system reached a saturation-like state and all methods produced similar results. Four deadline configurations have been tested:

– 1st configuration (Tasks without deadline): there is no deadline assigned to any task.
– 2nd configuration (Uniform Tasks): all tasks have the same deadline, equal to 250 time units.
– 3rd configuration (Random Deadline): the tasks' deadlines are generated according to a uniform distribution between 100 and 500 time units.
– 4th configuration (Hybrid Tasks): similar to the Random Deadline configuration, but with probability 0.2 a new task has no associated deadline, that is, no time restrictions. Therefore, tasks with and without deadlines coexist in the same environment.

All robots in the colony had the same characteristics and moved in a 160 × 160 m environment. The number of robots varied between 2 and 40. The number of finished tasks does not increase significantly with a greater number of robots because, from 40 robots on, the system reaches a saturation-like state. Similar saturation situations for foraging tasks have also been described by other authors, such as Rosenfeld et al. [22]. Nevertheless, the complexity analysis of the explained systems (see Section 4) proves that the communication complexity increases linearly with the number of robots. In total, 1,920 simulations were executed, each one lasting 10,000 time units, throughout which 270,000 objects were processed.

6.2 Tasks Without Deadline

In this section we explain the results of the experiments carried out without deadlines associated to the tasks (1st configuration). Experiments with auction and swarm algorithms have been conducted with the RobSim simulator in order to validate the pseudo-swarm approaches in a non-real-time scenario. Figure 2 shows the results of the following swarm algorithms: NFS with D = ∞, RTH with θ = 0.2 and PSW-R with A = 10 m. The tasks' arrival ratio, λ, is equal to 0.1. All the algorithms' parameters have been fitted to obtain the best results. The vertical bars represent the maximum and the minimum value obtained in each set of experiments.
These figures do not show the PSW-D results because they are similar to those of PSW-R; in fact, in all cases PSW-R increases the number of finished tasks by between 2% and 8%. As can be seen in Fig. 2a, PSW-R is the swarm-like method with the most finished tasks, especially when the number of robots is lower than 20. Furthermore, the number of tasks finished in the RTH experiments varies much more than in the PSW-R ones; that is, the maximum and the minimum PSW-R results are very similar. Regarding the total robot path length, shown in Fig. 2b, the RTH approach outperforms the other swarm methods, particularly when the number of robots is high. The PSW-R parameter A is very stable: its best value has been 10 m for all tested configurations. By contrast, the optimum value of the RTH parameter θ differs greatly depending on the configuration or the λ value. Thus, we can state that the PSW algorithm increases the number of finished tasks compared to the classical swarm methodologies, but also increases the total path length. However, the PSW parameters are much more stable than those of the classical swarm methods, so the system designer does not have to modify them when the environment conditions, such as the configuration or the λ value, change. The experiments conducted with other λ values provide similar results and, therefore, are not presented in this paper. The results of the auction algorithm experiments, SBPA and SUA, with λ = 0.1 are shown in Fig. 3. The EDFA strategy has not been tested because, without deadlines, its results are equal to those of the SUA method. Figure 3a shows the total number of objects finished and Fig. 3b the total length covered by all the robots. As shown, when the SUA strategy is used the number of finished tasks increases in a quasi-linear way with the number of robots, and in all cases SBPA outperforms these results. Comparing the auction strategies' results (see Fig.
2), we see that the best auction strategy (SBPA) only increases the number of finished tasks, compared to the PSW-R approach, when the number of robots is greater than 20.

Fig. 2 Results without deadlines using the following swarm methods: NFS, PSW-R with A = 10 and RTH with θ = 0.2. λ = 0.1

Even in this case, the difference between the two strategies is not significant. However, the total traveled path with the swarm strategies is, in general, greater than the path obtained with auctions. Thus, it can be stated that the new pseudo-random methods complete a similar number of tasks but increase the robots' path length compared to the most complex auction method.

6.3 Tasks with Deadline

This section presents the most remarkable experiments conducted with tasks with deadlines, that is, with the 2nd, 3rd and 4th configurations. As in the last section, all the experiments have been carried out with the RobSim simulator, and the algorithms' parameters were set to achieve the best results. Figure 4 shows the number of objects finished before their deadline with the 2nd configuration and λ = 0.1. The total length traveled by the robots has not been included because the results are very similar to those explained without time restrictions. The swarm algorithms used were PSW-D with θ = 0.05, PSW-R with A = 10 and RTH with θ = 0.05. The best auction method, SBPA, has also been added to the figure to compare its results to those of the swarm approaches. As shown, the SBPA algorithm presents the best results and, compared to those of the swarm approaches, the difference is greater than that obtained without time restrictions.
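For contrast with the swarm rules, the auction family compared here can be illustrated with a generic single-item round: each free robot bids its travel cost to each unassigned task and the globally lowest bid wins, repeatedly. This is only a hedged stand-in; the exact bid rules of SUA, SBPA and EDFA are defined earlier in the paper.

```python
import math

def greedy_auction(robots, tasks):
    """Generic single-item auction round (an illustrative stand-in for
    the paper's auction family): every free robot bids its Euclidean
    distance to every unassigned task, the lowest bid in the whole
    system wins, and the process repeats until robots or tasks run out."""
    free, todo, assignment = list(robots), list(tasks), {}
    while free and todo:
        r, t = min(((r, t) for r in free for t in todo),
                   key=lambda rt: math.dist(*rt))   # globally lowest bid
        assignment[t] = r
        free.remove(r)
        todo.remove(t)
    return assignment

print(greedy_auction([(0, 0), (10, 10)], [(1, 1), (9, 9)]))
```

In the two-robot example each robot wins the task it is nearest to; the point of the comparison in the text is that this quality comes at the price of system-wide communication, which the swarm-like methods avoid.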
Besides, PSW-D and PSW-R outperform, in almost all cases, the classical RTH results.

Fig. 3 Auction methods' results (SUA and SBPA) with λ = 0.1 and the configuration without deadlines (1st configuration)

Fig. 4 Number of finished tasks before the deadline executing SBPA, PSW-D with θ = 0.05, PSW-R with A = 10 and RTH with θ = 0.05. The 2nd configuration has been used with λ = 0.1

Fig. 5 Tasks finished before the deadline executing PSW-R with A = 10 and PSW-D with θ = 0.025. The 2nd configuration has been used with λ = 0.05

The difference between PSW-D and PSW-R is more remarkable when the λ value is low. For example, Fig. 5 presents the results of PSW-D and PSW-R with λ = 0.05. Here, PSW-R increases the number of tasks finished before the deadline by 8% on average compared to PSW-D. Figure 6 shows the same results with λ = 0.3 and the 3rd configuration, together with the RTH method. In this case, the PSW methods provide better results than the classical RTH does and, in all cases, PSW-R outperforms PSW-D. Note that the best RTH and PSW-D parameters depend on the λ value; for example, the best RTH θ parameter when λ = 0.1 is equal to 0.05, but when λ = 0.3 the best RTH results are obtained with θ = 0.1.
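The θ parameter whose instability is discussed above comes from the classical response-threshold family [3], where a robot engages a task of stimulus s with probability s²/(s² + θ²). Whether RTH uses exactly this form is specified earlier in the paper; the sketch below shows the classical rule, with the stimulus definition (e.g. inverse distance to the task) left to the designer.

```python
import random

def rth_accept(stimulus, theta, rng=random):
    """Classical response-threshold rule: engage with probability
    s^2 / (s^2 + theta^2).  A low theta makes robots eager to act on
    weak stimuli; a high theta makes them selective."""
    p = stimulus ** 2 / (stimulus ** 2 + theta ** 2)
    return rng.random() < p
```

With stimulus equal to θ the engagement probability is exactly 0.5, which is one way to read θ as a sensitivity knob, and also why its best value shifts whenever the stimulus distribution (configuration, λ) changes.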
A similar situation occurs with the PSW-D algorithm. By contrast, the best A value of the PSW-R algorithm is always equal to 10 m, regardless of the kind of configuration, the λ value, or the deadline value.

Fig. 6 Number of tasks finished before their deadline executing PSW-R with A = 10, PSW-D with θ = 0.033 and RTH with θ = 0.1. The 3rd configuration has been used with λ = 0.3

This stability is due to the very simple communication mechanism implemented by this algorithm. To analyze the PSW-R algorithm in greater depth, Fig. 7 shows the tasks finished before the deadline using this algorithm for different values of the A parameter. In these experiments the 3rd configuration and λ = 0.1 have been used. Similar results have been obtained with other λ values. The worst results are obtained when A = ∞, that is, when all the robots are taken into account to make the final decision. Therefore, good communication capabilities do not necessarily mean better results. Besides, when the A value is very low (A = 0.1 m), the number of finished tasks decreases compared to other A values. As stated, this paper proposes Algorithm 3 to learn the most suitable A value.

Fig. 7 Number of tasks that fulfil their deadline using PSW-R, the 3rd configuration and λ = 0.1. Different values of the area A have been tested (A = 0.1, 1, 4, 10, 30, ∞)

Fig. 8 Number of tasks that fulfil their deadline with the PSW-R algorithm, the 3rd configuration and λ = 0.1. The algorithm has been tested with and without the learning algorithm and with different initial A values. The learning factor α = 0.1
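The role of the communication radius A can be made concrete with an illustrative sketch of pseudo-probabilistic selection. The exact PSW-R rule is defined earlier in the paper; the names and the fallback weighting below are our assumptions. Only robots closer than A metres count as competitors: with none nearby, the nearest task is taken deterministically; otherwise the choice is randomized, which reduces the chance of several robots converging on the same task.

```python
import math, random

def psw_r_select(robot, tasks, others, A, rng=random):
    """Illustrative pseudo-probabilistic selection in the spirit of
    PSW-R (assumed details, not the paper's exact rule): robots farther
    than A metres are ignored, so A = inf recovers a fully informed,
    and in the experiments worse, decision."""
    competitors = [o for o in others if math.dist(robot, o) < A]
    dists = [math.dist(robot, t) for t in tasks]
    if not competitors:                            # deterministic branch
        return tasks[dists.index(min(dists))]
    weights = [1.0 / (d + 1e-9) for d in dists]    # probabilistic branch
    return rng.choices(tasks, weights=weights)[0]
```

Seen this way, A = 0.1 m almost never detects competitors (pure deterministic choice, high task interference), while A = ∞ makes every robot react to the whole colony, which explains why an intermediate radius performs best in Fig. 7.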
Figure 8 shows the number of finished tasks using this learning algorithm with the 3rd configuration and λ = 0.1, that is, the same configuration as in Fig. 7. The simulations used different initial A values (A = 0.1 and A = 1,000) with a learning ratio α equal to 0.1. Although the initial A values were very far from the optimum (A = 10 m), Algorithm 3 outperforms in all cases a system with a constant A value. These results prove that, just as some response threshold learning algorithms fit the θ parameter [3, 25], the A parameter of the PSW-R method can also be learned. Finally, the number of tasks finished before their deadline with the auction algorithms is shown in Fig. 9. In these experiments λ is equal to 0.3 and two different configurations have been tested: the 3rd (Random Deadline) and the 4th (Hybrid Tasks). With both configurations, the EDFA algorithm outperforms the SUA results, because EDFA explicitly manages the task deadlines. Moreover, all auction strategies outperform the NFS number of finished tasks when the number of robots is high: from eight robots in the 3rd configuration and from 22 robots in the 4th one. The next section explores what happens when the number of robots is low.

6.4 Physical Interference Impact

In this section we analyze in more detail the auction algorithms (SBPA, EDFA and SUA), as well as the NFS approach, when the number of robots is low. The NFS D parameter has also been used for the three auction methods, in such a
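Algorithm 3 itself is given earlier in the paper; as a hedged illustration of why such a scheme works, the toy loop below probes multiplicative perturbations of A, keeps the best-scoring value found so far, and smooths the working estimate with the learning ratio α = 0.1. The objective function is a stand-in for one batch of simulated missions; everything here is our assumption, not the paper's exact update.

```python
import random

def learn_A(evaluate, A0, alpha=0.1, steps=300, rng=random.Random(0)):
    """Toy parameter-learning loop in the spirit of the paper's
    Algorithm 3 (assumed, not its exact update): probe a multiplicative
    perturbation of the best A found so far, keep it if it scores
    better, and move the working estimate toward it with ratio alpha."""
    A, best_A = A0, A0
    best_score = evaluate(A0)
    for _ in range(steps):
        cand = best_A * rng.uniform(0.5, 2.0)   # multiplicative probe
        score = evaluate(cand)
        if score > best_score:
            best_A, best_score = cand, score
        A += alpha * (best_A - A)               # exponential smoothing
    return A

# Toy objective peaking at A = 10 m, mimicking the shape of Fig. 7.
peak = lambda A: -abs(A - 10.0)
A_learned = learn_A(peak, A0=1000.0)
```

Even started at A = 1,000 or A = 0.1, far from the optimum, a loop of this shape settles near the peak, which mirrors the qualitative behavior reported for Fig. 8.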
way that an object (task) can only be assigned to a robot if the distance between them is shorter than D.

Fig. 9 Deadline fulfillments with λ = 0.3 and different configurations

We will use the very realistic and well-known simulator Player/Stage, in such a way that the code written for this simulator can be executed on real robots without any changes. Thus, we can study the impact of the physical interference between robots, which is produced when two or more of them try to reach the same point simultaneously. The authors' previous work [13] addressed this problem when several robots can be assigned to the same task. Although some results of this section have already been included in [14], they are also very important in the present paper to fully compare the behavior of the auction and swarm algorithms. To execute the experiments we used several Pioneer 3DX robots. The dimensions of the environment are 18 × 18 m and the maximum robot velocity is 0.25 m/s. The robots use this information to calculate the expected execution time and to decide if they can execute a task before the deadline.

Table 1 Number of tasks that do not meet the deadline with the SBPA strategy, and in brackets the percentage increase of tasks when swarm is used

         D = 9              D = 12             D = ∞
         M = 5    M = 10    M = 5    M = 10    M = 5    M = 10
R = 4    67 (26%) 90 (17%)  69 (39%) 90 (29%)  78 (32%) 87 (33%)
R = 8    38 (38%) 48 (46%)  35 (53%) 56 (47%)  45 (46%) 59 (46%)
R = 12   18 (60%) 48 (44%)  24 (63%) 56 (47%)  24 (61%) 68 (23%)
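The feasibility test the robots apply can be sketched as follows. A straight-line, maximum-speed estimate is an optimistic lower bound on execution time, so a robot that fails it certainly cannot finish in time; the function names and the exact bid formula used by the auction methods are assumptions on our part.

```python
import math

MAX_SPEED = 0.25   # m/s, the Pioneer 3DX cap used in the experiments

def can_meet_deadline(robot, obj, delivery, deadline, speed=MAX_SPEED):
    """Estimate the time to reach the object and carry it to the
    delivery point at maximum speed, then compare against the remaining
    deadline (an illustrative sketch of the robots' feasibility check)."""
    travel = math.dist(robot, obj) + math.dist(obj, delivery)
    return travel / speed <= deadline

print(can_meet_deadline((0, 0), (5, 0), (5, 5), deadline=70))  # True: 10 m at 0.25 m/s takes 40 s
```

With the 12–70 s deadlines used below, a 10 m round trip (40 s) is feasible only against the looser deadlines, which is exactly the kind of pruning the auction bids rely on.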
Two hundred objects were randomly placed in each experiment, each with a deadline uniformly distributed between 12 and 70 s. When an object is gathered, another one immediately appears in a random position; thus, the number of objects in the environment, M, is always the same. M is a parameter and its influence on the performance of the system is tested in the experiments. Although Poisson task arrival is more realistic, it has not been used here because other authors, such as [16], also use the M parameter. Note that these experiments do not include the PSW approaches because, with few robots and few tasks (5–10), the likelihood of two robots selecting the same task is very low too. Therefore, under these conditions the physical effect has a greater impact than the interference between tasks. Thus, we have studied two of the most important harmful effects: physical interference, and task interference, which happens when two robots select the same task to execute. Table 1 shows the number of tasks that do not meet the deadline using the SBPA strategy, where R is the number of robots, and the percentage increase of tasks when the NFS strategy is used is given in brackets. In all cases, this auction algorithm is better than the swarm system, especially when the number of robots is increased. For example, with 12 robots this improvement can be around 60%. Furthermore, the maximum distance (D) does not have a great impact on the results; by contrast, Kalra and Martinoli [16] showed that without a deadline this parameter can be very important.

Table 3 Number of tasks that do not meet the deadline with the EDFA strategy, and in brackets the percentage increase of tasks when swarm is used

         D = 9                 D = 12               D = ∞
         M = 5      M = 10     M = 5     M = 10     M = 5     M = 10
R = 4    105 (−17%) 128 (−17%) 102 (11%) 128 (−2%)  98 (14%)  122 (6%)
R = 8    94 (−54%)  93 (−4%)   85 (−13%) 94 (11%)   84 (0%)   100 (9%)
R = 12   98 (−118%) 96 (−13%)  89 (−39%) 89 (15%)   90 (−48%) 88 (−13%)
Tables 2 and 3 show the number of tasks that do not meet the deadline using SUA and EDFA, respectively. The percentage increase of tasks when the NFS strategy is used is given in brackets. In all cases SBPA is better than both SUA and EDFA and, in most cases, even the swarm outperforms the SUA and EDFA results. The performance of the SUA/EDFA systems improves as the ratio between the number of robots and the maximum distance (R/D) decreases. When there are a lot of robots in relation to the number of tasks, for example if D = 9 and R = 4, the SUA/EDFA results are 17% worse than swarm, but with a low R and high values of D the SUA/EDFA results are 14% better than swarm. Moreover, EDFA seems to improve the system performance in most cases, especially in the worst SUA cases. We have to note that these results are partly similar to those shown in Fig. 9, where for a low number of robots the swarm strategy could outperform both SUA and EDFA. In the RobSim experiments the distance D was infinite and, therefore, the R/D ratio was the minimum possible.

Table 2 Number of tasks that do not meet the deadline with the SUA strategy, and in brackets the percentage increase of tasks when swarm is used

         D = 9                 D = 12               D = ∞
         M = 5      M = 10     M = 5     M = 10     M = 5     M = 10
R = 4    99 (−10%)  135 (−24%) 97 (15%)  126 (0%)   96 (16%)  132 (−2%)
R = 8    87 (−43%)  87 (2%)    81 (−8%)  97 (15%)   94 (−12%) 94 (8%)
R = 12   78 (−73%)  90 (−6%)   91 (−42%) 100 (−5%)  85 (−39%) 96 (−9%)

To finish the experimental phase, the previously described methods have been executed using real robots with the same source code run on the simulators. In particular, four Pioneer 3DX robots, shown in Fig. 10a, have been used. Each vehicle, whose localization is calculated by odometry, is equipped with a ring of 16 regularly spaced sonars and has a maximum speed of 0.25 m/s. The robots are endowed with a VIA Epia motherboard with an Eden 600 MHz processor, 512 MB of RAM and a wireless card for communications. To accomplish the mission, the robots have to collect several objects randomly placed in a 10 m long by 5 m wide workspace.

Fig. 10 Images of the experiments with real robots
Figure 10b shows the environment and the initial location of all the robots. The marks O2 and O1 on the floor are some of the objects to gather, and the D label, in the center of the image, is the delivery point. The system behavior during the experiments with real robots has been, in all cases, just as expected and in accordance with the described methods. Thus, it is proved that the described methods can be implemented on real robots without highly demanding requirements.

7 Conclusions and Future Work

This paper analyzes the behavior of swarm-like algorithms in real time scenarios, where the tasks must be executed before a deadline. The main contribution of this paper is a new swarm-like method called PSW that, for the first time, combines concepts from deterministic and non-deterministic response threshold algorithms. Two PSW versions have been implemented, PSW-R and PSW-D, and both increase the number of tasks finished before the deadline compared to classical swarm approaches. Moreover, thanks to a very simple communication mechanism and a learning algorithm, the main PSW-R parameter, A, can be easily fitted and is extremely stable. Three different auction mechanisms have also been tested and their performance has been compared to that of the PSW algorithms. The results show that the number of finished tasks of the pseudo-swarm approaches is quite close to that of some auction approaches, which need a high level of communication and a lot of computational resources. Thus, this paper extends the work of other authors, such as [16], by taking into account tasks with deadlines, and our previous work [14] with the new pseudo-random swarm-like methods. The work presented here still has some challenging aspects to add and to improve. We are working on using PSW for other kinds of tasks, such as exploration, cleaning, etc. We are also working on implementing all the experiments on real robots and on allowing several tasks to be assigned to each robot.

References
1. Agassounon, W., Martinoli, A.: Efficiency and robustness of threshold-based distributed allocation algorithms in multi-agent systems. In: 1st Int. Joint Conference on Autonomous Agents and Multiagent Systems, pp. 1090–1097. Bologna, Italy (2002)
2. Altshuler, Y., Bruckstein, A., Wagner, I.: On swarm optimality in dynamic and symmetric environments. In: 2nd International Conference on Informatics in Control, Automation and Robotics (ICINCO) (2005)
3. Bonabeau, E., Sobkowski, A., Theraulaz, G., Deneubourg, J.L.: Adaptive task allocation inspired by a model of division of labor in social insects. In: Lundh, D., Olsson, B., Narayanan, A. (eds.) Bio Computation and Emergent Computing, pp. 36–45. World Scientific (1997)
4. Gerkey, B.P., Matarić, M.J.: A formal analysis and taxonomy of task allocation in multi-robot systems. Int. J. Rob. Res. 23(9), 939–954 (2004)
5. Campbell, A., Wu, A., Shumaker, R.: Multi-agent task allocation: learning when to say no. In: 10th Annual Conference on Genetic and Evolutionary Computation, pp. 201–208. Atlanta, USA (2008)
6. Choi, H.L., Brunet, L., How, J.: Consensus-based decentralized auctions for robust task allocation. IEEE Trans. on Robotics 25(4), 912–926 (2009). doi:10.1109/TRO.2009.2022423
7. Kato, D., Sekiyama, K., Fukuda, T.: Risk management system based on uncertainty estimation by multi-robot. J. Robot. Mechatronics 20(4), 456–466 (2010)
8. de Oliveira, D., Ferreira, P.R., Bazzan, A.L.: A swarm based approach for task allocation in dynamic agents organizations. In: 3rd International Joint Conference on Autonomous Agents and Multiagent Systems, vol. 3, pp. 1252–1253. New York, USA (2004)
9. del Acebo, E., de la Rosa, J.L.: Introducing bar systems: a class of swarm intelligence optimization algorithms. In: AISB Convention Communication, Interaction and Social Intelligence, pp. 18–23.
Aberdeen, Scotland (2008)
10. Dias, M.B., Stentz, A.: TraderBots: a market-based approach for resource, role, and task allocation in multirobot coordination. Tech. Rep. CMU-RI-TR-03-19, Carnegie Mellon University, Pittsburgh, USA (2003)
11. Gerkey, B.P., Matarić, M.J.: Sold!: auction methods for multi-robot coordination. IEEE Trans. Robot. Autom., Special Issue on Multi-robot Systems 18(5), 758–768 (2002)
12. Ghiasvand, O.A., Sharbafi, M.A.: Using earliest deadline first algorithms for coalition formation in dynamic time-critical environment. Education and Information Tech. 1(2), 120–125 (2011)
13. Guerrero, J., Oliver, G.: A multi-robot auction method to allocate tasks with deadlines. In: 7th IFAC Symposium on Intelligent Autonomous Vehicles. Lecce, Italy (2010)
14. Guerrero, J., Oliver, G.: Auction and swarm multi-robot task allocation algorithms in real time scenarios. In: Multi-Robot Systems, Trends and Development, pp. 437–456. InTech (2011)
15. Jones, E.G., Dias, M., Stentz, A.: Learning-enhanced market-based task allocation for disaster response. Tech. Rep. CMU-RI-TR-06-48, Carnegie Mellon University, Pittsburgh, USA (2006)
16. Kalra, N., Martinoli, A.: A comparative study of market-based and threshold-based task allocation. In: 8th International Symposium on Distributed Autonomous Robotic Systems, pp. 91–102. Minneapolis, USA (2006)
17. Koes, M., Nourbakhsh, I., Sycara, K.: Heterogeneous multirobot coordination with spatial and temporal constraints. In: 20th National Conference on Artificial Intelligence (AAAI), pp. 1292–1297. Boston, USA (2005)
18. Lemaire, T., Alami, R., Lacroix, S.: A distributed tasks allocation scheme in multi-UAV context. In: International Conference on Robotics and Automation (ICRA), vol. 4, pp. 3622–3627. New Orleans, USA (2004)
19. Liu, W., Winfield, A., Sa, J., Chen, J., Dou, L.: Strategies for energy optimisation in a swarm of foraging robots. Lect. Notes Comput. Sci.
4433, 14–26 (2007)
20. Melvin, J., Keskinocak, P., Koenig, S., Tovey, C., Ozkaya, B.Y.: Multi-robot routing with rewards and disjoint time windows. In: International Conference on Intelligent Robots and Systems (IROS), pp. 2332–2337. San Diego, USA (2007)
21. Ramchurn, S.D., Polukarov, M., Farinelli, A., Truong, C.: Coalition formation with spatial and temporal constraints. In: International Joint Conference on Autonomous Agents and Multi-Agent Systems, pp. 1181–1188. Toronto, Canada (2010)
22. Rosenfeld, A., Kaminka, G.A., Kraus, S.: A study of scalability properties in robotic teams. In: Coordination of Large Scale Multiagent Systems, pp. 27–51. Springer-Verlag (2006)
23. Smith, S.L., Bullo, F.: The dynamic team forming problem: throughput and delay for unbiased policies. Syst. Control. Lett. 58, 709–715 (2009)
24. Werger, B.B., Matarić, M.J.: Broadcast of local eligibility for multi-target observation. In: 5th International Symposium on Distributed Autonomous Robotic Systems, pp. 347–356. Knoxville, USA (2000)
25. Yang, Y., Zhou, C., Tin, Y.: Swarm robots task allocation based on response threshold model. In: 4th International Conference on Autonomous Robots and Agents, pp. 171–176. Wellington, New Zealand (2009)
26. Yu, L., Cai, Z.: Robot exploration mission planning based on heterogeneous interactive cultural hybrid algorithm. In: 5th International Conference on Natural Computation, pp. 583–587. Tianjin, China (2009)