This paper addresses the problem of trajectory planning and control for groups of mobile robots executing cooperative tasks. The focus of the paper is to show how to transform several cooperative robotic tasks into a single basic task, thereby allowing algorithms developed for one task to be used in the execution of others. Experimental results with a group of mobile robots equipped with omnidirectional cameras as their only sensor are presented and discussed.
Continuous advances in technology have fostered the use of multiple robotic agents to carry out a large variety of cooperative tasks. While one could design a robot for every imaginable robotic task, it is clearly more productive and versatile to have a team of cooperative mobile robots that can be assigned to different tasks, including the possibility of executing more than one task simultaneously. Following this idea, we are interested in using groups of unmodified mobile robots to execute various cooperative tasks. In other words, we are not interested in engineering the problem by changing the robots' hardware to satisfy the requirements of the task, but in engineering the solution by developing robust and reliable software that takes the robots' and tasks' constraints into account. Thus, the problem may be posed as: given a team of mobile robots and a set of tasks to be performed, generate a solution that includes the subgroup of robots to be used during each task and a specification of the action to be taken by each robot. More specifically, we want to consider the execution of all tasks as a multi-robot motion planning problem and develop decentralized algorithms to control each robot.
In this paper we present our progress towards the development of a generalized framework for controlling, planning, and coordinating the motion of cooperative robots. The underlying idea of the framework is to reduce several cooperative tasks to the same basic problem and consequently enable the use of a single solution for all of them. From a practical standpoint, such a framework would allow a single team of robots to perform several different tasks. It would be very interesting, for example, to provide each robot with a single suite of algorithms parameterized only by the particularities of the problem. Hence, even a totally new and unknown task could be executed by the robots, provided it can be transformed into one of the tasks the robots are able to execute.
Consider a mobile robot in a horizontal plane. The configuration space, C, is the set of all possible configurations, q = (x, y, θ), of the robot. Thus, any robot can be represented by a point in the configuration space, whose dimension is the number of degrees of freedom of the system. This idea is introduced in order to represent physical robots as points and simplify the motion planning problem. In the robot's configuration space, the j-th obstacle maps to a set of configurations, C^obs_j, in which the robot touches or penetrates the obstacle. The union of the C^obs_j over all obstacles, C^obs, is called the configuration space obstacle, and the set F = C − C^obs is the free configuration space of the robot. Therefore, finding a collision-free path for a robot corresponds to finding a trajectory (a sequence of configurations) in the configuration space that does not intersect C^obs or, in other words, that belongs to F (Latombe, 1991).
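As a concrete illustration of these definitions, the sketch below represents configurations and tests membership in F. The disc-shaped C-space obstacles and their coordinates are illustrative assumptions, not part of the original formulation.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Config:
    """A configuration q = (x, y, theta) of a planar mobile robot."""
    x: float
    y: float
    theta: float

# Illustrative C-space obstacles: each C^obs_j is approximated here by a disc
# (center, radius) in the (x, y) plane; real obstacles are more complex regions.
C_OBSTACLES = [((2.0, 1.0), 0.5), ((4.0, 3.0), 0.8)]

def in_c_obs(q: Config) -> bool:
    """True if q touches or penetrates some C^obs_j."""
    return any(hypot(q.x - cx, q.y - cy) <= r for (cx, cy), r in C_OBSTACLES)

def in_free_space(q: Config) -> bool:
    """Membership test for the free configuration space F = C - C^obs."""
    return not in_c_obs(q)

# A collision-free trajectory is a sequence of configurations that stays in F.
path = [Config(0.0, 0.0, 0.0), Config(1.0, 0.5, 0.1), Config(3.0, 2.5, 0.3)]
print(all(in_free_space(q) for q in path))  # True for these illustrative values
```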
When we are working with a group of n robots, each robot has its own free space, F_i, and the motion planning problem is to find a trajectory for the whole group in the composite space F = F_1 × F_2 × · · · × F_n. Notice that the dimension of F is now n times the dimension of the configuration space of a single robot, and the problem becomes considerably harder. To overcome this complexity, many researchers are pursuing decentralized and distributed methodologies for motion planning, as discussed in the next section.
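Following the same sketch, the composite space is simply the Cartesian product of the individual free spaces; the helpers below only illustrate this bookkeeping and how the dimension grows with n (the function names are illustrative).

```python
from typing import Callable, Sequence

def composite_dimension(n_robots: int, dof_per_robot: int = 3) -> int:
    # The composite space F = F_1 x ... x F_n has n times the single-robot dimension.
    return n_robots * dof_per_robot

def in_composite_free_space(qs: Sequence, in_free_space_i: Callable[..., bool]) -> bool:
    # Membership in F: every robot must lie in its own free space; inter-robot
    # constraints are handled separately, as discussed later in the paper.
    return all(in_free_space_i(q) for q in qs)

print(composite_dimension(5))  # a 15-dimensional planning problem for five robots
```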
There are several approaches to multi-robot motion planning, control, and coordination reported in the literature. Most of these approaches can be categorized as deliberative or reactive, and as centralized or decentralized. Several other approaches, however, combine two or more of these characteristics.
Some deliberative approaches to motion planning consider a group of n robots as a single system with n degrees of freedom and use centralized planners to compute paths in the composite configuration space (Aronov et al., 1998). This approach in general guarantees completeness, but its complexity is exponential in the dimension of the composite configuration space (Hopcroft et al., 1984). Other groups have pursued decentralized approaches to path planning. This generally involves two steps: (1) individual paths are planned independently for each robot; and (2) the paths are merged or combined such that collisions are avoided. Some authors call these approaches coordinated path planning (LaValle and Hutchinson, 1998; Simeon et al., 2002; Guo and Parker, 2002).
Totally decentralized approaches are in general behavior-based (Balch and Arkin, 1998). In behavior-based control (Arkin, 1998), several desired behaviors are prescribed for each agent and the final control is derived by weighting the relative relevance of each behavior. In general, each agent has a main behavior that guides it to the goal and secondary, usually reactive, behaviors used to avoid obstacles and other robots in the team. These behaviors are generally based on artificial potential fields (Khatib, 1986), such as in (Howard et al., 2002), where this technique was used to deploy robots in known environments. The idea of using artificial potential fields to make each robot repel the others was also used in (Reif and Wang, 1995).
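A minimal sketch of this kind of behavior composition is given below, using a quadratic attractive potential and a standard inverse-distance repulsive potential; the gains and influence radius are arbitrary placeholders rather than values from the cited works.

```python
import numpy as np

def attractive_velocity(q, goal, k_att=1.0):
    """Main go-to-goal behavior: gradient descent on a quadratic potential."""
    return -k_att * (q - goal)

def repulsive_velocity(q, others, k_rep=0.5, influence=1.0):
    """Reactive behavior: push away from obstacles/robots closer than `influence`."""
    v = np.zeros(2)
    for p in others:
        d = np.linalg.norm(q - p)
        if 1e-6 < d < influence:
            v += k_rep * (1.0 / d - 1.0 / influence) * (q - p) / d**3
    return v

def behavior_based_velocity(q, goal, others, w_goal=1.0, w_avoid=1.0):
    """Final command: a weighted combination of the prescribed behaviors."""
    return (w_goal * attractive_velocity(q, goal)
            + w_avoid * repulsive_velocity(q, others))

q = np.array([0.0, 0.0])
print(behavior_based_velocity(q, np.array([5.0, 5.0]), [np.array([0.5, 0.4])]))
```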
We are interested in decentralized approaches for motion planning. Although we do not show any motion planning strategy in this paper, our multi-robot motion planning problem defined in the next section is posed as a natural extension of the single robot motion problem and therefore can be solved independently for each robot.
Most cooperative tasks involving mobile robots can be viewed as an extension of the basic multi-robot motion planning problem. This problem may be defined as finding a motion plan for all the robots in a group such that each robot reaches its goal while avoiding collisions with other robots and with obstacles in the environment. We extend this problem by defining the coordinated motion planning problem, where besides avoiding collisions, the robots need to cooperate and maintain formation constraints in order to reach their goals:
Coordinated motion planning problem: Consider a world, W, occupied by a set R = {R_1, . . . , R_n} of n robots. The i-th robot, R_i, can be represented by a configuration q_i in its configuration space, and its motion is subject to a set of formation constraints, C_Ri(R\R_i, t), induced by the other robots in the team.
Formation constraints are constraints on individual robots induced by other robots in the team. Thus, C_Ri(R\R_i, t) depends on the robots' characteristics, their configurations, and also on the nature of the task. Notice that our problem statement differs from the previous definition of the multi-robot motion planning problem in the sense that, besides inter-robot collisions, we are adding other kinds of constraints, which can include, for example, sensor field-of-view constraints and communication range constraints. On the other hand, our definition of multi-robot motion planning is not too different from the definition of motion planning for single robots (Latombe, 1991). While the traditional definition considers the problem of moving the robot in a limited free space, where the constraints are induced by the (non-controlled) obstacles in the environment, here part of the valid (free) configuration space of each robot is controlled by the positions of the other robots.
The problem formulation presented in this section is very generic and can include a large variety of cooperative tasks. However, it is important to note that the key point for transforming a specific cooperative problem into the generic one presented here is the specification of the formation constraints. Formation constraints are discussed in the next section, and examples of formation constraints for two specific tasks are presented in Section 6.
The physical locations of the robots, coupled with the characteristics of the hardware and the requirements of the task, dictate the neighborhood relationship for the group of robots. This relationship can be represented by a graph G, where the robots themselves are the vertices and the relationship itself is represented by directed edges or arcs. The graph is represented by a tuple (R, E, G), where R is the set of robots, E ⊆ R × R is the edge set representing the relationship among the robots, and G is the set of constraint functions that describe the conditions under which each edge is maintained. For each element of E there is at least one corresponding element in G. When, for example, the task requires interaction between robots R_i and R_j, this interaction is represented by the edge e_ij = (R_i, R_j) ∈ E and a set of functions G_ij = {g_1(q_i, q_j), . . . , g_m(q_i, q_j)} ⊆ G.
In this case we also say that R_j is a neighbor of R_i. Under the assumption that the robots are identical, the graphs are in general bidirectional and can be considered undirected. In this case, e_ij ∈ E if and only if e_ji ∈ E.
Also, we consider that there are no self-loops or, in other words, there are no edges of the form e_ii = (R_i, R_i).
For each constraint function in G we associate at least one inequality of the form g_k(q_i, q_j) ≤ 0, which represents a formation constraint. The existence of an edge between R_i and R_j requires that such inequalities be verified. Thus, the edge between R_i and R_j is removed from the graph whenever at least one constraint g_k(q_i, q_j) in G_ij is strictly greater than zero.
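One possible (hypothetical) data structure for the graph (R, E, G) and the edge-maintenance test described above could look as follows; the class and function names are illustrative.

```python
from typing import Callable, Dict, List, Tuple

# A constraint function g_k(q_i, q_j); the edge e_ij holds only while every
# g_k in G_ij satisfies g_k(q_i, q_j) <= 0.
ConstraintFn = Callable[[tuple, tuple], float]

class ConstraintGraph:
    """The tuple (R, E, G): robots as vertices, edges carrying constraint sets G_ij."""

    def __init__(self, robots: List[str]):
        self.robots = robots
        self.constraints: Dict[Tuple[str, str], List[ConstraintFn]] = {}

    def add_edge(self, i: str, j: str, g_list: List[ConstraintFn]) -> None:
        self.constraints[(i, j)] = g_list

    def edge_holds(self, i: str, j: str, q_i: tuple, q_j: tuple) -> bool:
        # The edge is removed as soon as at least one g_k is strictly positive.
        return all(g(q_i, q_j) <= 0.0 for g in self.constraints[(i, j)])

# Example: a single distance constraint (stay within 1.5 m) between R1 and R2.
graph = ConstraintGraph(["R1", "R2"])
graph.add_edge("R1", "R2",
               [lambda qi, qj: ((qi[0] - qj[0])**2 + (qi[1] - qj[1])**2) ** 0.5 - 1.5])
print(graph.edge_holds("R1", "R2", (0.0, 0.0), (1.0, 0.0)))  # True
```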
The set of functions G_ij can be seen as boundaries of configuration spaces. If the g_k(q_i, q_j) ∈ G_ij, 1 ≤ k ≤ m, are linear functions and the robots can be represented by their positions (q_i = (x_i, y_i)), the set G_ij is a representation of the polygonal region in the configuration space, γ_ij, defined by the intersection of m half-planes:

γ_ij = H_1 ∩ H_2 ∩ · · · ∩ H_m,

where H_k = {q_i | g_k(q_i, q_j) ≤ 0} is the half-plane defined by the k-th constraint.
In order to better understand the physical meaning of the constraint set G_ij, suppose R_j is static with fixed position q_j = (x_j, y_j). In this case each g_k(q_i, q_j) can be written as g_k(q_i), a function of q_i only. Since g_k(q_i) ≤ 0 describes a condition under which the edge e_ij is maintained, we can observe that in order to preserve this edge, q_i must belong to the set, γ_ij, represented by G_ij. In this case g_k(q_i) ≤ 0, 1 ≤ k ≤ m, is always satisfied.
As an example, consider a group of robots communicating over a wireless ad-hoc network. The only task the robots need to perform is to drive toward their goals while maintaining communication with the others. The group is represented by its communication graph G_c. The existence of a communication link between R_i and R_j is represented by the edge e_ij and a single function g(q_i, q_j), which is directly related to the maximum range of their antennas. The communication link between the robots is maintained only if the constraint g(q_i, q_j) ≤ 0 is satisfied or, in other words, if R_i is within the communication range of R_j and vice-versa. If we consider identical robots and omnidirectional antennas, we can restrict our attention to undirected graphs and represent the constraint function g(q_i, q_j) by a circle centered at q_j with radius r, the communication range. Because we consider an undirected graph, g(q_i, q_j) and g(q_j, q_i) have the same equation, and therefore the second can be thought of as another circle centered at q_i with radius r. In this case G_ij and G_ji have only one element each, which is a nonlinear function. The two sets define the regions γ_ij and γ_ji for q_i and q_j, respectively, where communication between R_i and R_j is guaranteed.
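Under these assumptions, the single constraint function for a communication edge might be written as below; the numeric range is an arbitrary example value.

```python
from math import hypot

COMM_RANGE = 5.0  # illustrative maximum antenna range, in meters

def g_comm(q_i, q_j, r=COMM_RANGE):
    """g(q_i, q_j) <= 0 iff q_i lies inside the disc of radius r centered at q_j."""
    return hypot(q_i[0] - q_j[0], q_i[1] - q_j[1]) - r

# For identical robots g(q_i, q_j) == g(q_j, q_i), so the edge can be treated
# as undirected, as argued in the text.
print(g_comm((0.0, 0.0), (3.0, 4.0)))  # 0.0: exactly on the communication boundary
```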
The previous formulation defines a configuration space for a robot R_i as a function of one of its neighbors, R_j, in which the formation constraints are satisfied. The intersection of the configuration spaces induced by all neighbors of R_i defines the region Γ_i for q_i in which all neighborhood relationships are maintained:

Γ_i = ∩_{j : e_ij ∈ E} γ_ij .    (1)
Analogously to the problem in Section 4, one can see that the region in the configuration space represented by Γ_i, and defined by the constraints in G, directly determines the valid configuration space for R_i as the intersection of Γ_i with the free configuration space F_i.
A specific constraint g_k(q_i, q_j) ≤ 0 is active when g_k(q_i, q_j) = 0. A constraint activation therefore means that q_i is on the boundary of Γ_i and, assuming that g_k(q_i, q_j) is identical to g_k(q_j, q_i), that q_j is on the boundary of Γ_j. Thus, regardless of the number of constraints and the number of neighbors of each robot, when one constraint is active for robot R_i, an identical constraint is active for one of its neighbors.
Our objective is to control the robots so as to maintain q_i inside Γ_i. Hence, when a given constraint is active, our control laws must act in order to prevent q_i from moving outside Γ_i. Thus, to account for sensing and actuation dynamics and uncertainty, instead of considering the boundary of Γ_i as the limit for the activation of the controllers, it is convenient to consider a constraint to be active whenever g_k(q_i, q_j) = δ, where δ is a small negative number.
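A small sketch of this early-activation test, with δ chosen arbitrarily for illustration:

```python
DELTA = -0.1  # small negative margin absorbing sensing/actuation uncertainty

def constraint_active(g_value: float, delta: float = DELTA) -> bool:
    """Treat a constraint as active once g_k(q_i, q_j) reaches delta (< 0),
    i.e. slightly before q_i actually hits the boundary of Gamma_i."""
    return g_value >= delta

# With g = -0.05 the controller already reacts, even though the hard
# boundary g = 0 has not been crossed yet.
print(constraint_active(-0.05), constraint_active(-0.5))  # True False
```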
In this section we present two examples of formation constraints. The first one involves constraints related to inter-robot communication and is quite straightforward. The second example, which is related to manipulation, is a little more involved.
In this example we consider a constrained go-to-goal task. In this task, besides going to independent goals, the robots must maintain communication with their teammates. Notice that, for most communication algorithms, nodes in a communication network can be seen as routers that provide communication between two other nodes, even if those nodes are not neighbors. In this way, considering that each robot is a communication node, in order to guarantee communication in the group, the neighborhood relationships need to be specified by a connected graph. We will consider that a generic robot R_j induces two constraints on R_i. First, we have a hard communication constraint given by g(q_i, q_j) ≤ δ_1, beyond which the connectivity between R_j and R_i is broken. While δ_1 could be taken to be zero, in order to be robust to sensing errors it is advisable to set it to a small negative value. The second constraint is a collision constraint g(q_i, q_j) ≥ δ_2. Observe that δ_2 < δ_1 < 0. In general, g(q_i, q_j) could be any convex differentiable function, but if we assume circular robots with omnidirectional antennas, all elements of G can be represented by circles. Then,

g(q_i, q_j) = ||q_i − q_j|| − r ,

where r is the communication range.
Therefore, each γ_ij is a disc with a circular hole; the disc and hole radii are specified by δ_1 and δ_2, respectively. Figure 1 depicts the constraints on R_i induced by R_j. Each neighbor of R_i induces a similar γ_ij. The intersection of these sets determines Γ_i.

Figure 1: In order to maintain communication and avoid collisions, each robot induces two circular constraints on its neighbors. The shaded area, γ_ij, represents a "safe" region for R_i induced by R_j.
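Putting the two constraints together, a membership test for the annular region γ_ij and for Γ_i could be sketched as follows; the range and margin values are illustrative assumptions.

```python
from math import hypot

R_COMM = 5.0    # illustrative communication range
DELTA_1 = -0.2  # hard communication margin (outer disc of radius R_COMM + DELTA_1)
DELTA_2 = -4.0  # collision margin (inner hole of radius R_COMM + DELTA_2)

def g(q_i, q_j, r=R_COMM):
    return hypot(q_i[0] - q_j[0], q_i[1] - q_j[1]) - r

def in_gamma_ij(q_i, q_j) -> bool:
    """q_i lies in the 'safe' annulus induced by neighbor R_j: close enough to
    keep the communication link, far enough to avoid a collision."""
    return DELTA_2 <= g(q_i, q_j) <= DELTA_1

def in_Gamma_i(q_i, neighbor_positions) -> bool:
    """Gamma_i is the intersection of the gamma_ij over all neighbors of R_i."""
    return all(in_gamma_ij(q_i, q_j) for q_j in neighbor_positions)

print(in_Gamma_i((0.0, 0.0), [(3.0, 0.0), (0.0, 2.5)]))  # True for these positions
```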
In this example we consider a manipulation task. The idea is that a group of robots surrounds and "cages" an object and then transports it while moving toward a goal position. This task can be mapped to our coordinated motion planning problem because, for each robot, the task can be seen as the individual problem of moving to a specified goal while maintaining a minimal distance to its neighbors in such a way that the object cannot escape. Neighborhood relationships, in this case, are specified by a graph where each robot has exactly two neighbors. Depending on the number of robots and the size of the group, the minimal distance between the robots could simply be equal to the radius of the largest circle that can be inscribed in the object to be manipulated (considering a planar object). In this case each constraint g_k(q_i, q_j) ≤ 0 will also be a circle. Sometimes, however, the number of robots and their size do not allow the object to be caged using a circular constraint. In these cases, the valid configuration spaces for each robot must be defined by constraints that depend on the object's shape and orientation.

Our idea for solving this problem is to consider the object as a movable entity and the robots as obstacles for this entity. Then, we need to position the robots in such a way that the object is trapped in a closed configuration space. This idea is shown in Figure 2, where four point robots are caging a five-sided object (dashed polygon). The solid polygons are equivalent to the configuration space obstacles (Section 2) for the object; for each robot i, this space is called C^obj_i. Notice that if the C^obj_i of each robot intersects the C^obj_i of its neighbors, the object is caged. In this situation the origin of the object is always in the closure configuration space, C^cls. Thus, for robot R_i, we can define γ_ij as the region in the configuration space that represents the intersection between C^obj_i and C^obj_j:

γ_ij = C^obj_i ∩ C^obj_j .

Figure 2: Four robots caging an object. The interior (shaded gray) represents the closure configuration space, C^cls. The dashed polygon represents the object. Notice that the origin of the object's reference frame is inside C^cls, a compact set, indicating a caging condition.
Considering that R_{i−1} and R_{i+1} are the two neighbors of R_i, Γ_i is defined as the intersection of the γ_ij for j ∈ {i − 1, i + 1}:

Γ_i = γ_{i,i−1} ∩ γ_{i,i+1} .
The sets γ_ij and Γ_i are illustrated in Figure 3. As shown in this figure, the object is caged if q_i ∈ Γ_i. The idea for point robots can be easily extended to polygonal robots (Pereira et al., 2002). In both cases the constraints g_k(q_i, q_j) ≤ 0 that bound Γ_i are linear functions. On the other hand, if circular robots are considered, some constraints are nonlinear, since C^obj_i is then not a polygon.
Figure 3: The object is caged if each robot R_i is inside Γ_i. The shaded areas represent (a) γ_21 (= γ_41); and (b) Γ_1.
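The caging condition described in this example can be checked numerically once the C^obj_i regions are available as polygons; the sketch below uses the shapely package and made-up vertex coordinates purely for illustration.

```python
from shapely.geometry import Polygon

# Illustrative C^obj polygons (configuration-space obstacles induced by each
# robot on the object); in practice they depend on the object's shape and on
# each robot's position and geometry.
c_obj = [
    Polygon([(0, 0), (2, 0), (2, 2), (0, 2)]),
    Polygon([(1, 1), (3, 1), (3, 3), (1, 3)]),
    Polygon([(0, 2), (2, 2), (2, 4), (0, 4)]),
]

def object_caged(c_obj_polygons) -> bool:
    """Caging test from the text: every robot's C^obj must intersect the C^obj
    of its two neighbors in the cycle, so that the object stays inside C^cls."""
    n = len(c_obj_polygons)
    return all(c_obj_polygons[i].intersects(c_obj_polygons[(i + 1) % n])
               for i in range(n))

print(object_caged(c_obj))  # True for these made-up polygons
```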
In this section we show experimental results that are directly related to the examples presented in the previous section. Although the coordinated motion planning problem can be solved in several different ways, our solution (not discussed in this paper) is totally decentralized and is based on the on-line modification of pre-computed navigation functions. In this approach each robot's control system consists of a linear composition of reactive and deliberative behaviors. The reactive behaviors are functions of the relative positions among the robots, and the deliberative ones depend basically on the environment. This solution is discussed in (Pereira et al., 2003). It is important to mention that the same set of algorithms, both for sensing and control, is used in the two experiments shown in this section, without any modification. The inputs to our algorithms are the graph G (including the set of constraint functions G), a map of the environment, and a goal configuration for each robot.
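The per-robot control structure described above (a linear composition of a deliberative, navigation-function-like term and a reactive term driven by the formation constraints) might be sketched as follows; the specific behaviors, gains, and margin are placeholders and not the controllers of Pereira et al. (2003).

```python
import numpy as np

def deliberative_velocity(q, goal):
    """Deliberative term: a plain gradient toward the goal stands in here for the
    pre-computed navigation function of the environment."""
    return goal - q

def reactive_velocity(q, neighbor_positions, constraint_fns, margin=-0.1, eps=1e-4):
    """Reactive term, a function of relative positions only: when some constraint
    g(q, q_j) approaches zero, move along -grad_q g to push q back into Gamma_i."""
    v = np.zeros(2)
    for q_j, g in zip(neighbor_positions, constraint_fns):
        if g(q, q_j) > margin:  # the constraint is active (close to violation)
            grad = np.array([
                (g(q + np.array([eps, 0.0]), q_j) - g(q, q_j)) / eps,
                (g(q + np.array([0.0, eps]), q_j) - g(q, q_j)) / eps,
            ])
            v -= grad
    return v

def control_step(q, goal, neighbor_positions, constraint_fns, w_d=1.0, w_r=2.0):
    """Linear composition of the deliberative and reactive behaviors."""
    return (w_d * deliberative_velocity(q, goal)
            + w_r * reactive_velocity(q, neighbor_positions, constraint_fns))

# Example: one sensing constraint (stay within 1.5 m of the neighbor).
g_sense = lambda qi, qj: np.linalg.norm(qi - qj) - 1.5
q, goal = np.array([0.0, 0.0]), np.array([3.0, 0.0])
print(control_step(q, goal, [np.array([1.45, 0.0])], [g_sense]))
```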
We have validated our methodology on a team of car-like robots equipped with omnidirectional cameras as their only sensors (see Figure 4). Communication among the robots relies on IEEE 802.11b wireless networking. A calibrated overhead camera is used to localize the robots in the environment. Since this camera cannot estimate the robots' orientations, we use communication among the robots to build complete knowledge of their configurations. Communication is used essentially for the sensing algorithms, not for control or decision making.

Figure 4: Three robots in a manipulation task (left) and an image from one of the omnidirectional cameras (right).
Because of continuous wireless Ethernet availability in our lab, communication constraints do not apply in our case. However, the resolution of the omnidirectional cameras used by the robots decreases drastically with the distance to the observed objects. At 2 m, for instance, the projection of an observed robot in the image plane is only one pixel in size. Due to this shortcoming, the robots must maintain sensing constraints with respect to their neighbors in order to correctly localize themselves with respect to each other. Sensing constraints can be modelled in the same way as communication constraints, as shown in the previous section (Pereira et al., 2003). Figure 5 shows six snapshots of our first experiment. Three robots must navigate toward their independent goals while maintaining sensing constraints. In this experiment the graph G was specified such that R_1 and R_3 are neighbors of R_2 but not neighbors of each other. Figure 5(a) shows that R_1 was initialized outside the sensing region of R_2, which was set to 1.5 m. The next snapshot shows that the robots moved in order to satisfy this constraint. Figure 5(c) shows that R_2 and R_3 moved very close to each other. The activation of the avoidance constraints is then followed by a repulsion (Figure 5(d)).
Figure 5: Three robots moving toward their goals while maintaining sensing constraints with at least one other robot.
In our second experiment the robots must transport a triangular box from an initial position to a goal position. Figure 6 shows an experimental trial with three robots, R_1, R_2, and R_3, transporting the triangular box toward the goal. The graph G in this case was designed such that all robots are neighbors of each other. Data collected from the overhead camera are shown for a typical experimental run. The dashed polygons, representing C^obj_i for the rectangular robot geometry, were overlaid on the experimental data. The object can be seen to be caged in each of the three snapshots, since the sets C^obj_i of two neighboring robots always intersect.
Figure 6: The origin of the object (•) is always inside the compact set delimited by the three C^obj_i, indicating a caging condition. (a) Initial and final configurations; (b) an intermediate configuration.
We have presented a framework that considers the execution of several cooperative tasks as a constrained motion planning and control problem. The constraints are derived from simple pairwise robot relationships, which makes it possible to decentralize the problem. Our work is moving toward the objective of having a single group of cooperative robots, running a finite set of algorithms, execute an infinite number of tasks. This is possible only if each new task can be transformed or mapped to one of the tasks the robots can already execute. In this work we have shown examples of two very distinct tasks that can be solved by the same algorithm. Future work includes a classification of the tasks that can be transformed into each other and more precise definitions of such transformations. We are also interested in decentralized, optimal solutions to the coordinated motion planning problem.

Howard, A., Mataric, M. J. and Sukhatme, G. S. (2002). Mobile sensor network deployment using potential fields: A distributed, scalable solution to the area coverage problem, Proceedings of the International Symposium on Distributed Autonomous Robotic Systems, pp. 299-308.