
Dynamic 2-Connectivity with Backtracking

1998, SIAM Journal on Computing

SIAM J. COMPUT., Vol. 28, No. 1, pp. 10–26, © 1998 Society for Industrial and Applied Mathematics

DYNAMIC 2-CONNECTIVITY WITH BACKTRACKING∗

JOHANNES A. LA POUTRɆ AND JEFFERY WESTBROOK‡

Abstract. We give algorithms and data structures that maintain the 2-edge-connected and 2-vertex-connected components of a graph under insertions and deletions of edges and vertices, where deletions occur in a backtracking fashion (i.e., deletions undo the insertions in the reverse order). Our algorithms run in Θ(log n) worst-case time per operation and use Θ(n) space, where n is the number of vertices. Using our data structure we can answer queries, which ask whether vertices u and v belong to the same 2-connected component, in Θ(log n) worst-case time.

Key words. dynamic graph algorithms, backtracking

AMS subject classifications. 68Q20, 68Q25

PII. S0097539794272582

1. Introduction. Dynamic graph problems have been studied extensively in the last several years. Roughly speaking, the research has concentrated on two categories of dynamic graphs, viz., partially dynamic or incremental graphs, which grow on line by the insertion of vertices and edges, and fully dynamic graphs, which are subject to arbitrary insertion and deletion of edges and vertices. A number of different problems on incremental and fully dynamic graphs have been studied, including 2- and 3-edge connectivity, 2- and 3-vertex connectivity, spanning trees, and planarity testing [1], [5], [7], [8], [9], [10], [12], [13], [18], [19], [20], [21], [22], [26], [27], [30], [31], [33]. Deterministic algorithms for incremental 2-edge and 2-vertex connectivity running in Θ(α(m, n)) amortized time per operation, where m is the maximum number of edges and n the maximum number of vertices, are described in [18], [33]. Those algorithms require Θ(n) time per operation in the worst case. A fully dynamic deterministic algorithm for 2-edge connectivity running in O(√n) time is given by Eppstein et al. [7], and an O(√n log n) time algorithm for fully dynamic 2-vertex connectivity is given by Rauch [27]. (Again, m is the maximum number of edges and n the maximum number of vertices.) Thus, there is a substantial gap in deterministic time complexity between the incremental and fully dynamic problems.¹ A tantalizing question is whether we can obtain much better time bounds than those for the fully dynamic problems by putting restrictions on the deletions of edges. A natural and useful restriction is to limit deletions to a backtracking Undo, which removes the most recently added edge not yet removed.

∗ Received by the editors August 8, 1994; accepted for publication (in revised form) October 28, 1996; published electronically June 15, 1998. http://www.siam.org/journals/sicomp/28-1/27258.html

† Department of Computer Science, Princeton University, Princeton, NJ 08540, and Department of Computer Science, Utrecht University, 3508 TB Utrecht, The Netherlands. At Princeton University, the research was supported by a NATO Science Fellowship awarded by NWO (the Netherlands Organization for Scientific Research) and DIMACS (Center for Discrete Mathematics and Theoretical Computer Science, NSF-STC88-09648). At Utrecht University, the research of the author was made possible by a fellowship of the Royal Netherlands Academy of Sciences and Arts (KNAW). Current address: Department of Computer Science, Leiden University, P.O. Box 9512, 2300 RA Leiden, The Netherlands (han@wi.leidenuniv.nl).

‡ AT&T Labs-Research, Florham Park, NJ 07932 (westbrook@att.com). This research was done while the author was at the Department of Computer Science, Yale University, and was partially supported by National Science Foundation grant CCR-9009753.

¹ Recently randomization has been used to derive polylogarithmic time algorithms for several fully dynamic graph problems [17].
Dynamic backtracking problems appear to be an important research area for several reasons. The backtracking operation Undo is a common feature of interactive software systems. Also, backtracking search is a common search strategy in many logic and artificial intelligence applications. For example, maintaining 2-vertex-connected components (with backtracking) has been proposed as a way to improve search in Prolog [25]. Furthermore, dynamic graphs with backtracking suffice for many interactive system applications such as CAD/CAM systems and VLSI layout. Maintaining 2-vertex-connected components could potentially be used for problems in reliable network design, or for designing VLSI layouts.

Previous work on backtracking addressed the Union-Find problem, in which the standard disjoint set operations of Union and Find are augmented by the backtracking operation Deunion. (A Deunion undoes the most recent Union that has not been undone.) Union-Find with backtracking is a central problem in the implementation of unification and backtracking search in the logic programming language Prolog. Mannila and Ukkonen [23], [24] first formalized and studied this problem and proposed several algorithms, which Westbrook and Tarjan [32] subsequently analyzed: each operation can be performed in Θ(log n / log log n) amortized time. Blum [3] gave a data structure for Union-Find without backtracking that runs in Θ(log n / log log n) worst-case time per operation. As observed in [32], Blum's data structure can be adapted to handle Deunions in the same time bound. Variants and extensions of this problem are studied in [11], [14].

In [30] it is observed that backtracking graph connectivity can be solved in Θ(log n / log log n) time by a straightforward application of the backtracking Union-Find algorithm (just as incremental graph connectivity can be solved by straightforwardly applying standard Union-Find).
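To make the Deunion operation concrete, here is a minimal sketch (all names are ours, not from the cited papers) of Union-Find with backtracking: union by rank with no path compression, so each Union changes O(1) fields that can be recorded on a trail stack and popped off again. This simple variant gives O(log n) Finds and O(1) Deunions; it is weaker than the Θ(log n / log log n) structures discussed in the text, and serves only to illustrate the operation semantics.

```python
class BacktrackingUnionFind:
    """Union-Find with Deunion: union by rank, no path compression
    (compression would be hard to undo). Each Union changes O(1)
    fields, recorded on a trail stack; Deunion pops and restores them.
    Find costs O(log n) because union by rank keeps trees shallow."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n
        self.trail = []                  # one record per Union

    def find(self, x):
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            self.trail.append(None)      # redundant Union, still undoable
            return
        if self.rank[rx] > self.rank[ry]:
            rx, ry = ry, rx
        grew = self.rank[rx] == self.rank[ry]
        self.parent[rx] = ry
        if grew:
            self.rank[ry] += 1
        self.trail.append((rx, ry, grew))

    def deunion(self):                   # undo the most recent Union
        rec = self.trail.pop()
        if rec is None:
            return
        rx, ry, grew = rec
        self.parent[rx] = rx
        if grew:
            self.rank[ry] -= 1
```

Avoiding path compression is the key design choice: compressed pointers would have to be logged individually, whereas a rank-based link is a single reversible pointer change.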
Tamassia [29] gave an algorithm for a hierarchical embedding problem related to VLSI design that is essentially an algorithm with Undo operations for maintaining an embedded planar st-orientable graph (such a graph is 2-connected if one edge (s, t) is added) under a restricted set of modifications; it thus achieves a better time complexity than its fully dynamic counterpart for general, unrestricted embedded planar graphs [16], but only by a factor of log n.

In this paper, we consider maintaining the 2-edge and 2-vertex connectivity relations in dynamic graphs with backtracking, i.e., graphs subject to the modifications Insert Vertex(), which adds a new, isolated vertex to the graph; Insert Edge(u, v), which inserts a new edge between vertices u and v; and Undo, which undoes the effects of the most recent insertion not yet undone. We give algorithms and data structures that maintain a decomposition of a dynamic backtracking graph into its 2-edge and 2-vertex-connected components throughout any sequence of backtracking operations. Using our data structure we can answer Test(u, v) queries, which ask whether vertices u and v belong to the same 2-connected component, in Θ(log n) worst-case time. Our algorithms run in worst-case time Θ(log n) per operation and use Θ(n) space, where n is the current number of vertices existing in the graph. To our knowledge, the algorithms in this paper are the first nontrivial results for dynamic backtracking graph problems that yield a substantial improvement in time complexity over their fully dynamic counterparts. Our algorithms also solve the corresponding incremental problems in logarithmic worst-case time, improving the Θ(n) worst-case bounds on the algorithms given in [18], [33]. In fact, we present our results by first describing new incremental algorithms and then augmenting them to support backtracking.
For comparison, we mention several alternative approaches to solving the backtracking problem. The simplest one is to push each edge on a stack as it is added, popping the stack for each Undo. A test query is answered by copying the edges on the stack, constructing the graph, and running a standard biconnectivity algorithm [4]. Updates require Θ(1) time in the worst case, queries take Θ(m) time, and the space required is Θ(m), where m is the maximum number of edges ever in the graph.

Another possibility is to use the techniques of persistence [6]. A normal data structure is ephemeral in the sense that after an update the old version is destroyed and replaced by the updated version. Using the techniques of Driscoll et al. [6], a pointer-based data structure in which all nodes have constant bounded in-degree and out-degree can be made fully persistent. In a fully persistent data structure all versions of the data structure can be accessed, and any old version can be updated to yield a new version. The amortized time per operation of the persistent data structure is equal to the worst-case time per operation of the underlying ephemeral data structure, and the space requirement of the persistent data structure is equal to the total number of pointer changes made in all versions of the ephemeral data structure. Applying persistence to a data structure for the incremental 2-connectivity problem gives a solution to the backtracking problem. The data structures of [19], [33] do not have constant bounded in-degree, and the worst-case time per operation is Θ(n). When we replace nodes with high in-degree by balanced binary trees, persistent versions of these data structures give a backtracking algorithm that runs in Θ(n log n) amortized time per operation and requires Θ(Mn) space, where M is the total number of edge insertion operations.
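The first, stack-based baseline described above can be sketched as follows (an illustrative Python rendering; the names and representation are ours). Updates push and pop a stack in O(1) time, while each Test rebuilds the graph from the stack and computes bridges with a low-link DFS in Θ(m) time, matching the bounds stated in the text.

```python
from collections import defaultdict, deque

def _bridges(n, edges):
    """Bridge edges of an undirected multigraph, by Tarjan's low-link
    DFS; edges carry indices so parallel edges are handled correctly.
    (Recursive DFS: fine for small illustrative graphs.)"""
    adj = defaultdict(list)
    for i, (a, b) in enumerate(edges):
        adj[a].append((b, i))
        adj[b].append((a, i))
    disc, low, out, t = {}, {}, set(), [0]

    def dfs(u, pe):
        disc[u] = low[u] = t[0]
        t[0] += 1
        for v, i in adj[u]:
            if i == pe:
                continue
            if v not in disc:
                dfs(v, i)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:
                    out.add(i)
            else:
                low[u] = min(low[u], disc[v])

    for s in range(n):
        if s not in disc:
            dfs(s, -1)
    return out

class NaiveBacktracking2EdgeConn:
    """Stack baseline: O(1) updates, O(m)-time queries, O(m) space."""
    def __init__(self):
        self.edges, self.n = [], 0

    def insert_vertex(self):
        self.n += 1
        return self.n - 1

    def insert_edge(self, u, v):
        self.edges.append((u, v))

    def undo(self):          # undoes the most recent edge insertion
        self.edges.pop()

    def test(self, u, v):
        """True iff u and v are in the same 2-edge-connected component,
        i.e., still connected after deleting every bridge."""
        if u == v:
            return True
        bad = _bridges(self.n, self.edges)
        adj = defaultdict(list)
        for i, (a, b) in enumerate(self.edges):
            if i not in bad:
                adj[a].append(b)
                adj[b].append(a)
        seen, q = {u}, deque([u])
        while q:
            for y in adj[q.popleft()]:
                if y not in seen:
                    seen.add(y)
                    q.append(y)
        return v in seen
```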
Applying persistence to the incremental data structures developed in this paper gives less dismal results: Θ((log n)²) amortized time per operation and Θ(n + M log n) space. (The additional factor of log n again arises from replacing nodes with high in-degree by balanced binary trees.) In contrast, our direct solution to the backtracking problem gives Θ(log n) worst-case time and only Θ(n) space.

This paper is organized as follows. In section 3, we present a solution for 2-edge connectivity that runs in Θ(log n) time per operation and Θ(n) space. Although obtaining these bounds for 2-edge connectivity is relatively simple, obtaining O(log n) bounds for 2-vertex connectivity requires rather more sophisticated data structuring and accounting. These are presented in section 4. There we first give an intermediate and simpler solution that runs in Θ((log n)² / log log n) time per operation and then present the Θ(log n) solution.

2. Preliminaries.

2.1. Terminology. In this paper, we use the standard graph terminology of Harary [15]. Let G = (V, E) be a graph. A path is a sequence of vertices v_0, v_1, ..., v_k such that {v_i, v_{i+1}} ∈ E for 0 ≤ i < k. Vertices v_0 and v_k are the endpoints of the path; the remaining vertices are internal. Two vertices in V are connected if there exists a path between them. The connected components of G are the maximal subgraphs of mutually connected vertices. Let {u, v} be an edge of graph G whose removal disconnects the graph. Such an edge is called a bridge. The 2-edge-connected components of G are the connected components that remain after all bridges are removed. Two vertices are 2-edge connected if they belong to the same 2-edge-connected component. If u and v are 2-edge connected, then there are at least two edge-disjoint paths between them. Let u be a vertex whose removal disconnects G. Such a vertex is called a cutpoint.
Two edges are called 2-vertex connected if they lie on a common simple cycle. Thus, 2-vertex connectivity is an equivalence relation on the edge set. The 2-vertex-connected components or blocks of G are the subgraphs of G induced by the edges in an equivalence class plus their end nodes. Two vertices are 2-vertex connected if they belong to the same 2-vertex-connected component. Thus, two vertices are 2-vertex connected if and only if there are at least two vertex-disjoint paths between them, and any two 2-vertex-connected components intersect in at most one cutpoint.

Recall from [33] that for the 2-edge and the 2-vertex connectivity relations, Θ(n) 2-connected components may be merged in case of an edge insertion, and, similarly, Θ(n) new components may arise in case of an Undo operation. Thus, it is possible to construct a sequence of operations in which each operation changes the number of 2-connected components by Θ(n). This is in contrast with connected components, where only two components may be joined or separated by an insertion or Undo operation.

2.2. Dynamic trees. In [28], Sleator and Tarjan presented their dynamic tree data structure. For later reference, we briefly describe the main features of this data structure. The data structure maintains a rooted tree with costs on each edge or, alternatively, on each vertex.
It performs the following operations (among others) in Θ(log n) worst-case time: find root(u), which returns the root of the tree in which node u is contained; parent(u), which returns the parent of node u in the tree (if any); find min(u), which returns the edge of minimum cost on the path from u to the root; add cost(u, x), which adds value x to the cost of all edges on the path from u to the root; cut(u), which creates two trees from one by cutting the edge from u to its parent; link(u, v, x), which combines two trees into one by making u a child of v (it presumes that u is the root of a tree distinct from the tree containing v), where edge (u, v) has cost x; and evert(u), which reroots the tree containing u at u. If costs are associated with nodes, add cost(u, x) adds value x to the cost of all nodes on the path from u to the root, and find min(u) returns the node of minimal cost on the path from u to the root.

As observed in [30], maintaining the connectivity relation with backtracking can be performed in Θ(log n / log log n) time per operation and Θ(n) space by using a Union-Find structure with backtracking. We remark that the connectivity relation can also be maintained using dynamic trees in Θ(log n) time per operation. For each component we maintain a spanning tree in the obvious way and test whether u and v are in the same component by find root operations for u and v. We will not make the maintenance of the connectivity relation explicit in our algorithms for 2-edge and 2-vertex connectivity.

3. 2-edge connectivity. In this section, we describe algorithms and data structures for maintaining the 2-edge connectivity relation in dynamic graphs with backtracking. We first describe a data structure that performs 2-edge connectivity queries and edge insertions in Θ(log n) worst-case time, and we then extend this algorithm to handle backtracking.

Let T be a spanning tree of a graph G.
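As a semantics-only reference for the dynamic tree interface listed above, the following naive sketch (class and method names are ours) stores explicit parent pointers and edge costs and performs each operation by walking the path to the root in O(n) time; the Sleator–Tarjan structure supports the same operations in Θ(log n) worst-case time.

```python
class NaiveDynamicTree:
    """Dynamic tree interface with edge costs, implemented naively:
    every operation walks parent pointers in O(n) time (illustration
    of the semantics only, not the Θ(log n) structure)."""
    def __init__(self):
        self.parent = {}   # node -> parent node, or None for a root
        self.cost = {}     # node -> cost of the edge (node, parent)

    def make_node(self, u):
        self.parent[u] = None

    def _path(self, u):    # child endpoints of edges from u to the root
        out = []
        while self.parent[u] is not None:
            out.append(u)
            u = self.parent[u]
        return out

    def find_root(self, u):
        while self.parent[u] is not None:
            u = self.parent[u]
        return u

    def find_min(self, u):
        """Child endpoint of a minimum-cost edge on the path to the root."""
        p = self._path(u)
        return min(p, key=lambda x: self.cost[x]) if p else None

    def add_cost(self, u, x):
        for c in self._path(u):
            self.cost[c] += x

    def link(self, u, v, x):   # u must be the root of its own tree
        self.parent[u] = v
        self.cost[u] = x

    def cut(self, u):          # remove the edge from u to its parent
        self.parent[u] = None
        del self.cost[u]

    def evert(self, u):        # reroot at u by reversing parent pointers
        prev, pc = None, None
        while u is not None:
            nxt, nc = self.parent[u], self.cost.get(u)
            self.parent[u] = prev
            if prev is None:
                self.cost.pop(u, None)
            else:
                self.cost[u] = pc
            prev, pc, u = u, nc, nxt
```

Note how evert moves each edge cost to the edge's new child endpoint as the parent pointers flip; this is the invariant (cost stored at the child) that the walk-based operations rely on.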
An edge e ∈ T is covered if it lies on the fundamental cycle (with respect to T) of some nontree edge f ∈ G.

Lemma 3.1 (see [10]). Two vertices u and v are in the same 2-edge-connected component of G if and only if all edges on the path in T between u and v are covered.

Using this lemma, it is easy to solve the incremental 2-edge connectivity problem in worst-case Θ(log n) time per operation, using the variant of the dynamic tree data structure in which costs are attached to the tree edges. A spanning tree is maintained for each component of G and used for testing 2-edge connectivity. Each edge of G is classified as either a spanning tree edge, an essential nontree edge, or a nonessential nontree edge. The edge is a spanning tree edge if at the time of insertion it connected two previously unconnected components. The edge is an essential edge if at the time of insertion it reduced the number of 2-edge-connected components. Otherwise it is nonessential. At any time, let G′ be the subgraph of G consisting of all vertices of G, all spanning tree edges, and all essential nontree edges. Thus, G and G′ have the same 2-edge-connected components. G′ has at most 2(n − 1) edges, because each edge in G′ is either a spanning tree edge or an edge that reduced the number of 2-edge-connected components by 1, each of which can happen at most n − 1 times. The cost of a tree edge e will be the number of nontree edges in G′ covering e.

The operations on G defined in the introduction are implemented as follows.

Insert Vertex(): Create a new single-node tree and return a pointer to the new node.

Test(u, v): If u and v are in different components, return “no.” Otherwise perform evert(v) followed by find min(u). If the minimum value is zero, return “no”; otherwise return “yes.”

Insert Edge(u, v): If u and v are in different components, do evert(u) and perform link(u, v, 0), creating a new tree edge e with cost 0. Otherwise, compute Test(u, v).
If the result is “no,” then evert(v) and add cost(u, 1). Otherwise do nothing.

The correctness of these routines is easily seen by induction on the number of requests. The crucial observation is that the cost of an edge is exactly equal to the number of covering edges in the graph G′, and that if there is an edge in G covering edge e, then there is an edge in G′ covering e. Each operation runs in worst-case time Θ(log n), since each performs a constant number of dynamic tree operations.

So far, we have a data structure for the incremental problem. To support Undo, we utilize a backtrack stack. If an Insert Edge(u, v) or Insert Vertex() operation changes the number of components or 2-edge-connected components, then it is essential, G′ is augmented accordingly, and a new record is pushed on the backtrack stack. The record contains the type of operation performed, the endpoints of the new edge in the case of an edge insertion, the name of the new vertex in the case of a vertex insertion, and a counter initialized to zero. This counter contains the number of nonessential, not yet undone insertions performed after the one described in the record (which is an essential one) and before the essential insertion described in the next record (if any). Thus, if an Insert Edge(u, v) operation adds an edge between two vertices that are already in the same 2-edge-connected component, the counter in the top stack record is simply incremented.

To perform Undo, proceed as follows. Examine the counter in the top record on the stack. If it is greater than zero, decrement the counter and terminate. Otherwise pop the top record. If the operation stored in this record is an Insert Vertex(), then delete the appropriate vertex. If the operation is Insert Edge(u, v), then do evert(v) and add cost(u, −1). Perform find min(u).
If it returns an edge e of cost −1 (meaning e is an edge that, when inserted, connected two previously unconnected components), then cut(u) is performed and e is deleted. We obtain the following theorem.

Theorem 3.2. A sequence of Test(u, v), Insert Vertex(), Insert Edge(u, v), and Undo operations can be performed in Θ(log n) worst-case time per operation and in Θ(n) space, where n is the current number of nodes.

Proof. It is readily seen that the relation between G and G′ is maintained. To show the correctness of the backtracking procedure, it suffices to confirm that an Undo performed immediately after an insertion restores the data structure to its condition prior to the insertion. The time bound follows from [28]. The space complexity follows since the size of the stack (i.e., the number of records in it) is bounded by the size of G′.

4. 2-vertex connectivity. In this section, we describe algorithms and data structures for maintaining the 2-vertex connectivity relation in dynamic graphs with backtracking. As in the previous section, we begin by describing a data structure that performs only 2-vertex test queries and edge insertions in O((log n)² / log log n) worst-case time. This then serves as a basis for a backtracking algorithm with this time complexity. Subsequently, we present data structures and algorithms that achieve O(log n) time per operation.

As in the case of 2-edge connectivity, our approach is to maintain a spanning tree of the graph and store information with the vertices and edges of the spanning tree that can be used to answer test queries efficiently. In the case of 2-vertex connectivity, however, there is no simple covering lemma, and our algorithms and data structures are consequently more complex. Our approach to 2-vertex connectivity is based on the following lemma.

Lemma 4.1. Let T be a spanning tree of graph G. Two nodes u and v are in the same block of G if and only if all tree edges on the path P between u and v are in the same block.

Proof.
If all edges on P are in the same block, then u and v are in the same block, since if an edge is in block B then so are its endpoints. Conversely, assume u and v are in the same block B. Any simple path between u and v (such as P) must be entirely contained inside B. Otherwise, it must pass out of B through some cutpoint, and by definition of a cutpoint it cannot return into B without going through the same cutpoint, contradicting the assumption that P is simple.

To use the lemma we must find an efficient way to test 2-vertex connectivity along tree paths. We will use the dynamic tree data structure of Sleator and Tarjan with costs on nodes as a basis. By itself, however, this data structure is insufficient for our needs. We augment the data structure to solve 2-vertex connectivity with backtracking.

4.1. The dynamic tree data structure of Sleator and Tarjan. The fundamental principle behind the data structure is a partitioning of tree edges into vertex-disjoint paths, called a path decomposition. An edge within a path is called solid, while a nonpath edge is called dashed. Dynamic tree operations are performed by manipulating the path partition so as to place relevant vertices in the same path. Each path p has two end nodes head(p) and tail(p), which are the nodes on p that are farthest from and nearest to the root, respectively.

Let T be a tree rooted at r. There is a unique path decomposition of T defined by its heavy edges. Denote by s(v) the number of descendants of node v (including v itself), and denote by p(v) the parent of v. Let ⟨u, v⟩ denote a tree edge with v = p(u). Call edge ⟨u, v⟩ heavy if 2s(u) > s(v), and light otherwise. Removal of light edges leaves a collection of disjoint paths. There are O(log n) light edges on any path to the root.
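The heavy-edge rule can be made concrete as follows (an illustrative sketch; the function name and the input convention, a child-to-parent dictionary, are ours). Crossing a light edge upward at least doubles the subtree size, which is why any root-to-leaf path contains O(log n) light edges.

```python
from collections import defaultdict

def heavy_path_decomposition(parent):
    """Classify each tree edge <u, p(u)> as heavy or light. Input: a
    dict mapping each node to its parent (None for the root). Edge
    <u, p(u)> is heavy iff 2*s(u) > s(p(u)), where s(v) counts the
    descendants of v including v itself."""
    children = defaultdict(list)
    root = None
    for v, p in parent.items():
        if p is None:
            root = v
        else:
            children[p].append(v)
    size = {}

    def fill(v):               # compute s(v) bottom-up by recursion
        size[v] = 1 + sum(fill(c) for c in children[v])
        return size[v]

    fill(root)
    return {v: 'heavy' if 2 * size[v] > size[p] else 'light'
            for v, p in parent.items() if p is not None}
```

On a path-shaped tree all but the leafmost edge come out heavy, so the path survives as one solid path; on a star every edge is light, since no child holds more than half of the root's subtree.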
The solid path decomposition maintained by the Sleator–Tarjan dynamic tree data structure is exactly the heavy path decomposition, except possibly during the execution of one of the dynamic tree operations. At the conclusion of each operation, however, the correspondence between solid paths and the unique heavy path decomposition is restored.

The costs of nodes are examined and changed using only two functions: find-min-on-path and add-cost-to-path. The former operation finds the minimum cost node on a solid path, and the latter increases the cost of all nodes on a single solid path. To perform a find min(u) operation, for example, the path from u to the root r must be turned into a single solid path to which find-min-on-path is applied. After the minimum is found, the heavy path decomposition is restored.

Each solid path p is stored in a binary search tree D_p, where the leaves of the tree correspond to the nodes on the path so that symmetric order in the tree corresponds to path order from head to tail. Each internal node of D_p has a “partial” cost. The cost of the solid path node v stored at leaf l of D_p is the sum of all the partial costs stored with the internal nodes on the path from the root of D_p down to the leaf node l. Thus, the costs of all nodes on the solid path p can be changed by ∆ by adding ∆ to the partial cost of the root of D_p. By maintaining minima of subtrees in D_p, the minimum on path p can be found in time linear in the depth of D_p. Using the binary tree data structure, a path p can be split at node v to give three new paths, p_1, v, and p_2, with p_1 containing the former head of p and p_2 containing the former tail. After the split, the cost of v can be determined in O(1) time. The inverse of splitting is a concatenation, which produces a single path p consisting of p_1, followed by v, followed by p_2.

The path decomposition is manipulated by means of three functions: expose, conceal, and reverse [28].
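The partial-cost idea described above can be illustrated in isolation. The sketch below (names ours) stands in for D_p with a segment tree over the positions of one fixed solid path: every node carries a pending delta and a subtree minimum, so adding to the costs of a contiguous run of path nodes and finding their minimum both take O(log n). This illustrates only the bookkeeping; the actual structure uses search trees over paths that are also split and concatenated.

```python
class SolidPathCosts:
    """Node costs on a fixed solid path, kept with 'partial costs':
    each tree node stores a pending add (self.add) plus the minimum of
    its subtree exclusive of that pending add (self.mn). A whole-range
    add touches O(log n) nodes; a range minimum sums pending adds on
    the way down."""
    def __init__(self, costs):
        self.n = len(costs)
        self.mn = [0] * (4 * self.n)
        self.add = [0] * (4 * self.n)
        self._build(1, 0, self.n - 1, costs)

    def _build(self, i, l, r, c):
        if l == r:
            self.mn[i] = c[l]
            return
        m = (l + r) // 2
        self._build(2 * i, l, m, c)
        self._build(2 * i + 1, m + 1, r, c)
        self.mn[i] = min(self.mn[2 * i], self.mn[2 * i + 1])

    def add_range(self, ql, qr, d, i=1, l=0, r=None):
        if r is None:
            r = self.n - 1
        if qr < l or r < ql:
            return
        if ql <= l and r <= qr:          # fully covered: defer as partial cost
            self.add[i] += d
            return
        m = (l + r) // 2
        self.add_range(ql, qr, d, 2 * i, l, m)
        self.add_range(ql, qr, d, 2 * i + 1, m + 1, r)
        self.mn[i] = min(self.mn[2 * i] + self.add[2 * i],
                         self.mn[2 * i + 1] + self.add[2 * i + 1])

    def min_range(self, ql, qr, i=1, l=0, r=None):
        if r is None:
            r = self.n - 1
        if qr < l or r < ql:
            return float('inf')
        if ql <= l and r <= qr:
            return self.mn[i] + self.add[i]
        m = (l + r) // 2
        return self.add[i] + min(self.min_range(ql, qr, 2 * i, l, m),
                                 self.min_range(ql, qr, 2 * i + 1, m + 1, r))
```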
An expose creates a solid path starting at a specified node v and ending at the root. The new path may not necessarily contain only heavy edges. A conceal takes a solid path p containing the root and possibly some light edges and modifies the collection of solid paths so that every edge incident with a node of p is solid if and only if it is heavy. A reverse operation reverses the direction of tree edges in a solid path ending at the root. Thus, for example, an evert at v is implemented by an expose of v, a reverse of the resulting path, and a conceal of the path now rooted at v.

Let P be the path from node v to the root of T. The expose(v) operation turns all the dashed edges on P into solid edges and simultaneously turns all the solid edges incident to but not on P into dashed edges. Let ⟨x, y⟩ be a dashed edge on P (with y = p(x)), and let ⟨z, y⟩ be the solid edge containing a sibling z of x. If y has no heavy child, there is no such edge. The process of making ⟨x, y⟩ into a solid edge and ⟨z, y⟩ into a dashed edge is called a splice, denoted splice(x). At the time of the splice, let p_x be the solid path containing x as its tail. A splice involves splitting the solid path p containing y into p_1, y, p_2 (both p_1 and p_2 may be empty) and concatenating p_x, y, p_2. While y is a singleton, its cost can be obtained or updated in O(1) time. The expose operation performs the necessary splices in order from v to the root. The solid path initially containing v is split as necessary so that v has no descendant solid edge.

The conceal operation is the inverse of expose. Given a solid path P containing the root, with head v, a conceal turns all the light edges on P into dashed edges, and all the heavy edges incident to but not on P into solid edges. It thus restores the heavy path decomposition. Conceal processes P from its tail down. Let ⟨x, y⟩ be a solid light edge on P, and let ⟨z, y⟩ be the dashed heavy edge incident on y.
If y has no heavy child z, there is no such edge. The process of making ⟨x, y⟩ into a dashed edge and ⟨z, y⟩ into a solid edge is called a slice, denoted slice(x). Obviously, slice is the inverse of splice. At the time of the slice, let p_z be the solid path containing z as its tail. A slice involves splitting the solid path P into p_1, y, p_2 (both p_1 and p_2 may be empty) and concatenating p_z, y, p_2. The conceal then continues down path p_1, which has tail x, unless p_1 is empty. The method by which conceal determines which edges are light is quite clever and intricate. It involves keeping track of the number of descendants of dashed edges in the data structure.

4.2. A dynamic tree data structure for 2-vertex connectivity. We use the Sleator–Tarjan data structure as a basis for our data structure. Let T be a tree rooted at r. Given a path decomposition for T, we categorize the tree edges incident on a node v in three ways. Edge ⟨v, w⟩ is a parent edge if w = p(v) with respect to the current tree root. (The parent edge may be either solid or dashed.) Edge ⟨u, v⟩ is a solid child edge if it belongs to a solid path and v = p(u). All other edges are dashed child edges. A solid child edge can change to a parent edge, and vice versa, via a reverse. A solid child edge can change to a dashed child edge, and vice versa, via an expose or conceal. No single operation, however, can change an edge incident on v from being a parent edge to being a dashed child edge.

With each node v we associate an integer counter value c(v) that, roughly speaking, takes the place of the cost value maintained by the basic dynamic tree structure. The counter value differs from the cost value, however, in that it depends on both the current 2-connected components of G and the current path decomposition of the spanning tree T.
The counter values satisfy the following: (i) if v has both a solid child edge and a parent edge, then c(v) is zero if these two edges are in different blocks of G, and positive (including ∞) otherwise; (ii) if v has no solid child edge or no parent edge, then c(v) = +∞. The counter condition implies that two edges belonging to the same solid path are in the same block if and only if all intervening path vertices have value greater than zero. Solid paths are implemented in the same manner as in the standard Sleator–Tarjan data structure, and counter values can be examined or set using the find-min-on-path and add-cost-to-path functions. The symbol +∞ indicates a positive number that cannot be changed by add-cost-to-path. (Below, we explain this further.)

The counter value of node v says nothing about the relationship between dashed edges incident on v, nor between dashed edges and solid edges incident upon v. An additional data structure handles these relationships. For each vertex v we maintain a block partition, B_v, of the edges {e_1, e_2, ..., e_k} adjacent to v. Let B_v(e_i) denote the set of B_v containing edge e_i. Let y be the solid child edge, if any, of v, and let z be the parent edge, if any, of v. At all times, edges e_i and e_j belong to the same 2-vertex-connected component (block) of G if and only if at least one of the following holds:
1. B_v(e_i) = B_v(e_j).
2. B_v(e_i) = B_v(y), B_v(e_j) = B_v(z), and c(v) > 0.
3. As in 2, but with e_i and e_j exchanged.

The block partition is subject to Unions, Finds, and eventually Deunions. For each tree edge e = ⟨u, v⟩ there is a record containing the names of its endpoints and pointers to two representatives, one for B_u and one for B_v. We denote these representatives by e_{v,u}(u) and e_{u,v}(v), respectively. The two representatives of e contain back pointers to the record for e.
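Conditions 1–3 translate directly into a membership test. In the sketch below (names and representation are ours), B_v is modeled as a dictionary mapping each incident edge to a canonical set representative, y and z are the solid child edge and parent edge (or None if absent), and c_v is the counter value c(v).

```python
def same_block_at(Bv, c_v, y, z, ei, ej):
    """Conditions 1-3 from the text: edges ei and ej incident to v lie
    in the same block iff they are in the same set of B_v, or one lies
    with the solid child edge y, the other with the parent edge z, and
    c(v) > 0. B_v is modeled as a dict: edge -> set representative."""
    if Bv[ei] == Bv[ej]:                         # condition 1
        return True
    if c_v > 0 and y is not None and z is not None:
        if Bv[ei] == Bv[y] and Bv[ej] == Bv[z]:  # condition 2
            return True
        if Bv[ej] == Bv[y] and Bv[ei] == Bv[z]:  # condition 3
            return True
    return False
```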
This edge data structure is created and initialized when a new tree edge connecting two previously unconnected components is added.

To implement edge insertions, we use the standard dynamic tree operations such as evert and link. Each of these operations is in turn implemented with O(1) invocations of the primitives reverse, expose, and conceal. We will also call these primitives directly in our implementations. As these primitives are executed, the block partitions and counter values must be modified to preserve the needed invariants. If the counter values and block partitions are valid prior to a reverse operation, they remain valid after the reverse. The other two primitives change the solid path decomposition, however, and hence may cause changes in our counter values and block partitions. The expose primitive uses the function splice, which makes a light dashed edge solid and a heavy solid edge dashed. The conceal primitive uses the function slice, which makes a light solid edge dashed and a heavy dashed edge solid. For our purposes, whether the edges are heavy or light does not matter. The block partition and counter values are modified as follows.

Let y = ⟨u, v⟩ be a solid child edge of v which must be made dashed. Let z be the parent edge, if any, of v. If B_v(y) ≠ B_v(z) and c(v) > 0, then unite B_v(y) and B_v(z). In any case, set c(v) = +∞.

Let y = ⟨u, v⟩ be a dashed child edge of v which must be made solid (there is no solid child edge of v at this point). Let z be the parent edge, if any, of v. If B_v(y) ≠ B_v(z), then set c(v) = 0; otherwise set c(v) = +∞.

It is straightforward to verify by case analysis that these algorithms correctly maintain the counter values and block partitions through any sequence of queries and edge insertions.
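The two update rules can be written out directly. In this hedged sketch (names ours), the block partition is accessed through caller-supplied find and union functions, and each rule returns the new counter value c(v); the case with no parent edge falls back to condition (ii), c(v) = +∞.

```python
INF = float('inf')   # stands for the +infinity counter value

def make_dashed(c_v, find, union, y, z):
    """Solid child edge y of v becomes dashed: if B_v(y) != B_v(z) and
    c(v) > 0, unite the two sets; in any case the new c(v) is +inf."""
    if z is not None and find(y) != find(z) and c_v > 0:
        union(y, z)
    return INF

def make_solid(find, y, z):
    """Dashed child edge y of v becomes solid (v currently has no solid
    child edge): new c(v) is 0 if B_v(y) != B_v(z), else +inf; if v has
    no parent edge z at all, c(v) stays +inf."""
    if z is None:
        return INF
    return 0 if find(y) != find(z) else INF
```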
To facilitate access to the representatives for edges y and z in the above algorithms, each tree node v is augmented with pointers to the representatives in B_v of the solid edges incident to v, or the dashed edge from v to its parent, as appropriate. These pointers can be updated as part of the extended splice and slice in O(1) time.

4.3. Implementation of test and insertion operations. In this subsection we present the implementation of Test(u, v) and Insert Edge(u, v). (The implementation of Insert Vertex() is straightforward.)

Test(u, v):
1. If u and v are in different components, return "no" and terminate.
2. Save the tree root r.
3. Evert the tree at v, making v the root. (Expose v and reverse the path from v to r.)
4. Expose u.
5. Perform find-min-on-path on the resulting solid path. Return "no" if the minimum value is 0; otherwise return "yes."
6. Conceal u and evert the tree at r.

Insert Edge(u, v):
1. If u and v are in different components, and hence in different trees, evert at u and perform Link(u, v). Add two singleton sets to the block partitions B_u and B_v, each representing edge {u, v}. Then terminate.
2. If u and v are in the same tree, compute Test(u, v).
3. If the result is "no," then evert at v; expose u; increment counters on the resultant path with add-cost-to-path; conceal u.

Lemma 4.2. The above implementations of Test(u, v) and Insert Edge(u, v) are correct.

Proof. Each operation uses O(1) calls to the primitives add-cost-to-path, find-min-on-path, expose, conceal, and reverse. By the discussion in the previous section, these operations are correct. Consider the Test(u, v) operation. Step 1 is trivially correct. Steps 2–4 construct a new path decomposition in which u and v are in the same solid path and u is the root of the tree. The primitives expose, conceal, and reverse construct correct counter values and block partitions, as defined in the previous section, for the new path decomposition.
The endpoints of the path to which find-min-on-path is applied always have counter value +∞. Hence the find-min-on-path operation determines whether there is a zero on a node internal to the path, which in turn determines whether u and v are in the same 2-connected component. Step 6 restores the original path decomposition. The correctness of Insert Edge(u, v) follows from a similar argument. Step 3 increases all counter values along the path, so there is no longer a node of zero cost separating u and v. The final conceal will possibly change the path partition but will correctly construct new counter values and block partitions.

Lemma 4.3. The above implementations of Test(u, v) and Insert Edge(u, v) run in Θ((log n)² / log log n) worst-case time per operation.

Proof. The implementations of add-cost-to-path, find-min-on-path, expose, conceal, and reverse given by Sleator and Tarjan run in Θ(log n) time in the worst case. In addition, the number of splices and slices performed per expose or conceal is Θ(log n) in the worst case. Node counter values can be obtained or set to a particular value in O(1) time when paths are split and concatenated during splices and slices. (We defer explaining how to implement +∞ for the moment.) There are O(1) Union and Find operations on block partitions during a splice or slice. Unions and Finds can be done in Θ(log n / log log n) worst-case time per operation using Blum's data structure [3]. The result of this section is therefore an incremental algorithm that runs in Θ((log n)² / log log n) worst-case time per operation.

4.4. The Undo operation. The algorithms for 2-vertex connectivity are more complicated than those for 2-edge connectivity, and the Undo operation is correspondingly more complex. Correct backtracking can be guaranteed by logging every change to a pointer or data field done in the course of an operation. By unwinding the log, each change can be exactly undone and the exact previous state of the data structure restored.
This approach is space intensive, however. Our goal is to store a minimal amount of backtracking information. This means that the Undo will not restore the exact state of the data structure prior to the operation being undone. In particular, an Undo will restore the exact previous path decomposition, block partitions, counter values, and backtracking stack, but it will not necessarily restore the previous states of the data structures used to implement the solid paths and block partitions. There is no conceptual problem, since several different data structure states may represent the same solid path or block partition. To prove the correctness of our implementation of backtracking, it suffices to show that by using the information stored on the backtracking stack during an update operation, the update operation can be immediately undone. That is, no matter what the states of the data structures implementing the solid paths and block partitions, the previous path decomposition, block partitions, counter values, and backtracking stack can be restored. The correctness of the whole algorithm then follows by induction on the number of operations, since the answers to biconnectivity queries are determined only by these attributes.

The implementation of Test(u, v) given in subsection 4.3 may change the block partition, since it executes expose and conceal operations. It is most convenient to undo these changes immediately after the test operation is completed. This can be done in a brute-force fashion by logging each change to any pointer or data field and by storing the location of the field and the previous value. The old values can be restored by going backward through the log. Since Test(u, v) runs in O((log n)² / log log n) time in the worst case, the total size of the log and the time to restore the previous values is O((log n)² / log log n).
A better approach is to observe that the implementation of Test(u, v) is almost the same as the implementation of Insert Edge(u, v). Hence we may use the Undo algorithm for Insert Edge(u, v) developed below with only minor modification. Each time an essential edge is inserted, a record is pushed onto the backtracking stack. As in the algorithm for 2-edge connectivity, an edge is essential only if its insertion reduces the number of components or 2-vertex-connected components. Each record contains a counter that indicates the number of not-yet-undone nonessential edge insertions performed after the essential one described in the record. Upon insertion of a nonessential edge, the only change to the data structure is to increment the counter in the top record. Upon an Undo, the counter in the top record is examined. If it is greater than zero, it is simply decremented. This suffices to restore the state of the data structure prior to the most recent insertion. If the insertion is essential, a new record with a zero counter is pushed on the stack. The record describes the operation performed and its effect: either a decrease in the number of components or a decrease in the number of blocks. The record also describes each of the O(1) "suboperations," expose, reverse, conceal, and add-cost-to-path, that were done. A reverse can be undone by another reverse on the same path. The effect of incrementing counters by add-cost-to-path can be undone by using add-cost-to-path to add −1 to the same path.

Undoing the effects of an expose or conceal is more complicated. For each such operation, the main backtracking record contains a substack of subrecords. This substack records changes to the block partitions that occur during the operation. Suppose we expose v and we want to immediately undo the expose. The heavy path decomposition that existed prior to the expose can be restored by an immediate conceal.
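Before detailing how an expose or conceal is unwound, the essential/nonessential record scheme described above can be sketched as follows (a toy, with hypothetical names; the real records also carry the suboperation descriptions and substacks):

```python
# Sketch of the backtracking stack: essential insertions push a record
# with a zero nonessential counter; a nonessential insertion merely
# increments the counter in the top record; Undo pops or decrements.

class BacktrackStack:
    def __init__(self):
        self.stack = []

    def insert(self, essential, info=None):
        """Record an edge insertion. 'info' stands in for the record
        contents (operation performed and its effect)."""
        if essential:
            self.stack.append({"info": info, "nonessential": 0})
        else:
            self.stack[-1]["nonessential"] += 1

    def undo(self):
        """Return the record to replay only when the undone insertion
        was essential; a nonessential undo just decrements."""
        top = self.stack[-1]
        if top["nonessential"] > 0:
            top["nonessential"] -= 1
            return None
        return self.stack.pop()["info"]
```

Because a nonessential insertion changes nothing except one counter on the path (undone by add-cost-to-path with −1) and this single integer, the stack stores O(1) space per essential insertion, which is the source of the O(n) space bound proved in Theorem 4.4.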
Since the heavy path decomposition is unique, each splice occurring in the expose will be exactly undone by a slice in the conceal. The conceal algorithm processes each sliced edge in the reverse of the order in which it was spliced by the expose. Similarly, a conceal operation, which travels down a solid path turning light edges on the path into dashed edges, can be immediately undone by an expose starting at the last vertex on the solid path traversed by the conceal. Each slice done by the conceal is exactly undone by a splice in the expose. To restore the counter values and block partitions, it suffices to show how to modify splice to undo the effects of an immediately preceding slice, and vice versa. The function splice makes a heavy solid child dashed, if there is one, and then makes a light dashed child solid. The function slice makes a light solid child dashed, and then makes a heavy dashed child solid, if there is one. With respect to the counter and block partition, it does not matter whether the edges are heavy or light. Hence it suffices to show how to undo the effect of turning a dashed edge solid and how to undo the effect of turning a solid edge dashed.

The block partitions are managed by an algorithm for set union with backtracking that supports the operations Find, Union, and Deunion. As described in section 1, these operations can be implemented in Θ(log n / log log n) time, either worst case or amortized. Let y = ⟨u, v⟩ be a solid child edge of v which must be made dashed as part of a normal edge insertion. Let z be the parent edge, if any, of v. If B_v(y) ≠ B_v(z) and c(v) > 0, then unite B_v(y) and B_v(z); push a new subrecord on the substack, labeled with the name "v"; store the current value of c(v) into the subrecord; and set c(v) = +∞. In all other cases, simply set c(v) = +∞. If y is being made dashed during an Undo, simply set c(v) = +∞. Let y = ⟨u, v⟩ be a dashed child edge of v which must be made solid.
Let z be the parent edge, if any, of v. If B_v(y) ≠ B_v(z), then set c(v) = 0; otherwise set c(v) = +∞. If y is being made solid as part of an Undo, then examine the top subrecord on the substack. If it is labeled "v," then pop the subrecord, execute a Deunion in the block partition for v, and set c(v) equal to the value stored in the subrecord. By inspecting these routines, one may verify that after a normal solid-to-dashed operation, an immediate dashed-to-solid operation in Undo mode will correctly restore the previous block partition and counter value. Similarly, after a normal dashed-to-solid operation, an immediate solid-to-dashed operation in Undo mode correctly restores the previous state. Recall that to make a dashed child solid, v can have no other solid child, and hence c(v) = +∞.

Theorem 4.4. A sequence of m Test(u, v), Insert Vertex(), Insert Edge(u, v), and Undo operations can be performed in Θ((log n)² / log log n) worst-case time per operation and in Θ(n) space.

Proof. As discussed above, the correctness of the Undo algorithm follows by induction, since sufficient information is stored on the backtrack stack to allow each function that changes the path decomposition or counter values to be immediately undone. The running time of an Undo is of the same order as the running time of the operation being undone, which is Θ((log n)² / log log n) in both cases. Next we consider the space utilization. After O(n) essential edge insertions, all edges are in the same 2-vertex component. Since O(1) records are pushed on the backtrack stack only if components or blocks are joined, and O(1) subrecords are pushed on a substack only if sets in the block partitions are united, the total space required on the backtrack stack is O(n) main records plus O(n) subrecords in total. Obviously, Ω(n) space is required just to store the vertices. We use +∞ to ease designing and analyzing the algorithm.
It ensures that the endpoints of a path always have positive counter value and so cannot interfere with the operation find-min-on-path. If v is internal to a solid path, and both incident solid edges belong to the same set of the block partition, then c(v) is also +∞. This is an easy way to ensure that no amount of counter decrements performed during Undos will accidentally reduce c(v) to zero, thereby violating the counter condition. For theoretical purposes, there is no difficulty in assuming that the arithmetic operations of the target machine are augmented to handle +∞. For actual implementations, we can dispose of +∞ as follows. Say that the value of c(v) is equivalent to +∞ if it is greater than the number of not-yet-undone counter increments that have been applied to v. Since this number is at least zero, a value equivalent to +∞ is always positive. Modify the algorithms so that wherever a counter was previously set to +∞, it is now set to one more than the current size of the backtrack stack. One may show the correctness of this modified algorithm by imagining that the original and modified algorithms are run side by side on the same input and verifying by induction that a counter in the modified algorithm is equivalent to +∞ exactly when the corresponding counter in the original algorithm is equal to +∞.

4.5. A Θ(log n) algorithm for 2-vertex connectivity. We improve the worst-case running time to Θ(log n) per operation by using globally biased binary search trees [2], [28] to implement the block partitions. In the Sleator–Tarjan data structure, the solid paths are already implemented by globally biased trees. In a biased binary tree A, each node x has a specified weight w(x). Let A_v be the subtree of A rooted at v; the size of a biased-tree node v, s(v), is defined as s(v) = Σ_{u∈A_v} w(u). Define the rank of node v, r(v), as log s(v). We use r(A) to denote the rank of the root of biased binary tree A.
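The size and rank definitions can be illustrated directly (a toy computation with names of our choosing; the actual structure maintains these quantities incrementally rather than by recursive traversal):

```python
# Toy illustration of biased-tree size s(v) and rank r(v) = log2 s(v),
# matching the definitions above. 'weights' maps node -> w(node);
# 'children' maps node -> list of its children in the tree A.

import math

def subtree_size(weights, children, v):
    """s(v) = sum of w(u) over all u in the subtree A_v rooted at v."""
    return weights[v] + sum(subtree_size(weights, children, c)
                            for c in children.get(v, []))

def rank(weights, children, v):
    """r(v) = log s(v) (base 2)."""
    return math.log2(subtree_size(weights, children, v))
```

For example, a root of weight 1 with children of weights 1 and 2 has s = 4 and rank 2, so by property 1 below a node of rank r sits at depth O(r(A) − r) in the tree.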
The biased binary tree data structure has the following properties.
1. The depth of node v ∈ A is O(r(A) − r(v)).
2. For v ∈ A, the operation split(v) produces trees A_1, v, A_2 and requires time O(r(A) − r(v)).
3. The operation concat(A_1, A_2) concatenates two trees A_1 and A_2 in time O(max{r(A_1), r(A_2)} − max{r(right(A_1)), r(left(A_2))}), where left(A) and right(A) denote the leftmost and rightmost nodes, respectively, in tree A.
Note that concatenating trees A_1, v, and A_2 to give tree A requires time O(r(A) − r(v)), for this can be done by concat(concat(A_1, v), A_2), requiring time O(r(A_1) − r(v)) + O(max(max(r(A_1), r(v)) + 1, r(A_2)) − r(v)), which is O(r(A) − r(v)). Thus, splitting a tree A into A_1, v, A_2 by split(v) can be undone by concatenating A_1, v, and A_2 within the same time bound O(r(A) − r(v)), and vice versa.

Recall that each tree edge ⟨u, v⟩ has two representatives, one each in the block partitions for u and v. Denote these by e_{v,u}(u) and e_{u,v}(v), respectively. If ⟨u, v⟩ is a dashed child edge of v, define w(e_{u,v}(v)) to be the number of descendants of u in the spanning tree T. If ⟨u, v⟩ is a parent or solid edge of v, define w(e_{u,v}(v)) to be zero. These weights are already explicitly maintained in the standard Sleator–Tarjan data structure and can be accessed in O(1) time when needed by our modified algorithms.

Each block partition set B ∈ B_v is implemented by a header that contains pointers to one or two biased binary trees. The binary trees contain the dashed child edges of v, using the weights given above. Each tree root contains a back pointer to the set header. Set B may be stored in two trees if it contains both the parent and solid child edges of v, and it is stored in one tree otherwise. In the case of two trees, one of them precedes the other, as indicated by the pointers (distinguished as a left and a right pointer). We usually indicate this order by using indices with the trees, e.g., X_1, X_2.
Furthermore, each parent or solid child edge e contains a pointer to the set header of B_v(e). Hence testing whether B_v(e_1) = B_v(e_2) can be done by obtaining and comparing the corresponding set headers for e_1 and e_2, either by using a direct pointer to a set header in the case of a parent or solid child, or by traversing the root path and using the pointer from the root to the set header in the case of a dashed child. For biased binary tree A, we denote by h(A) the node in A with maximum weight. In the case of a tie, the node with largest index is taken. (We assume that nodes have unique names 1, ..., n.) The following invariant is maintained. If B ∈ B_v contains neither a parent nor a solid edge, then the (single) biased tree A for set B satisfies right(A) = h(A).

If the counter values and block partitions are valid prior to a reverse operation, they remain valid after the path reversal. This is because the block partition and counter values are modified only when a dashed child turns into a solid child. The primitives expose and conceal, however, may cause changes to counter values and block partitions via the functions splice and slice. Recall from sections 4.2 and 4.4 that it suffices to show how to turn a solid edge dashed, how to turn a dashed edge solid, and how to undo each of these actions. We first examine how the block partition and counter values change when a solid edge y = ⟨u, v⟩ is made dashed. We assume that v is not the root and that the parent edge of v is z. If v is the root, execute the following algorithm as if z exists but forms a singleton block, i.e., B_v(z) = {z}. We will use x, y, and z to denote both edges incident on v and their representatives in the block partition trees of v.
1. If y is to be made dashed as part of a normal edge insertion, then test whether B_v(y) ≠ B_v(z) and c(v) > 0. If so, then let Y be the tree of B_v(y) and Z the tree of B_v(z).
Unite sets B_v(y) and B_v(z) by creating a new set header with two pointers to Y and Z, in that order. Push a new subrecord on the substack, labeled with the name "v," and copy c(v) into the subrecord. Continue with the rest of this routine.
2. If B_v(y) = B_v(z), then let Y_1, Y_2 be the two trees of set B_v(y). Concatenate(Y_1, y, Y_2).
3. If B_v(y) ≠ B_v(z) (so c(v) = 0), then let Y be the tree for B_v(y). Let Y′ = concat(Y, y). Find y′ = h(Y′). Perform split(y′), giving Y_1, y′, Y_2, followed by concatenate(Y_2, Y_1, y′). The result is the new biased tree for B_v(y).
4. Set c(v) = +∞.
We need to augment the biased tree data structure to allow us to search for the maximum-weight node h(Y′) in step 3. This can be done in a standard fashion, storing at each internal node of the biased tree the maximum weight/index pair of any descendant. Using this information, node h(Y′) can be found in time proportional to its depth, which is O(r(Y′) − r(h(Y′))).

Next we examine how the block partition and counter values change when a dashed edge x = ⟨u, v⟩ is made solid. As before, if v is the root, execute the following algorithm as if z exists but forms a singleton block, i.e., B_v(z) = {z}.
1. Let X be the biased tree of B_v(x). Perform split(x), giving X_1, x, X_2. (Possibly X_2 = ∅.)
2. If B_v(x) ≠ B_v(z), then set c(v) = 0, execute concat(X_2, X_1), and store a pointer to the result in the header for B_v(x).
3. If B_v(x) = B_v(z), then set c(v) = +∞ and store pointers to X_1, X_2 in that order.
4. If x is being made solid as part of an Undo, then examine the top subrecord on the substack. If it is labeled "v," then pop the subrecord. Set c(v) equal to the value stored in it. Undo the Union indicated by the subrecord, creating two headers for B_v(x) and B_v(z), respectively. Make X_1 the tree for B_v(x) and X_2 the tree for B_v(z).
One may easily verify that these routines maintain valid counters and block partitions if no Undo operations are performed.
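The split/concat choreography of the two routines can be mimicked on flat lists (a rough toy model of our own; real splits and concats act on biased binary trees in logarithmic time, whereas these list operations are linear):

```python
# List-based toy model of the routines above. A "tree" is a Python
# list of (edge, weight) pairs in left-to-right order; h(A) is the
# maximum-weight pair, ties broken toward the right.

def h(tree):
    best = tree[0]
    for p in tree:
        if p[1] >= best[1]:   # >= keeps the rightmost maximum
            best = p
    return best

def solid_to_dashed(tree, y, weight):
    """Steps 2-3 of making solid edge y dashed: concat(Y, y), find
    y' = h(Y'), split at y', then concatenate(Y2, Y1, y') so the
    heaviest element ends up rightmost, as the invariant requires."""
    t = tree + [(y, weight)]          # concat(Y, y)
    i = t.index(h(t))                 # locate y' = h(Y')
    y1, y2 = t[:i], t[i + 1:]         # split(y') gives Y1, y', Y2
    return y2 + y1 + [t[i]]           # concatenate(Y2, Y1, y')

def dashed_to_solid(tree, x):
    """Steps 1-2 of making dashed edge x solid: split out x and
    close the gap with concat(X2, X1)."""
    i = next(j for j, (e, _) in enumerate(tree) if e == x)
    x1, x2 = tree[:i], tree[i + 1:]
    return x2 + x1                    # concat(X2, X1)
```

The rotation in solid_to_dashed is exactly what restores right(A) = h(A) after the now-dashed edge joins the tree; dashed_to_solid is its rough inverse on this model.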
We defer for the moment a discussion of the Undo operation, and turn to an analysis of the running time.

Lemma 4.5. The functions splice(u) and slice(u) run in time O(1 + log s(v) − log s(u)), where v = p(u) and s(u), s(v) are the numbers of descendants of u and v, respectively, with respect to the current root of the spanning tree T.

Proof. Consider the function that makes a solid child edge y dashed. Since the set headers for y and z can be accessed in O(1) time, all set equivalence tests take O(1) time. Step 1 requires O(1) time. Step 2 requires time O(r(Y′) − r(y)) for the concatenation, where Y′ is the resulting tree. Step 3 requires time O(r(Y′) − r(y)) for all the concatenation and split operations, since r(y′) ≥ r(y) by definition, and in the final concatenation y is rightmost in Y_2. Consider the function that makes a dashed child edge x solid. The split in step 1 takes time O(r(X) − r(x)). After the split, we have O(1)-time access to the set header for B_v(x). The concatenation in step 2 requires time O(r(X) − r(h(X))) = O(r(X) − r(x)). This follows because h(X) was rightmost in X by the invariant, and hence in X_2 (if X_2 ≠ ∅), and r(h(X)) ≥ r(x) by definition. Steps 3 and 4 take O(1) time. Splice and slice both perform one each of these operations. In both cases, the maximum rank of any tree in the block partition is log s(v). Hence the cost of a splice or slice is O(1 + log s(v) − r(x) − r(y)). In the case of a splice, r(x) = log s(u) by definition, and since y is a heavy child, w(y) ≥ s(v)/2. It follows that a splice takes O(1 + log s(v) − log s(u)) time. In the case of a slice, r(y) = log s(u) and w(x) ≥ s(v)/2 for the analogous reason. This implies that slice takes O(1 + log s(v) − log s(u)) time.

Lemma 4.6. Using globally biased trees, an expose or conceal operation requires Θ(log n) time.

Proof. An expose(u) operation consists of a sequence of splices along the tree path from u to the root.
Let u_1, u_2, ..., u_k be the sequence of nodes at which a splice occurs, and let v_i = p(u_i) for all i. By Lemma 4.5 the cost of the expose is Σ_{i=1}^{k} O(1 + log s(v_i) − log s(u_i)). Since s(v_i) ≤ s(u_{i+1}) for 1 ≤ i ≤ k − 1, the sum telescopes to O(k + log n). The heavy path decomposition guarantees that k = O(log n), although k = Ω(log n) in the worst case. A conceal operation is implemented by a sequence of slices going down the tree. Again the costs telescope, for a total of Θ(log n).

Now consider the Undo operation. As in section 4.4, it suffices to verify that after a normal solid-to-dashed operation, an immediate dashed-to-solid operation in Undo mode will correctly restore the previous block partition and counter value. Similarly, after a normal dashed-to-solid operation, an immediate solid-to-dashed operation in Undo mode correctly restores the previous state. Suppose a solid edge is made dashed. Either steps 2 and 4, steps 3 and 4, or steps 1, 2, and 4 of the solid-to-dashed routine are executed. Steps 2 and 4 of solid-to-dashed will be undone by steps 1 and 3 of dashed-to-solid. Steps 3 and 4 of solid-to-dashed will be undone by steps 1 and 2 of dashed-to-solid. Finally, steps 1, 2, and 4 of solid-to-dashed are undone by steps 1, 3, and 4 of dashed-to-solid. Similarly, suppose a dashed edge is made solid. Either steps 1 and 2 or steps 1 and 3 of the dashed-to-solid routine are executed. Steps 1 and 2 of dashed-to-solid are undone by steps 3 and 4 of solid-to-dashed. Steps 1 and 3 of dashed-to-solid are undone by steps 2 and 4 of solid-to-dashed.

Theorem 4.7. A sequence of m Test(u, v), Insert Vertex(), Insert Edge(u, v), and Undo operations can be performed in Θ(log n) worst-case time per operation and in Θ(n) space.

Proof. The running time follows from Lemma 4.6 and the previously established running time for the Sleator–Tarjan data structure. The space bound follows from the proof of Theorem 4.4.

5. Remarks.
The only lower bound known for backtracking problems is the Ω(log n / log log n) bound for disjoint set union with backtracking [32]. This bound applies to any class of algorithms for backtracking graph problems that keep disjoint data structures for distinct components of the graph. Our algorithms fall into that class. Hence there remains a gap of Θ(log log n) between the known upper and lower bounds for 2-connectivity with backtracking. Note that in one operation, Ω(n) 2-connected components may be joined (or unjoined), which makes the problem apparently harder than disjoint set union, where a single step can join only two sets. Other interesting open problems are backtracking algorithms for 3-connectivity and for general planarity testing. The latter has applications in, e.g., VLSI design. We anticipate that the algorithms herein provide a basis for efficient solutions of these problems.

Acknowledgments. We thank the anonymous referees for helpful comments.

REFERENCES

[1] G. Di Battista and R. Tamassia, On-line maintenance of triconnected components with SPQR-trees, Algorithmica, 15 (1996), pp. 302–318.
[2] S. Bent, D. D. Sleator, and R. E. Tarjan, Biased search trees, SIAM J. Comput., 14 (1985), pp. 545–568.
[3] N. Blum, On the single-operation worst-case time complexity of the disjoint set union problem, SIAM J. Comput., 15 (1986), pp. 1021–1024.
[4] T. Cormen, C. Leiserson, and R. Rivest, Introduction to Algorithms, McGraw-Hill, New York, 1990.
[5] G. Di Battista and R. Tamassia, On-line planarity testing, SIAM J. Comput., 25 (1996), pp. 956–997.
[6] J. Driscoll, N. Sarnak, D. D. Sleator, and R. E. Tarjan, Making data structures persistent, J. Comput. System Sci., 38 (1989), pp. 86–124.
[7] D. Eppstein, Z. Galil, G. Italiano, and A. Nissenzweig, Sparsification: A general technique for dynamic graph algorithms, in Proc. 33rd Symp. on Foundations of Computer Science, 1992.
[8] D. Eppstein, Z. Galil, G. Italiano, and T. Spencer, Separator based sparsification I: Planarity testing and minimum spanning trees, J. Comput. System Sci., 52 (1996), pp. 3–27.
[9] G. N. Frederickson, Data structures for on-line updating of minimum spanning trees, with applications, SIAM J. Comput., 14 (1985), pp. 781–798.
[10] G. N. Frederickson, Ambivalent data structures for dynamic 2-edge-connectivity and k smallest spanning trees, SIAM J. Comput., 26 (1997), pp. 484–538.
[11] Z. Galil and G. F. Italiano, Data structures and algorithms for disjoint set union problems, Comput. Surveys, 23 (1991), pp. 319–344.
[12] Z. Galil and G. F. Italiano, Fully dynamic algorithms for 2-edge connectivity, SIAM J. Comput., 21 (1992), pp. 1047–1069.
[13] Z. Galil and G. F. Italiano, Maintaining the 3-edge-connected components of a graph on-line, SIAM J. Comput., 22 (1993), pp. 11–28.
[14] A. Apostolico, G. F. Italiano, G. Gambosi, and M. Talamo, The set union problem with unlimited backtracking, SIAM J. Comput., 23 (1994), pp. 50–70.
[15] F. Harary, Graph Theory, Addison-Wesley, Reading, MA, 1972.
[16] G. F. Italiano, J. A. La Poutré, and M. H. Rauch, Fully dynamic planarity testing in planar embedded graphs, in Algorithms - ESA '93, Lecture Notes in Computer Science 726, T. Lengauer, ed., Springer-Verlag, Berlin, 1993, pp. 212–223.
[17] V. King and M. Rauch Henzinger, Randomized dynamic algorithms with polylogarithmic time per update, in Proc. 27th ACM Symp. on Theory of Computing, 1995, pp. 519–527.
[18] J. A. La Poutré, Maintenance of 2- and 3-Connected Components of Graphs, Part II: 2- and 3-Edge-Connected Components and 2-Vertex-Connected Components, Technical Report RUU-CS-90-27, Utrecht University, 1990.
[19] J. A. La Poutré, Dynamic Graph Algorithms and Data Structures, Ph.D. thesis, Utrecht University, the Netherlands, 1991.
[20] J. A. La Poutré, Maintenance of triconnected components of graphs, in Proc. Int. Colloquium on Automata, Languages, and Programming (ICALP '92), Lecture Notes in Computer Science 623, Springer-Verlag, New York, 1992, pp. 354–365.
[21] J. A. La Poutré, Alpha-algorithms for incremental planarity testing, in Proc. 26th ACM Symp. on Theory of Computing, 1994, pp. 706–715.
[22] J. A. La Poutré, J. van Leeuwen, and M. H. Overmars, Maintenance of 2- and 3-edge-connected components of graphs, Discrete Math., 114 (1993), pp. 329–359.
[23] H. Mannila and E. Ukkonen, On the complexity of unification sequences, in Proc. Third International Conference on Logic Programming, Lecture Notes in Computer Science 225, Springer-Verlag, New York, 1986, pp. 122–133.
[24] H. Mannila and E. Ukkonen, The set union problem with backtracking, in Proc. 13th International Colloquium on Automata, Languages, and Programming (ICALP '86), Lecture Notes in Computer Science 226, Springer-Verlag, New York, 1986, pp. 236–243.
[25] G. Port, private communication, 1988.
[26] M. Rauch Henzinger, Fully dynamic biconnectivity in graphs, Algorithmica, 13 (1995), pp. 503–538.
[27] M. Rauch, Improved data structures for fully dynamic biconnectivity, in Proc. 26th ACM Symp. on Theory of Computing, 1994, pp. 686–695.
[28] D. D. Sleator and R. E. Tarjan, A data structure for dynamic trees, J. Comput. System Sci., 26 (1983), pp. 362–391.
[29] R. Tamassia, On-line planar graph embedding, J. Algorithms, 21 (1996), pp. 201–239.
[30] J. Westbrook, Algorithms and Data Structures for Dynamic Graph Problems, Ph.D. thesis, Department of Computer Science, Princeton University, Princeton, NJ, October 1989.
[31] J. Westbrook, Fast incremental planarity testing, in Proc. Int. Colloquium on Automata, Languages and Programming (ICALP '92), Lecture Notes in Computer Science, Springer-Verlag, New York, 1992.
[32] J. Westbrook and R. E. Tarjan, Amortized analysis of algorithms for set union with backtracking, SIAM J. Comput., 18 (1989), pp. 1–11.
[33] J. Westbrook and R. E. Tarjan, Maintaining bridge-connected and biconnected components on-line, Algorithmica, 7 (1992), pp. 433–464.