Pathfinding Visualizer
Candidate’s Declaration
I hereby declare that the work presented in this report entitled "Pathfinding Visualizer", in partial fulfillment of the requirements for the award of the degree of Bachelor of Technology in Computer Science and Engineering/Information Technology, submitted in the Department of Computer Science & Engineering and Information Technology, Jaypee University of Information Technology, Waknaghat, is an authentic record of my own work carried out over a period from August 2021 to December 2021 under the supervision of Dr. Himanshu Jindal (Assistant Professor (SG), Computer Science and Engineering and Information Technology).
The matter embodied in the report has not been submitted for the award of any other degree or diploma.
Meenakshi Sharma, 181419
This is to certify that the above statement made by the candidate is true to the best of my
knowledge.
(Supervisor Signature)
Dr. Himanshu Jindal
Assistant Professor (SG)
Computer Science and Engineering and Information Technology
Dated: 14 May, 2022
Acknowledgements
I would like to express my sincere gratitude to my project guide “Dr. Himanshu Jindal” for
giving me the opportunity to work on this topic. It would never be possible for me to take
this project to this level without his innovative ideas and his relentless support and
encouragement.
I also thank my family, friends, and all the faculty members who gave their unfiltered suggestions and feedback and helped me face all the challenges and hurdles that came along during the project.
Table of Contents
3.2.5 D* Search
3.2.6 IDA* and JPS
3.2.7 Maze Generation Algorithms
3.3 Animations and UI
References
Appendix
List of Abbreviations
List of Figures
Figure 40: Generating a 30x20 Maze using Prim's Algorithm
Figure 41: Steps of Recursive Division
Figure 42: Recursive Maze Generation
Figure 49: Animations and their Types
Figure 62: Locust Swarm Algorithm for Path Finding
Figure 64: 3D Simulation of Path Finding Visualizer
List of Graphs
List of Tables
Abstract
Learning through graphics and visualization is among the most effective pedagogies for understanding algorithms in computer science. Theoretical concepts learnt in a visually pleasing manner tend to hold our interest, and it is through dynamic visualizations that we can see the underlying steps involved in an algorithm. This project visualizes shortest path algorithms that find the most optimal path from a source node to a destination node subject to constraints of time and cost. Experimenting with different sets and combinations of pathfinding algorithms helps us identify the most efficient applications of these algorithms, which can be useful in a variety of real-life settings such as traffic congestion management, digital mapping services, social networking applications, robotics, and game development. The functionality covers algorithms such as BFS, DFS, Dijkstra, A* Search, IDA* Search, and Jump Point Search, along with test simulations in which all the above algorithms are compared on common parameters to find the most optimal path under each environment.
Chapter 1
Introduction
1.1 Introduction
Even though the example given in this segment can also be understood as a static algorithm visualization, it is perhaps more convenient to speak about algorithmic idea visualization (AIV). Once more, there is no general method of AIV, because the underlying ideas of different algorithms in different fields have nothing in common; each idea is unique and requires its own method of representation by dynamic graphic means. Although very little is known about the productive mental process that leads to the discovery of new algorithms, we understand (based on introspection) that a researcher's visualization is often based, as the word suggests, on mental images, and AIV is just a straightforward projection of such mental images onto a computer display. Due to space limitations, we give just one example: Fortune's algorithm (Fortune, 1987) for the Voronoi diagram in the plane. There are several animations of the algorithm on the web, and the reader is invited to look at them. The Voronoi diagram is eventually drawn, but the animations give no idea of what the moving arcs mean or why and how they build the diagram. The algorithmic idea behind the method is the following: imagine the plane containing the sites is embedded as a horizontal plane in 3-dimensional space. For each site, create a circular cone with a vertical axis whose apex is the site. Observe the cone surfaces vertically from infinity (to avoid effects of perspective). The intersections of the cones project onto the site plane as the Voronoi diagram we are looking for. Moreover, if the "mountains" of the cones are swept by an inclined plane, the intersections of the plane with the visible parts of the cones appear as the arcs seen in the planar animations.
Figure 1: 3D Visualization of Bellman Ford Algorithm
1.2 Problem Statement
Figure 4: 3 Phases of Path Finding, Discretization
Figure 7: Solve Convex Partitioning Problem
Figure 10: Heuristical Improvements
Pathfinding normally refers to finding the shortest path between any two locations. Many existing algorithms are designed exactly to solve the shortest path problem, such as the Genetic and Floyd algorithms. These approaches may visit unwanted nodes that do not lie on the best path, which also increases the time taken to discover the best route to the destination. Being uninformed algorithms, they may also suggest longer paths to the destination.
Thus, given source and destination nodes we need to devise the minimal and optimal
path connecting them, considering the obstacles in between in the most efficient
manner possible.
1.3 Objectives
1.4 Methodology
Project design and implementation is recorded in seven phases. These sections cover all project steps, from data collection and processing to user output.
The 7 phases are:
1. Constructing the graph matrix.
2. Adding walls and obstacles such as bomb nodes and weight nodes.
3. Designing the JS scripts and integrating the graph algorithms.
4. Measuring and improving finder performance.
5. Improving animation themes, design, and UI.
6. Adding speed controls and associated visuals, plus different heuristics, mazes, and patterns.
7. Adding different location features, search parameters, and combinations.
After these phases, the project was evaluated on different simulations and the results were recorded.
The map world is implemented in the algorithm using a grid-based graph: a set of nodes and edges. Usually, a grid is superimposed over a map and the graph is then used to find the optimal path. The most widely used representation is the square tile grid, which can be traversed as either a 4-way or an 8-way path; both have their own advantages and disadvantages. Grids are considered simple and easy to implement and are commonly used by researchers. Other representations are the hexagonal grid, triangular grid, navigation mesh, and waypoints. The various types of map representations are discussed briefly below:
1.6.1 Tile Grids: The grid is composed of vertices or points that are connected by edges. Basically, grids uniformly divide the map world into smaller groups of regular shapes called "tiles". Movement in square tile grids can be either 4-way (no diagonal movement) or 8-way (diagonal movement allowed). The second most widely used grid representation is the hexagonal grid, which has properties similar to the square grid but takes less search time and has reduced memory complexity. Triangular grids are not popular among game developers and researchers, but some methods have been proposed to reduce their search effort and time consumption.
1.6.2 Navigation Mesh: A navigation mesh (also known as a navmesh) is a connected graph of convex polygons, where each polygon is a node in the graph. Polygons represent a walkable area, so movement in any direction is possible within a polygon. The map is pre-processed to generate the navmesh, and the path can then be found by traversing polygons (from the polygon containing the start point to the polygon containing the goal point). The benefits of using a navigation mesh are that it reduces the number of nodes in the graph, since a large walkable area can be represented as a single convex polygon; reduces the memory required to store the pre-processed map; and increases the speed of pathfinding.
1.6.3 Waypoints: A waypoint can be defined as a point along the path which can be marked manually or computed automatically. The purpose of waypoints is to minimize the path representation, as the shortest path can be pre-computed between any two points, and certain optimization techniques have been developed to compute the path using waypoints. The main advantage of waypoints appears in a static world: since the map does not change, the shortest paths between two waypoints can be pre-computed and stored, reducing the time to calculate the final path at execution time.
Figure 13: Waypoints
1.7 Algorithms
For finding a path between two nodes in a graph, a search algorithm is required, and many search algorithms have been developed for graph-based pathfinding. A pathfinding algorithm generally finds the path by expanding nodes and their neighboring nodes according to some given criteria. Pathfinding algorithms can be broadly divided into two categories: informed and uninformed pathfinding algorithms.
distance. The Euclidean distance function is given below: h(x) = sqrt((x1 − x2)² + (y1 − y2)²). In uniform cost search the next node is selected based on the cost so far, so the lowest-cost node gets selected at each step. It is complete and optimal but not efficient, as it takes a lot of time to explore nodes.
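As a concrete sketch in JavaScript (the `{x, y}` point shape and function names are illustrative assumptions, not the visualizer's actual code), the two distance heuristics discussed here can be written as:

```javascript
// Two common distance heuristics for grid pathfinding.
function manhattan(a, b) {
  // Sum of axis-aligned offsets: suited to 4-way grid movement.
  return Math.abs(a.x - b.x) + Math.abs(a.y - b.y);
}

function euclidean(a, b) {
  // Straight-line distance: suited to any-angle movement.
  return Math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2);
}

console.log(manhattan({ x: 0, y: 0 }, { x: 3, y: 4 })); // 7
console.log(euclidean({ x: 0, y: 0 }, { x: 3, y: 4 })); // 5
```

For the same pair of points the Manhattan estimate is never smaller than the Euclidean one, which is why the choice of heuristic changes how aggressively an informed search narrows its frontier.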
This report consists of 6 units. The first unit defines the shortest path finding problem and its sub-problems, and discusses different map representations and the main types of algorithms. The second unit analyses the literature findings and highlights the gaps in the research papers. The third unit describes analytical and computational model development and implementation details. The fourth unit is a comparative study of different algorithms under simulated environments, generating a set of tests to help determine the efficiency of the algorithms. The fifth unit concludes the findings and defines the future scope of the project, in addition to the real-world applications of the algorithms studied under test.
Chapter 2
Literature Review
STORING GRAPHS VIA ADJACENCY LISTS
By far the most common data structure for storing graphs is the adjacency list. An adjacency list is an array of lists, each of which contains the neighbors of one of the vertices (or the out-neighbors if the graph is directed). For an undirected graph, each edge uv is stored twice, once in the neighbor list of u and once in the neighbor list of v; for directed graphs, each edge uv is stored only once, in the neighbor list of u. For both types of graphs, the overall space required for the adjacency list is O(V + E). There are several different ways to represent these neighbor lists, but the standard implementation uses a simple linked list. The resulting data structure allows us to enumerate the (out-)neighbors of a node v in O(1 + deg(v)) time; just scan the neighbor list of v. Similarly, we can determine whether uv is an edge in O(1 + deg(u)) time by scanning the neighbor list of u. For an undirected graph, we can improve this time to O(1 + min{deg(u), deg(v)}) by traversing the neighbor lists of u and v simultaneously, stopping either when we find the edge or when we fall off the end of a list.
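A minimal JavaScript sketch of this structure (the helper names are hypothetical; arrays stand in for linked lists, which is the idiomatic choice in JS):

```javascript
// Adjacency list: one array of neighbours per vertex.
function makeGraph(vertexCount) {
  return Array.from({ length: vertexCount }, () => []);
}

function addUndirectedEdge(adj, u, v) {
  adj[u].push(v); // edge uv stored once in u's list...
  adj[v].push(u); // ...and once in v's, so total space is O(V + E)
}

const adj = makeGraph(4);
addUndirectedEdge(adj, 0, 1);
addUndirectedEdge(adj, 1, 2);

// Enumerating the neighbours of v costs O(1 + deg(v)): just scan adj[v].
console.log(adj[1]); // [0, 2]
```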
STORING GRAPHS VIA ADJACENCY MATRICES
The other standard data structure for graphs is the adjacency matrix, first proposed by Georges Brunel. The adjacency matrix of a graph G is a V × V matrix of 0s and 1s, normally represented by a two-dimensional array A[1 .. V, 1 .. V], where each entry indicates whether a particular edge is present in G. Specifically, for all vertices u and v, A[u, v] := 1 if and only if uv ∈ E, whether the graph is undirected or directed.
For undirected graphs, the adjacency matrix is always symmetric, meaning A[u, v] = A[v, u] for all vertices u and v, because uv and vu are just different names for the same edge, and the diagonal entries A[u, u] are all zeros. For directed graphs, the adjacency matrix may or may not be symmetric, and the diagonal entries may or may not be zero. Given an adjacency matrix, we can decide in Θ(1) time whether two vertices are connected by an edge just by looking at the appropriate slot in the matrix. We can also list all the neighbors of a vertex in Θ(V) time by scanning the corresponding row (or column). This running time is optimal in the worst case, but even if a vertex has few neighbors, we must still scan the entire row to find them all. Similarly, adjacency matrices require Θ(V²) space regardless of how many edges the graph has, so they are only space-efficient for very dense graphs.
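The trade-offs above can be sketched in a few lines of JavaScript (the names are illustrative, not part of the report's code):

```javascript
// Adjacency matrix: A[u][v] = 1 iff edge uv exists. Edge queries are O(1);
// listing neighbours is O(V); storage is O(V^2) regardless of edge count.
function makeMatrix(V) {
  return Array.from({ length: V }, () => new Array(V).fill(0));
}

const A = makeMatrix(3);
// An undirected edge 0-2 keeps the matrix symmetric: A[u][v] = A[v][u].
A[0][2] = 1;
A[2][0] = 1;

console.log(A[0][2] === 1);            // O(1) edge lookup
console.log(A[1].every(x => x === 0)); // vertex 1 has no neighbours
```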
pathfinding. Empirical research thoroughly examines the performance and provides experimental verification of the working of a method. The advancement of artificial intelligence in games and other fields is making the pathfinding problem more challenging, as resources are consumed by other AI operations such as graphics and player actions, leaving very little room for running the pathfinding search. There is also pressure to develop more advanced, fast, and optimal path planning searches with limited resource utilization in a minimum time frame.
2.2 Research Papers and Thesis Consulted for Path Finding Visualizer
3. Sun Yigang, Fu Jie, and Zhang Hongying, "Research on the Shortest Path Algorithm of Vehicles Dispatch in Airport Emergency Rescue", 2011 Third International Conference on Intelligent Human-Machine Systems and Cybernetics: This paper proposed a hybrid algorithm combining DFS and BFS in the area of vehicle dispatch, saving rescue time and minimizing losses in case of accidents.
4. Kairanbay, Magzhan, and Mat Jani, Hajar (2013), "A Review and Evaluations of Shortest Path Algorithms", International Journal of Scientific & Technology Research, 2, 99-104: This paper's main objective is to test Dijkstra's Algorithm, the Floyd-Warshall Algorithm, the Bellman-Ford Algorithm, and the Genetic Algorithm (GA) in solving the shortest path problem in the case of the routing problem in computer networks.
5. Masilo Mapaila, "Efficient Path Finding for Tile-Based 2D Games": Different educated guesses/heuristics used in the A* (A-Star) algorithm were explored here to target efficient pathfinding within tile-based games. Various optimization techniques were listed which made real-time pathfinding faster and less memory intensive.
Figure 16: PathFinding with Different Heuristics
7. Lee, W., and Lawrence, R. (2013), "Fast Grid-Based Path Finding for Video Games", in: Zaïane O.R., Zilles S. (eds), Advances in Artificial Intelligence, Canadian AI 2013, Lecture Notes in Computer Science, vol 7884, Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-38457-8_9: Studies grid-based pathfinding for video games and virtual agents where map sizes and agent counts change in real time. DBA* Search was evaluated against benchmark maps from the game Dragon Age and, on analysis, was about 3% more efficient than the benchmark PRA* implementation.
8. Red Blob Games, "Grid PathFinding Optimizations", https://www.redblobgames.com/pathfinding/grids/algorithms.html: Presents different optimizations of the A* Search algorithm to improve its efficiency, such as changing the map representation to a grid structure, using hexes or tiles, different traversal speed optimizations, etc.
Figure 19: Optimal Paths in Different Grids
or number for the memory usage. Thirdly, the author claims that heuristics take less time for the final pathfinding, but did not provide any data on the time taken by these heuristics for the pre-processing of the map decomposition and the distances between gateways, whereas the octile heuristic requires neither pre-calculation nor map decomposition. So, from the reader's perspective, this paper did not answer all the questions arising in the reader's mind.
Based on the tests performed, the Breadth First Search algorithm applied to determine the path from the original location to the destination location in the Cartesian field yields several alternative solutions. Thus, the Breadth First Search algorithm applied to the Cartesian field can produce both optimal solutions (shortest paths) and solutions which are non-optimal.
The tool targets pathfinding in 2D environments. At present it has limitations, and we see the work presented in the papers as a step towards the development of a complete tool. The current work focused on static and dynamic environments, with both fixed and dynamic cost settings. Most of the research in academic AI has been focused on robotics, and very little has been done towards its application in tile-based games. This study bridges that gap and, with further research, it would be possible to develop a complete tool that would be useful in academia and would certainly benefit the game industry.
Hex grids provide a better topological representation than tile or octile grids.
Moreover, for memory constrained domains that necessitate IDA*, the hex grid is the
optimal grid choice. Finally, the implementation of hex grids is made easier with tex
grids.
Grid-based path finding must minimize response time and memory usage. DBA* has
lower suboptimality than other abstraction-based approaches with a faster response
time. It represents a quality balance and integration of the best features of previous
algorithms. Future work includes defining techniques for efficiently updating pre-
computed information to reflect grid changes.
Thus, on comparing different research papers, I concluded that most papers had a poor experimental setup, as the simulation environments used to test the various algorithms were almost always simple and could not incorporate complex path scenarios. There was insufficient data collection and analysis and poor data visualization in these setups. Only one type of representation (e.g., maps/grids) was used, either prebuilt game maps or random maps, chosen for their easy availability and easy generation by random maze generation algorithms; thus, I intend to use recursive and diagonal maps/grids in my project to address this issue. For the same reason it was difficult to compare results across papers, so not enough evidence could be generated to confirm that the performance the authors claimed in their setups held in general use. The papers where the experimental setup was well planned were mostly penned by experienced researchers; the statistical methods used were mean, median, and standard deviation. Other papers were completely theory-based, using heuristics derived from mathematical proofs to supposedly improve the algorithm, but had no experimental evidence as such.
Chapter 3
System Development
The proposed approach is the A* algorithm with heuristic search, which will likely locate the shortest path in a very short amount of time and with minimal distance. The A* algorithm, a type of informed search, is widely used for finding the shortest route because the start and end points are considered in advance. A* is a refinement of the shortest path algorithm that directs the search toward the desired goal; the general purpose of a heuristic algorithm is to discover an optimal answer when time or resources are constrained, as illustrated in Fig. 20[C]. This is later compared with Dijkstra's algorithm, which is a simple and effective technique for route planning. Dijkstra's algorithm repeatedly chooses the node with the minimum cost until the goal is located, but the search is not over then: it calculates all possible paths from the start node to the goal and then chooses the best answer by comparing which path had the minimum distance.
A dynamic visualization system for algorithm animation should satisfy the following conditions:
• Animation speed - the system should be able to present a good dynamic
visualization.
• Programming effort - it should be easy to write a visualization code.
• Widespread access - it must be easy to run a visualization code without (much of)
downloading and installing software.
Of course, the first condition is the principal one: in many cases, a visualization
involves a continuous transformation of the displayed picture, and it might be
computationally very demanding to deliver at least 20-25 frames per second to
guarantee a smooth animation. A failure in this point would make the system useless.
CSS: CSS stands for Cascading Style Sheets, which describes how HTML elements are to be displayed on the screen, on pages, or in other media. It is very important as it helps with layout control. CSS helped us make the middle grid where the start and end nodes are displayed, along with the display of the walls between them.
Visual Studio Code: Visual Studio Code is a source code editor developed by Microsoft for Windows, Linux, and macOS.
Node.js: runtime and library support for JS.
JS Meter: used to benchmark visualization parameters.
Figure 21: Design of PathFinding Visualizer
Figure 22: Basic File Structure of the App
Figure 59: GENERAL SINGLE SOURCE SHORTEST PATH ALGORITHMS
In all the general single source shortest path algorithms, a common methodology is followed: initialize all distances with a large value (preferably inf), add the source node to the data structure in use, and traverse while it is not empty, extracting the front node of the data structure and visiting all of its unvisited neighbours one by one until the end node is encountered. I have utilized an adjacency-list data structure: an array of nodes, with each node storing the list of its neighbours. Array[i] is set to a unique node id, which is used to refer to the neighbours.
1. Breadth First Search: BFS explores the nodes breadth-wise to find all possible combinations.
Figure 25: BFS implementation as a Queue (FIFO)
Pseudo Code
}
// We get a path by traversing backwards from the destination node to the source node
}
// If end has no parent, continue traversing for end
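Since the pseudocode above survives only in fragments, here is a minimal runnable JavaScript sketch of the same idea (the function name `bfsPath` and the adjacency-list shape are assumptions for illustration): explore breadth-wise with a FIFO queue, record each node's parent, and then walk parents backwards from the destination to recover the path.

```javascript
function bfsPath(adj, start, end) {
  const parent = new Array(adj.length).fill(-1);
  const visited = new Array(adj.length).fill(false);
  const queue = [start]; // FIFO queue
  visited[start] = true;
  while (queue.length > 0) {
    const node = queue.shift(); // dequeue from the front
    if (node === end) break;
    for (const next of adj[node]) {
      if (!visited[next]) {
        visited[next] = true;
        parent[next] = node;
        queue.push(next);
      }
    }
  }
  // If end has no parent (and is not the start), no path exists.
  if (end !== start && parent[end] === -1) return null;
  const path = [];
  for (let v = end; v !== -1; v = parent[v]) path.unshift(v);
  return path;
}

// Edges 0-1, 1-2, 0-3, 3-2: BFS finds a path with the fewest edges.
const adj = [[1, 3], [0, 2], [1, 3], [0, 2]];
console.log(bfsPath(adj, 0, 2)); // [0, 1, 2]
```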
2. Depth First Search (DFS)
DFS can be used to generate a topological ordering and to generate mazes. However, it is not a very good algorithm if the only purpose is to do a simple shortest path search, since it fully explores each path before going back; this is also why you may find this algorithm presented in recursive forms. If we consider a tree (which is a simplified graph), the DFS will proceed as follows:
Figure 29: DFS implementation as a Stack (LIFO)
Pseudo Code
// If end has no parent, keep looking for the end
}
The recursion of the DFS algorithm stems from the fact that we don't finish checking a "parent" node until we reach a dead end and inevitably pop one of the "parent" nodes off the stack.
Figure 30: DFS Visualization
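The stack-based behaviour in Figure 29 can be sketched as a runnable JavaScript fragment (names such as `dfsPath` are hypothetical): a LIFO stack replaces BFS's queue, so the search dives down one path fully before backtracking.

```javascript
function dfsPath(adj, start, end) {
  const parent = new Array(adj.length).fill(-1);
  const visited = new Array(adj.length).fill(false);
  const stack = [start];
  while (stack.length > 0) {
    const node = stack.pop(); // LIFO: most recently pushed node first
    if (visited[node]) continue;
    visited[node] = true;
    if (node === end) break;
    for (const next of adj[node]) {
      if (!visited[next]) {
        parent[next] = node; // remember how we reached this node
        stack.push(next);
      }
    }
  }
  if (end !== start && parent[end] === -1) return null; // no path found
  const path = [];
  for (let v = end; v !== -1; v = parent[v]) path.unshift(v);
  return path;
}

// DFS finds *a* path, not necessarily the shortest one.
console.log(dfsPath([[1, 3], [0, 2], [1, 3], [0, 2]], 0, 2));
```

On the same graph as the BFS example, DFS may return the longer route through node 3, which is exactly why it is a poor choice for shortest path search.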
3. Dijkstra Algorithm
Dijkstra's algorithm finds the shortest path from a root node to every other node (until the target is reached), and is one of the most useful graph algorithms. Dijkstra's algorithm uses a priority queue (or heap), as the required operations (extract-minimum and decrease-key) match the specialty of this data structure. Here the priority is defined by D, the distance between a node and the root: higher priorities are given to nodes with lower distance D.
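The behaviour just described can be sketched as a runnable JavaScript fragment (the adjacency-list shape `adj[u] = [[v, weight], ...]` and the function name are assumptions; a linear scan stands in for a real heap to keep the sketch short):

```javascript
function dijkstra(adj, source) {
  const dist = new Array(adj.length).fill(Infinity);
  dist[source] = 0;
  const unvisited = new Set(adj.map((_, i) => i));
  while (unvisited.size > 0) {
    // Extract the unvisited node with the minimum distance D.
    let u = -1;
    for (const v of unvisited) if (u === -1 || dist[v] < dist[u]) u = v;
    unvisited.delete(u);
    if (dist[u] === Infinity) break; // remaining nodes are unreachable
    for (const [v, w] of adj[u]) {
      // Relax edge u -> v (the "decrease key" step).
      if (dist[u] + w < dist[v]) dist[v] = dist[u] + w;
    }
  }
  return dist;
}

// 0 --(4)--> 1, 0 --(1)--> 2, 2 --(2)--> 1: the cheap detour wins.
console.log(dijkstra([[[1, 4], [2, 1]], [], [[1, 2]]], 0)); // [0, 3, 1]
```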
Figure 31: Dijkstra Implementation as a Priority Queue
We can visualize it as a regular attraction queue at a stall or fair, with some people having higher priority and being served first.
Figure 32.1: Visualizing Dijkstra Algorithm
Pseudo Code
Figure 32.2: Dijkstra Code Snippet
4. A* (A-Star)
The A* (A-Star) algorithm improves on Dijkstra's by finding the shortest path more quickly. It was invented by Peter Hart, Nils Nilsson, and Bertram Raphael (three American computer scientists) and described in their paper "A Formal Basis for the Heuristic Determination of Minimum Cost Paths".
The A* pathfinding algorithm is arguably the best pathfinding algorithm when we have to find the shortest path between two nodes; A* is the golden ticket, or industry standard. Dijkstra's algorithm works well to find the shortest path, but it wastes time exploring directions that aren't promising. A* improves on this by allowing the inclusion of extra information that the algorithm can use as part of the heuristic function:
The A* algorithm uses both the actual distance from the root, g(n), and the estimated distance to the goal, h(n). The Manhattan distance is ideal for our mazes that allow 4-way movement (up, down, left, right): h(n) = |goal.x − root.x| + |goal.y − root.y|.
Here are the distance maps computed (Manhattan, Dijkstra and A*):
Pseudo Code
for (auto&& node : m_nodes) {
    node.distance = inf
    node.root_distance = inf
    // Using the Manhattan heuristic
    node.manhattan_distance = 2 * (Math.abs(end_node.x - node.x) + Math.abs(end_node.y - node.y))
}
// Set the distance to the root as zero
start_node.root_distance = 0
pq.add(start_node, 0)
// Traverse until the priority queue is empty
while (!pq.empty()) {
    // Fetch the next closest node and mark it as found
    current_node = pq.dequeue()
    current_node.discovered = true
    // Visit the unvisited neighbours
    for (auto&& neighbor : current_node.GetUnvisitedNeighbors()) {
        // Update the minimal root distance
        neighbor.root_distance = Math.min(neighbor.root_distance, current_node.root_distance + 1)
        const minimum_distance = Math.min(neighbor.distance,
            neighbor.root_distance + neighbor.manhattan_distance)
        if (minimum_distance != neighbor.distance) {
            neighbor.distance = minimum_distance
            neighbor.parent = current_node
            // Change the neighbour's priority in the queue
            if (pq.has(neighbor))
                pq.setPriority(neighbor, minimum_distance)
        }
        if (!pq.has(neighbor)) {
            pq.add(neighbor, neighbor.distance)
        }
    }
}
// Trace the path backwards from the destination to the root node
// If the end doesn't have a parent, continue traversing for the end
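A self-contained runnable version of the same idea, sketched in JavaScript (the grid encoding, function names, and the linear-scan open list are illustrative assumptions, not the visualizer's actual code):

```javascript
// A* on a 4-way grid: f(n) = g(n) + h(n), with g the cost from the start
// and h the Manhattan heuristic. Cells marked 1 are walls.
function aStar(grid, start, goal) {
  const rows = grid.length, cols = grid[0].length;
  const key = (p) => p.y * cols + p.x;
  const h = (p) => Math.abs(goal.x - p.x) + Math.abs(goal.y - p.y);
  const g = new Map([[key(start), 0]]);
  const parent = new Map();
  const open = [start];
  while (open.length > 0) {
    // Pop the open node with the lowest f = g + h (linear scan for brevity).
    let best = 0;
    for (let i = 1; i < open.length; i++) {
      if (g.get(key(open[i])) + h(open[i]) < g.get(key(open[best])) + h(open[best])) best = i;
    }
    const cur = open.splice(best, 1)[0];
    if (cur.x === goal.x && cur.y === goal.y) {
      // Trace the path backwards from the goal to the start.
      const path = [cur];
      while (parent.has(key(path[0]))) path.unshift(parent.get(key(path[0])));
      return path;
    }
    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nxt = { x: cur.x + dx, y: cur.y + dy };
      if (nxt.x < 0 || nxt.y < 0 || nxt.x >= cols || nxt.y >= rows) continue;
      if (grid[nxt.y][nxt.x] === 1) continue; // wall
      const tentative = g.get(key(cur)) + 1;
      if (!g.has(key(nxt)) || tentative < g.get(key(nxt))) {
        g.set(key(nxt), tentative); // better route to nxt found
        parent.set(key(nxt), cur);
        open.push(nxt);
      }
    }
  }
  return null; // goal unreachable
}

// 3x3 grid with a wall in the centre; the shortest path contains 5 nodes.
const grid = [
  [0, 0, 0],
  [0, 1, 0],
  [0, 0, 0],
];
const path = aStar(grid, { x: 0, y: 0 }, { x: 2, y: 2 });
console.log(path.length); // 5
```

Because the Manhattan heuristic never overestimates on a 4-way unit grid, the first time the goal is popped its path is optimal.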
Figure 36: A*Algorithm Visualization
starting node is tagged as “closed” or no next node can be found.
7) Jump Point Search Algorithm (Refer Appendix)
8) Maze Generation Algorithms
8.1 Graph theory-based mazes
8.2 Randomized depth-first search
The depth-first search algorithm of maze generation is frequently implemented using backtracking: starting from a given cell, it carves a passage to a random unvisited neighbour and backtracks whenever it reaches a dead end.
8.3 Randomized Kruskal's algorithm
This method can be described with the following routine:
1. Create a list of all walls, and create a set for each cell, each containing
just that one cell.
2. For each wall, in some random order:
1. If the cells divided by this wall belong to distinct sets:
1. Remove the current wall.
2. Join the sets of the formerly divided cells.
There are several data structures that can be used to model the sets of cells. An efficient implementation using a disjoint-set data structure can perform each union and find operation on two sets in nearly constant amortized time, so the running time of this algorithm is essentially proportional to the number of walls in the maze.
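A minimal JavaScript sketch of the disjoint-set structure used by the wall-removal routine above (class and method names are illustrative), with path compression and union by rank giving the near-constant amortized time mentioned:

```javascript
class DisjointSet {
  constructor(n) {
    this.parent = Array.from({ length: n }, (_, i) => i);
    this.rank = new Array(n).fill(0);
  }
  find(x) {
    // Path compression: point x directly at its set's representative.
    if (this.parent[x] !== x) this.parent[x] = this.find(this.parent[x]);
    return this.parent[x];
  }
  union(a, b) {
    const ra = this.find(a), rb = this.find(b);
    if (ra === rb) return false; // already connected: keep this wall
    // Union by rank: attach the shallower tree under the deeper one.
    if (this.rank[ra] < this.rank[rb]) this.parent[ra] = rb;
    else if (this.rank[ra] > this.rank[rb]) this.parent[rb] = ra;
    else { this.parent[rb] = ra; this.rank[ra]++; }
    return true; // cells were in distinct sets: remove the wall
  }
}

const ds = new DisjointSet(4);
console.log(ds.union(0, 1)); // true  (wall removed)
console.log(ds.union(1, 0)); // false (removing it would create a loop)
```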
8.4 Random Prim's algorithm
8.6 Maze via Recursive division method
Mazes can be created with recursive division, an algorithm which works as follows: begin with the maze's space with no walls, and call this a chamber. Divide the chamber with a randomly positioned wall (or multiple walls), where each wall contains a randomly positioned passage opening within it. Then recursively repeat the process on the sub-chambers until all chambers are minimum-sized. This method results in mazes with long straight walls crossing their space, making it easier to see which areas to avoid.
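The process just described can be sketched in JavaScript (the wall-list representation and the `divide` signature are assumptions for illustration, not the report's actual code): split a chamber with one wall containing a single passage, then recurse on both sides.

```javascript
function divide(walls, x, y, w, h, rng) {
  if (w < 2 || h < 2) return; // chamber is minimum-sized: stop recursing
  if (h > w) {
    // Horizontal wall at row wy with a passage opening at column px.
    const wy = y + 1 + Math.floor(rng() * (h - 1));
    const px = x + Math.floor(rng() * w);
    for (let wx = x; wx < x + w; wx++) if (wx !== px) walls.push({ x: wx, y: wy });
    divide(walls, x, y, w, wy - y, rng);              // sub-chamber above
    divide(walls, x, wy + 1, w, y + h - wy - 1, rng); // sub-chamber below
  } else {
    // Vertical wall at column wx with a passage opening at row py.
    const wx = x + 1 + Math.floor(rng() * (w - 1));
    const py = y + Math.floor(rng() * h);
    for (let wy = y; wy < y + h; wy++) if (wy !== py) walls.push({ x: wx, y: wy });
    divide(walls, x, y, wx - x, h, rng);              // sub-chamber left
    divide(walls, wx + 1, y, x + w - wx - 1, h, rng); // sub-chamber right
  }
}

const walls = [];
divide(walls, 0, 0, 8, 8, Math.random);
console.log(walls.length > 0); // true: the 8x8 space now contains walls
```

Splitting the longer dimension first is what produces the characteristic long straight walls the text mentions.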
Figure 43: Recursive Division Maze
Figure 46: Random Maze
Recursive Division Code Snippet
Simple Stairs Pattern Code Snippet
Launch Animation Function Code Snippet
Shortest Path Change Function
Get Distance Function: to measure the distance between coordinates (x1, y1) and (x2, y2)
Function to get closest Potential Neighbor Nodes
Function to test different algorithms (both weighted and unweighted) with given obstacles and animate nodes simultaneously
Chapter 4
Performance Analysis
4.1 Sample Simulations
In these stills from the simulator, small red boxes (1) signify nodes, and blue lines
show the links between the nodes, signifying the entire graph tree. Yellow boxes
(2), slightly larger, signify nodes that have been examined by the search
algorithm, and green boxes (3), slightly larger than the yellow boxes, signify
nodes that were determined to be part of the best path chosen by the algorithm.
Black boxes (not shown) are nodes that were determined to be dead ends. The readout in the bottom left corner shows how many simulations (of all specified algorithms) out of the total have been run, followed by the name of the algorithm being used in the current test. The agent is the penguin model, and the target is the bull's-eye texture. In each case, the path calculated by the algorithm is shown using a thicker line.
Simulation 1: Nodes placed at the boundary near the edges, with the destination node at the intersection between two paths. The path grid is in the form of a uniform square mesh.
Figure 51. Testing Tool in Simulation 1, Dijkstra’s algorithm
In these examples from Environment 1, all algorithms found the same path. However, in the screenshot from the run using Dijkstra's algorithm, 10 nodes are marked as examined, including the nodes that are part of the path, as opposed to 5 each in the runs using A* and D*. Dijkstra's algorithm examined more nodes than either of the other algorithms because, being uninformed, it has no heuristic to narrow the search.
Simulation 2: The path grid is a non-uniform mesh with broken linkages to make the path more complex. Nodes are placed randomly in the mesh, with the destination node at the center of the grid.
Figure 55. Testing Tool in Simulation 2, D*
We can see from Simulation 2 that A* and Dijkstra's algorithm found the same path, but Dijkstra's algorithm examined 16 nodes to find it, while A* examined 5. D* found a path that traversed fewer nodes than the other path; this path is 53.081 units long, versus 56.835 units for the other. D* calculated this path by working backwards from the target to the start, rather than the other way around.
Simulation 3: Another variant of the broken non-uniform mesh grid, with nodes spread over a large area and the destination node at the center of the grid.
Figure 57. Testing Tool in Simulation 3, Dijkstra's algorithm
Table 1: Simulation 1 Output
Table 3: Simulation 3 Output
An efficient algorithm is one that calculates the shortest path with the fewest node visitations. A basic "efficiency score", S, can be calculated for each algorithm in each environment by multiplying the average number of node visitations by the average length of a path, and then multiplying the reciprocal of this product by a constant k for readability (in this case, k = 10000). Therefore, S = k/(V*L), where S is the efficiency score, V is the average number of node visitations, and L is the average length of a path in units. A larger value indicates a more efficient algorithm. These values are specific to the environment and can only be used to compare the efficiency of different algorithms in the same environment.
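The score defined above is trivial to compute; as a sketch in JavaScript (function name and the sample averages are hypothetical, not the report's measured data):

```javascript
// Efficiency score S = k / (V * L): V is the average number of node
// visitations, L the average path length in units, k = 10000 by default.
function efficiencyScore(avgVisitations, avgPathLength, k = 10000) {
  return k / (avgVisitations * avgPathLength);
}

console.log(efficiencyScore(10, 50)); // 20
console.log(efficiencyScore(5, 50));  // 40: halving the visitations doubles S
```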
In all 6 cases , BFS,DFS were the least efficient, followed by Dijkstra as these
algorithms have no method of cutting down on the search space. As seen in the data
tables above, the average length of a path found by all algorithms was similar, but in
all environments, Dijkstra’s algorithm made far more node visitations during the path
calculation phase than either A* or D*. BFS,DFS performed slightly better than
Dijkstra on this parameter. This led to low performance scores for Dijkstra,BFS,DFS.
D* was by far the most efficient algorithm, with the fewest average node visitations per path. It had the lowest average number of nodes in a path and the shortest average path length. In more obstructed grids, however, A* performed better than D*. D* could determine that no path existed more quickly than A* only in unobstructed spaces, where it expanded less rapidly and made fewer node visitations.
In open, sparsely spaced grids, A* could determine more quickly that no path existed; D* took more time and had to check a greater number of nodes, expanding the search space more quickly.
Dijkstra, BFS and DFS were generally the slowest performers.
Table 5: Comparison of BFS and Dijkstra on Optimality, Queue type and Time
Complexity
Table 7: Comparison of Algorithm Performance
Based on the three simulations, I calculated the values of V, the average number of node visitations, and L, the average path length, to get the performance score P, which depicts the efficiency of each algorithm under different environments. In the uniform grid with no broken paths, i.e. Grid 1, D* Search had the maximum efficiency score of 20.748, followed by A* Search at 3.037, making D* approximately seven times more efficient. In the non-uniform Grid 2, A* Search was better than D* Search by approximately (18.291/7.357) 2.5 times. In Grid 3, A* Search again outperformed D* Search by (17.802/12.684) 1.4 times, but this time the difference was much smaller. Thus, D* Search performed best overall but slowed down when the grids were more obstructed or broken, as it expanded much more rapidly and made more node visitations. D* Search iterates over the possible paths between the source and destination nodes to find the most optimal route, requiring the least amount of time and distance to be travelled.
On testing the above algorithms on the metrics of path length, node visitations, node expansion and computation time, different algorithms excelled on different parameters: Theta* (an A* and D* combination) gave the best path, A* Search had the best computation time, and HPA*, although it had a high initial computational cost, became more effective in subsequent searches. IDA* performed poorly in small search spaces represented by octile maps, as it expanded too rapidly in time and distance.
Summarizing the algorithms, their time complexities, the graphs used and their applications:
Depth-first search
Breadth-first search
Primarily uses a queue data structure to traverse a tree or graph.
Dijkstra’s algorithm
A greedy algorithm, it first choses he unvisited vertices having least distance to the
source nodes and subsequently computes distance via it to each and every unvisited
neighbour node and if the distance is smaller than the one being considered it update
64
it, using a priority queue in the process.
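The update rule described above can be sketched as a minimal Python version, using the standard-library heapq as the priority queue. The small weighted graph is an illustrative assumption, not data from the project:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths on a weighted graph given as
    {vertex: [(neighbour, weight), ...]} using a priority queue."""
    dist = {source: 0}
    pq = [(0, source)]                     # (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):  # stale queue entry, skip
            continue
        for v, w in graph.get(u, []):
            nd = d + w                     # distance to v via u
            if nd < dist.get(v, float("inf")):
                dist[v] = nd               # smaller distance found: update
                heapq.heappush(pq, (nd, v))
    return dist

grid = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)],
        "C": [("D", 1)], "D": []}
print(dijkstra(grid, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

Note how C's tentative distance of 4 (direct edge) is later replaced by 3 (via B), exactly the "update if smaller" step described above.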
A* algorithm
With a concave obstacle, A* and Dijkstra's Algorithm both find very similar paths:
Figure 61: A* Search
The secret to its success is that it combines the information that Dijkstra's Algorithm uses (favouring vertices that are close to the starting point) with the information that Greedy Best-First Search uses (favouring vertices that are close to the goal). In the standard terminology used when talking about A*, g(n) represents the exact cost of the path from the starting point to any vertex n, and h(n) represents the heuristic estimated cost from vertex n to the goal. In the diagrams above, yellow (h) represents vertices far from the goal and teal (g) represents vertices far from the starting point. A* balances the two as it moves from the starting point to the goal. Each time through the main loop, it examines the vertex n that has the lowest f(n) = g(n) + h(n).
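This main loop can be sketched on a 4-connected grid using Manhattan distance as h(n); the grid layout and unit step costs below are illustrative assumptions, not the visualizer's actual data structures:

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid of 0 (free) / 1 (wall) cells.
    Expands the node with the lowest f(n) = g(n) + h(n)."""
    def h(n):  # Manhattan-distance heuristic to the goal
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    g = {start: 0}                         # exact cost from start
    parent = {start: None}
    pq = [(h(start), start)]               # ordered by f = g + h
    while pq:
        _, current = heapq.heappop(pq)
        if current == goal:                # reconstruct path via parents
            path = []
            while current is not None:
                path.append(current)
                current = parent[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[current] + 1        # unit cost per step
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    parent[(nr, nc)] = current
                    heapq.heappush(pq, (ng + h((nr, nc)), (nr, nc)))
    return None                            # no path exists

maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(maze, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

With an admissible heuristic such as Manhattan distance on a unit-cost grid, the first time the goal is popped the path is guaranteed shortest, which is why A* matches Dijkstra's path while visiting far fewer nodes.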
Swarm Algorithm
Figure 63: A swarm intelligence graph-based path finding algorithm
Chapter 5
Conclusions
5.1 Conclusions
Dijkstra's algorithm is essentially BFS with a priority queue data structure. It fails in the case of negative edge weights, for which the Bellman-Ford algorithm is used instead; nevertheless, many more algorithms were built on Dijkstra's, and it served as a base for many others.
DFS keeps traversing nodes along their depth and backtracking until it either finds a path or reaches a dead end, while Dijkstra is more like BFS except that it keeps track of weights (not all paths have equal cost) and keeps checking the shortest path not already visited until it finds the destination node. Generally, DFS is usually the fastest way to find a path, but that path is not necessarily the shortest, and it can be implemented very easily with recursion; Dijkstra's algorithm is the general way to find the shortest possible path. DFS traverses to the depth and then backtracks to find other possible paths, which may be shorter than the initial ones.
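As noted, DFS lends itself to a very short recursive implementation. A minimal sketch on an illustrative adjacency-list graph (the graph itself is an assumption for demonstration):

```python
def dfs(graph, node, goal, visited=None):
    """Recursive DFS: returns the first path found from node to goal,
    not necessarily the shortest one."""
    if visited is None:
        visited = set()
    visited.add(node)
    if node == goal:
        return [node]
    for neighbour in graph.get(node, []):
        if neighbour not in visited:
            path = dfs(graph, neighbour, goal, visited)
            if path:                  # found deeper: prepend this node
                return [node] + path
    return None                       # dead end: backtrack

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(dfs(graph, "A", "D"))  # ['A', 'B', 'D']
```

Here DFS happens to return a shortest path, but had the first branch been longer it would still have been returned; that is the optimality gap versus Dijkstra described above.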
A* and D* could almost equally quickly determine that no path existed. Dijkstra, on the other hand, uses more node visitations to give the same result, as it lacks heuristics, and thus needed more computation time, expanding the search space unnecessarily and wasting computation power on nodes which ultimately did not lead to the end node.
D* Search was the best algorithm overall in terms of performance but did not perform well in obstructed spaces, where A* Search was better. BFS, DFS and Dijkstra were the slowest performers, with DFS not even always guaranteeing the shortest path. DFS was much more efficient at finding all the possible paths from source to destination rather than the most optimal route.
5.2 Future Scope
Having worked on a 2D grid to visualize all the pathfinding algorithms in this project, I plan to extend the scope of this project to a 3D grid, wherein one can find the shortest route between two actual locations in a 3D simulation-style game. This would allow us to test these algorithms in a real-time environment and help us devise the most efficient combination considering different terrains, altitudes, and locations. A blueprint for this concept is as follows.
affected by changing heuristics. In the case of HPA*, only one cluster size (16 by 16) was used; if a different cluster size is used, the estimated results will likely vary, especially if the HPA* pre-calculation is performed offline.
Also, we did not work with dynamic environments containing moving targets or moving obstacles; this was initially discussed for inclusion but was withdrawn due to time constraints.
Another area to consider is a three-dimensional map representation instead of the two-dimensional one examined in this thesis. The maps in this thesis use only the octile grid; with other map representations the calculated results may be different.
In this project, multi-threading was not used in the algorithms. There are algorithms that could benefit greatly from parallel computation; HPA* in particular could benefit from computing its clusters in parallel. Algorithms can be run on both the CPU and the GPU, but due to prior CPU programming knowledge and a lack of GPU programming experience, this project focuses only on CPU usage. One thing that could be further investigated is running the algorithms on both the CPU and the GPU to compare how they perform.
The major present-day uses of pathfinding algorithms and techniques include:
Commercial Use
There is a small market for robots that carry goods in industrial units, office buildings, and other workplaces. This saves labour costs otherwise spent on paying people to carry goods, and frees people from monotonous, simple, repetitive tasks. There are also emerging markets such as Ola and Uber services, which require accurate real-time visualization.
Logistics and Transport Activities
The pathfinding problem encompasses multimodal networks, aiming to improve the efficiency, accuracy, and cost of an entire transport network. There are still many challenges in finding algorithmic improvements to meet real-world needs that involve multiple goals, multiple modes spanning very different networks, multiple destinations, time-based planning, traffic speed, real-time planning, and so on. In real-world use, the complexity of the economic, social, and environmental context requires search results that account for more than one factor besides distance. Approaches dealing with single-objective problems, as addressed by the shortest-path algorithms in previous sections, do not meet this requirement; finding paths under multiple conditions is thus far from easily accessible and feasible. Generally, most shortest-path algorithms can only deal with a single network under one condition, which is far from reality. Even multi-condition solutions typically integrate the conditions and treat them as a single condition per network. To date there is no solution for calculating a complete route that connects two different networks, each with a different set of conditions. For example, a travel network needs to consider the level of difficulty posed by slopes, the roughness of the road, and closed lanes during rain, while a traffic network is more focused on time, distance, and transportation costs. Currently, many commercial and government applications such as Google, Baidu, and Bing tend to offer only single-network solutions.
With improved transportation and more travel options like rail, road, and air, there is a growing need for more information to meet the diverse needs of today's commuters. To have a complete solution that combines two or more specialized networks, an algorithm needs to consider the interdependence of all the networks before a better solution to tackle all related problems can be devised.
REFERENCES
[1]. Bassat Levy, R. B. and Ben-Ari, M., ITiCSE '07, Proceedings of the 12th annual SIGCSE conference on Innovation and Technology in Computer Science Education, Dundee, Scotland. ACM Press, 2007, pp. 120-150.
[4]. Kai Li Lim, Kah Phooi Seng, Lee Seng Yeong, "Uninformed pathfinding: A new approach", Expert Systems with Applications, 2015, pp. 2722-2730.
[10]. Chan, Simon Yew Meng, et al., "An experiment on the performance of shortest path algorithm", 2016, pp. 7-12.
Computer Science, vol 2338. Springer, Berlin, Heidelberg. Retrieved from https://doi.org/10.1007/3-540-47922-8_4
[15]. Lee, W., Lawrence, R., "Fast Grid-Based Path Finding for Video Games", in: Zaïane, O.R., Zilles, S. (eds) Advances in Artificial Intelligence. Canadian AI 2013. Lecture Notes in Computer Science, vol 7884. Springer, Berlin, Heidelberg. Retrieved from https://doi.org/10.1007/978-3-642-38457-8_9
[16]. Pan, T., Pun-Cheng, S.C., "A Discussion on the Evolution of the Pathfinding Algorithms", Preprints 2020, 2020080627 (doi: 10.20944/preprints202008.0627.v1).
[17]. Saif Ulla Shariff, M Ganeshan, "A Path Finding Visualization Using A Star Algorithm and Dijkstra's Algorithm", in International Journal of Trend in Scientific Research and Development (IJTSRD), Volume 5, Issue 1, 2020.
[18]. Nishant Kumar and Nidhi Sengar, "Pathfinder Visualizer of Shortest Paths Algorithms", in International Journal for Modern Trends in Science and Technology, 2020, 6(12): 479-483.
Appendix
1. BFS: The process of searching for the solution using the Breadth-First Search algorithm is performed in the following Cartesian case:
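A minimal sketch of this process on a Cartesian grid, assuming 0 marks a free cell and 1 a wall (illustrative values, not the project's actual data structures):

```python
from collections import deque

def bfs(grid, start, goal):
    """Breadth-first search on a 4-connected Cartesian grid of
    0 (free) / 1 (wall) cells; returns a shortest path in steps."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}             # also serves as the visited set
    queue = deque([start])
    while queue:
        current = queue.popleft()      # FIFO: explore level by level
        if current == goal:            # reconstruct path via parents
            path = []
            while current is not None:
                path.append(current)
                current = parent[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = current
                queue.append((nr, nc))
    return None                        # no path exists

maze = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(bfs(maze, (0, 0), (2, 2)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
```

Because the queue processes cells in order of distance from the start, the first time the goal is dequeued the reconstructed path is guaranteed to use the fewest steps.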
2.
3. Issues in experimental design: covers papers with experiments using only one or two types of maps, three or fewer map-size variations, two or fewer obstacle-density variations, and no obstacle-distribution variation.
Issues in data collection: covers papers which collected data from three or fewer types of maps and variations, or whose data does not provide direct evidence supporting their claims, e.g. time-consumption data indirectly pointing to lower memory consumption, with no data for memory consumption itself.
Issues in data analysis: the analysis only provides mean and median results and does not provide the standard deviation or variance of the data.
Pseudo-Code of IDA*
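A minimal runnable sketch of the IDA* idea, iteratively deepening the f = g + h cost bound; the graph, costs, and heuristic below are illustrative assumptions, not the project's code:

```python
def ida_star(graph, start, goal, h):
    """IDA*: depth-first search repeated with an increasing bound on
    f = g + h. graph: {node: [(neighbour, cost), ...]}."""
    def search(path, g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                     # exceeded bound: report the overshoot
        if node == goal:
            return "FOUND"
        minimum = float("inf")
        for succ, cost in graph.get(node, []):
            if succ not in path:         # avoid cycles on the current path
                path.append(succ)
                t = search(path, g + cost, bound)
                if t == "FOUND":
                    return t
                minimum = min(minimum, t)
                path.pop()               # backtrack
        return minimum

    bound = h(start)                     # start with the heuristic estimate
    path = [start]
    while True:
        t = search(path, 0, bound)
        if t == "FOUND":
            return path
        if t == float("inf"):
            return None                  # no path exists
        bound = t                        # deepen to the next smallest f seen

graph = {"A": [("B", 1), ("C", 3)], "B": [("C", 1)], "C": []}
h = lambda n: {"A": 2, "B": 1, "C": 0}[n]
print(ida_star(graph, "A", "C", h))  # ['A', 'B', 'C']
```

The memory footprint is only the current path, which is IDA*'s advantage over A*; the cost, as noted in the comparison above, is repeated re-expansion of nodes across iterations.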
5. Jump Point Search: Working of JPS
Pseudo-Code of JPS
6.