Module VII
This module explores advanced topics in algorithms, including Network Flow, Complexity
Classes, Approximation Algorithms, and Randomized Algorithms. These areas are crucial in
theoretical computer science and practical problem-solving.
1. Network Flow
Network Flow problems involve finding the optimal way to send resources through a network
from a source to a sink while respecting capacity constraints on edges.
Max-Flow Problem
The Maximum Flow Problem seeks to maximize the flow from a source node s to a sink node t in
a flow network, where each edge has a capacity limit. The total flow cannot exceed these
capacities.
Ford-Fulkerson Algorithm
The Ford-Fulkerson method computes the maximum flow in a network by repeatedly finding
augmenting paths using a residual network, then augmenting the flow along these paths.
Steps:
1. Start with zero flow.
2. Construct a residual graph, which shows how much more flow can be pushed through each
edge.
3. Find an augmenting path (using BFS or DFS).
4. Update the flow along the path and repeat until no more augmenting paths are found.
from collections import deque

# Function to perform BFS to find an augmenting path
def bfs(residual_graph, source, sink, parent):
    visited = [False] * len(residual_graph)
    queue = deque([source])
    visited[source] = True
    while queue:
        u = queue.popleft()
        for v, capacity in enumerate(residual_graph[u]):
            if not visited[v] and capacity > 0:
                queue.append(v)
                visited[v] = True
                parent[v] = u
                if v == sink:
                    return True
    return False

# Ford-Fulkerson algorithm
def ford_fulkerson(graph, source, sink):
    residual_graph = [list(row) for row in graph]
    parent = [-1] * len(graph)
    max_flow = 0
    while bfs(residual_graph, source, sink, parent):
        path_flow = float('Inf')
        s = sink
        while s != source:
            path_flow = min(path_flow, residual_graph[parent[s]][s])
            s = parent[s]
        max_flow += path_flow
        # Update residual capacities of the edges
        v = sink
        while v != source:
            u = parent[v]
            residual_graph[u][v] -= path_flow
            residual_graph[v][u] += path_flow
            v = parent[v]
    return max_flow

# Example usage
graph = [[0, 16, 13, 0, 0, 0],
         [0, 0, 10, 12, 0, 0],
         [0, 4, 0, 0, 14, 0],
         [0, 0, 9, 0, 0, 20],
         [0, 0, 0, 7, 0, 4],
         [0, 0, 0, 0, 0, 0]]
source = 0
sink = 5
print("Maximum Flow:", ford_fulkerson(graph, source, sink))
Applications:
Network routing
Bipartite matching
Airline scheduling
2. Complexity Classes
P (Polynomial Time):
P is the class of decision problems that can be solved in polynomial time.
Examples: Sorting algorithms, shortest paths (Dijkstra's algorithm).
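As a concrete illustration of a problem in P, here is a minimal sketch of Dijkstra's algorithm using a binary heap; the small example graph and its adjacency-list layout (lists of (neighbour, weight) pairs) are assumptions chosen just for this demonstration.

```python
import heapq

# Dijkstra's algorithm runs in polynomial time, so the single-source
# shortest-path decision problem is in P.
# adj is an adjacency list: adj[u] = list of (neighbour, weight) pairs.
def dijkstra(adj, source):
    dist = [float('inf')] * len(adj)
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry, a shorter path was already found
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

adj = [[(1, 4), (2, 1)], [(3, 1)], [(1, 2), (3, 5)], []]
print(dijkstra(adj, 0))  # [0, 3, 1, 4]
```

With a binary heap this runs in O((V + E) log V) time, comfortably polynomial in the input size.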
NP (Nondeterministic Polynomial Time):
NP is the class of decision problems for which a proposed solution can be verified in
polynomial time.
NP-Hard:
A problem is NP-Hard if every problem in NP can be reduced to it in polynomial time.
NP-Hard problems are at least as hard as the hardest problems in NP.
Example: The Traveling Salesman Problem (TSP).
NP-Complete:
A problem is NP-Complete if it is both in NP and NP-Hard.
If one NP-Complete problem can be solved in polynomial time, then every problem in NP can
be solved in polynomial time.
Example: 3-SAT (satisfiability problem), Hamiltonian Cycle.
3-SAT:
3-SAT is a special case of SAT where the boolean formula is expressed as a conjunction of
clauses, each containing exactly three literals.
3-SAT is NP-Complete and frequently used in reductions to prove other problems are
NP-Complete.
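To make the definition concrete, here is a brute-force satisfiability check that tries all 2^n assignments; the clause encoding (a positive integer i for variable x_i, negative for its negation) is an assumption for this sketch, and the exponential loop is exactly why no polynomial-time algorithm is known.

```python
from itertools import product

# Each clause is a list of three literals: integer i means variable x_i,
# -i means its negation. A formula is a list of clauses (a conjunction).
def is_satisfiable(clauses, num_vars):
    # Try every truth assignment: 2^num_vars possibilities.
    for assignment in product([False, True], repeat=num_vars):
        # assignment[i-1] is the truth value of variable x_i
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 OR x2 OR x3) AND (NOT x1 OR NOT x2 OR x3)
clauses = [[1, 2, 3], [-1, -2, 3]]
print(is_satisfiable(clauses, 3))  # True
```

Verifying a single assignment takes polynomial time (which is what places 3-SAT in NP); it is the search over all assignments that is expensive.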
Graph Colouring:
The Graph Colouring Problem asks whether the vertices of a graph can be coloured using k
colours such that no two adjacent vertices share the same colour.
3-Colouring is NP-Complete.
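A small backtracking sketch of the decision problem; the adjacency-list representation and the triangle example are assumptions chosen for illustration, and the search is exponential in the worst case, as expected for an NP-Complete problem.

```python
# Backtracking check: can the graph (adjacency list) be coloured with
# k colours so that no edge joins two vertices of the same colour?
def can_color(adj, k, colors=None, v=0):
    if colors is None:
        colors = [-1] * len(adj)
    if v == len(adj):
        return True  # every vertex received a valid colour
    for c in range(k):
        # c is usable if no already-coloured neighbour has colour c
        if all(colors[u] != c for u in adj[v]):
            colors[v] = c
            if can_color(adj, k, colors, v + 1):
                return True
            colors[v] = -1  # undo and try the next colour
    return False

# A triangle needs 3 colours
triangle = [[1, 2], [0, 2], [0, 1]]
print(can_color(triangle, 2))  # False
print(can_color(triangle, 3))  # True
```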
Hamiltonian Cycle:
A Hamiltonian Cycle is a cycle in a graph that visits every vertex exactly once.
Finding a Hamiltonian Cycle is NP-Complete.
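The definition can be checked directly by brute force: fix vertex 0 as the start and try every ordering of the remaining vertices. The adjacency-matrix representation and the 4-cycle example are assumptions for this sketch; the factorial number of orderings is what makes the exact problem intractable.

```python
from itertools import permutations

# Brute-force Hamiltonian cycle check on an adjacency-matrix graph.
def has_hamiltonian_cycle(adj):
    n = len(adj)
    # Fix vertex 0 as the start to avoid counting rotations separately.
    for perm in permutations(range(1, n)):
        cycle = (0,) + perm
        # Check every consecutive pair, wrapping back to vertex 0.
        if all(adj[cycle[i]][cycle[(i + 1) % n]] for i in range(n)):
            return True
    return False

# The 4-cycle 0-1-2-3-0
adj = [[0, 1, 0, 1],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [1, 0, 1, 0]]
print(has_hamiltonian_cycle(adj))  # True
```

Note the contrast with verification: given a candidate cycle, checking it takes linear time, which is why the problem is in NP.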
3. Approximation Algorithms
Approximation Algorithms are used to find approximate solutions to NP-Hard problems where
finding the exact solution is computationally intractable.
Characteristics:
Provide solutions that are close to the optimal (usually within a guaranteed ratio).
Often used in problems like TSP, Set Cover, and Vertex Cover.
Performance:
For metric TSP (where edge weights satisfy the triangle inequality), Christofides' algorithm
is guaranteed to produce a tour within a factor of 1.5 of the optimal solution.
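As a simpler worked example of an approximation guarantee, here is a sketch of the classic 2-approximation for Vertex Cover: repeatedly pick an edge with both endpoints uncovered and add both endpoints. Since the chosen edges form a matching and any cover must use at least one endpoint of each, the result is at most twice the optimum. The example edge list is an assumption for illustration.

```python
# 2-approximation for Vertex Cover on an edge list.
def approx_vertex_cover(edges):
    cover = set()
    for u, v in edges:
        # If this edge is not yet covered, take BOTH endpoints.
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
print(approx_vertex_cover(edges))  # a cover of size at most 2 * optimum
```

For this graph the optimum cover is {0, 3} (size 2), and the algorithm returns a cover of size 4, exactly hitting the factor-2 bound.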
4. Randomized Algorithms
Randomized Algorithms make random choices during execution to simplify design or improve
performance. They can be classified into two types:
Las Vegas Algorithms: Always produce correct results but have variable runtimes depending
on randomness.
Monte Carlo Algorithms: Have a probability of producing incorrect results but have a
guaranteed runtime.
Applications:
Randomized QuickSort (better average performance).
Monte Carlo simulations (estimating probabilities).
Randomized algorithms for primality testing (e.g., Miller-Rabin test).
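The Las Vegas idea can be sketched with Randomized QuickSort: the random pivot choice makes the expected running time O(n log n) on every input, while the output is always correctly sorted. This out-of-place version is a simplification for clarity, not the usual in-place partitioning.

```python
import random

# Randomized QuickSort: a Las Vegas algorithm. The result is always
# correct; only the running time depends on the random pivot choices.
def randomized_quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)  # random pivot defeats adversarial inputs
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

data = [7, 2, 9, 4, 2, 8]
print(randomized_quicksort(data))  # [2, 2, 4, 7, 8, 9]
```

A Monte Carlo algorithm such as Miller-Rabin inverts the trade-off: its running time is fixed, but each round has a small probability of wrongly declaring a composite number prime.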
Summary
In this module, we covered:
Network Flow: Finding optimal flow in networks, with applications in scheduling, matching,
and resource allocation.
Complexity Classes: Understanding the classification of problems based on computational
difficulty (P, NP, NP-Complete, NP-Hard).
Approximation Algorithms: Providing near-optimal solutions for NP-Hard problems like TSP.
Randomized Algorithms: Using randomness to improve average-case performance or solve
problems more efficiently.
These concepts are essential for solving real-world computational problems efficiently and
understanding the limitations of algorithmic approaches.