
Commit 6eecd4a

Merge pull request #1371 from Proxihox/main
Added Simulated Annealing
2 parents a29c713 + a82cd12 commit 6eecd4a

File tree

3 files changed: +205 -0 lines changed

README.md

Lines changed: 1 addition & 0 deletions
@@ -29,6 +29,7 @@ Compiled pages are published at [https://cp-algorithms.com/](https://cp-algorith
### New articles

+ - (21 May 2025) [Simulated Annealing](https://cp-algorithms.com/num_methods/simulated_annealing.html)
  - (12 July 2024) [Manhattan distance](https://cp-algorithms.com/geometry/manhattan-distance.html)
  - (8 June 2024) [Knapsack Problem](https://cp-algorithms.com/dynamic_programming/knapsack.html)
  - (28 January 2024) [Introduction to Dynamic Programming](https://cp-algorithms.com/dynamic_programming/intro-to-dp.html)

src/navigation.md

Lines changed: 1 addition & 0 deletions
@@ -110,6 +110,7 @@ search:
  - [Binary Search](num_methods/binary_search.md)
  - [Ternary Search](num_methods/ternary_search.md)
  - [Newton's method for finding roots](num_methods/roots_newton.md)
+ - [Simulated Annealing](num_methods/simulated_annealing.md)
  - Integration
  - [Integration by Simpson's formula](num_methods/simpson-integration.md)
  - Geometry
src/num_methods/simulated_annealing.md

Lines changed: 203 additions & 0 deletions
@@ -0,0 +1,203 @@
---
tags:
  - Original
---

# Simulated Annealing

**Simulated Annealing (SA)** is a randomized algorithm that approximates the global optimum of a function. It is called randomized because it employs a certain amount of randomness in its search, so its output can vary for the same input.

## The problem

We are given a function $E(s)$, which calculates the energy of the state $s$. We are tasked with finding the state $s_{best}$ at which $E(s)$ is minimised. **SA** is suited for problems where the states are discrete and $E(s)$ has multiple local minima. We'll take the example of the [Travelling Salesman Problem (TSP)](https://en.wikipedia.org/wiki/Travelling_salesman_problem).

### Travelling Salesman Problem (TSP)

You are given a set of nodes in two-dimensional space. Each node is characterised by its $x$ and $y$ coordinates. Your task is to find an ordering of the nodes which minimises the distance travelled when visiting the nodes in that order.

## Motivation

Annealing is a metallurgical process in which a material is heated up and allowed to cool slowly, so that its atoms can rearrange themselves into a configuration with lower internal energy, which in turn changes the material's properties. Here, the state is the arrangement of atoms, and the internal energy is the function being minimised. We can think of the original arrangement of the atoms as a local minimum of the internal energy. To make the material rearrange its atoms, we need to motivate it to cross a region where the internal energy is higher, in order to reach the global minimum. This motivation is provided by heating the material to a higher temperature.

Simulated annealing, literally, simulates this process. We start off with some random state (the material) and set a high temperature (heat it up). Now, the algorithm is willing to accept states with higher energy than the current one, as it is motivated by the high temperature. This prevents the algorithm from getting stuck in local minima and lets it move towards the global minimum. As time progresses, the algorithm cools down and stops accepting states with higher energy, settling into the closest minimum it has found.

### The energy function E(s)

$E(s)$ is the function that needs to be minimised (or maximised). It maps every state to a real number. In the case of TSP, $E(s)$ returns the total distance travelled in one full circle visiting the nodes in the order given by the state.
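
Written out explicitly (the notation $p_i$ is ours, introduced for illustration): if the state visits the points $p_0, p_1, \dots, p_{n-1}$ in that order, the energy is

$$E(s) = \sum_{i=0}^{n-1} \left\lVert p_i - p_{(i+1) \bmod n} \right\rVert,$$

where $\lVert \cdot \rVert$ denotes Euclidean distance and the $(i+1) \bmod n$ index closes the tour back to the starting node. This is exactly what the example implementation below computes.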

### State

The state space is the domain of the energy function $E(s)$, and a state is any element of the state space. In the case of TSP, the state space consists of all orderings in which we can visit the nodes, and any single ordering is a state.

### Neighbouring state

A neighbouring state is a state in the state space that is close to the current state. This usually means that it can be obtained from the current state by a simple transformation. In the case of the Travelling Salesman Problem, a neighbouring state is obtained by randomly choosing two nodes and swapping their positions in the current ordering.

## Algorithm

We start with a random state $s$. In every step, we choose a neighbouring state $s_{next}$ of the current state $s$. If $E(s_{next}) < E(s)$, then we update $s = s_{next}$. Otherwise, we use a probability acceptance function $P(E(s), E(s_{next}), T)$ to decide whether to move to $s_{next}$ or stay at $s$. Here $T$ is the temperature, which is initially set to a high value and decays slowly with every step. The higher the temperature, the more likely it is that the move to $s_{next}$ is accepted.

At the same time, we keep track of the best state $s_{best}$ seen across all iterations. We proceed until convergence, or until we run out of time.

<center>
<img src="https://upload.wikimedia.org/wikipedia/commons/d/d5/Hill_Climbing_with_Simulated_Annealing.gif" width="800px">
<br>
<i>A visual representation of simulated annealing, searching for the maximum of a function with multiple local maxima.</i>
<br>
</center>

### Temperature ($T$) and decay ($u$)

The temperature of the system quantifies the willingness of the algorithm to accept a state with higher energy. The decay is a constant that quantifies the "cooling rate" of the algorithm. A slow cooling rate (larger $u$) is known to give better results.

## Probability Acceptance Function (PAF)

$$P(E, E_{next}, T) =
\begin{cases}
\text{True} &\quad\text{if } \mathcal{U}_{[0,1]} \le \exp\left(-\frac{E_{next}-E}{T}\right) \\
\text{False} &\quad\text{otherwise}
\end{cases}$$

Here, $\mathcal{U}_{[0,1]}$ is a continuous uniform random value on $[0,1]$. This function takes in the current energy, the next energy and the temperature, and returns a boolean value telling the search whether it should move to $s_{next}$ or stay at $s$. Note that for $E_{next} < E$ the function always returns True; otherwise it can still make the move with probability $\exp(-\frac{E_{next}-E}{T})$, which corresponds to the [Gibbs measure](https://en.wikipedia.org/wiki/Gibbs_measure).

```cpp
// rng is passed by reference so that the generator's state advances across calls.
bool P(double E, double E_next, double T, mt19937 &rng) {
    // Always accept lower-energy states; otherwise accept with
    // probability exp(-(E_next - E) / T).
    double prob = exp(-(E_next - E) / T);
    if (prob > 1)
        return true;
    else {
        bernoulli_distribution d(prob);
        return d(rng);
    }
}
```

## Code Template

```cpp
#include <bits/stdc++.h>
using namespace std;

class state {
  public:
    state() {
        // Generate the initial state
    }
    state next() {
        state s_next;
        // Modify s_next to a random neighbouring state
        return s_next;
    }
    double E() {
        // Implement the energy function here
    }
};

pair<double, state> simAnneal() {
    state s = state();
    state best = s;
    double T = 10000;  // initial temperature
    double u = 0.995;  // decay rate
    double E = s.E();
    double E_next;
    double E_best = E;
    mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());
    while (T > 1) {
        state next = s.next();
        E_next = next.E();
        if (P(E, E_next, T, rng)) {
            s = next;
            if (E_next < E_best) {
                best = s;
                E_best = E_next;
            }
            E = E_next;
        }
        T *= u;
    }
    return {E_best, best};
}
```

## How to use

Fill in the `state` class functions as appropriate. If you are trying to find a global maximum rather than a minimum, make $E()$ return the negative of the function you are maximising, and print $-E_{best}$ at the end. Set the parameters below as per your needs.
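
For example, a minimal sketch of this maximisation trick (here `f()` is a hypothetical member function computing the score you actually want to maximise):

```cpp
double E() {
    // Minimising -f() is equivalent to maximising f().
    return -f();
}
```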

### Parameters

- $T$ : Initial temperature. Set it to a higher value if you want the search to run for a longer time.
- $u$ : Decay. Decides the rate of cooling. A slower cooling rate (larger value of $u$) usually gives better results, at the cost of running for a longer time. Ensure $u < 1$.

Since $T$ is multiplied by $u$ once per iteration and the loop stops once $T$ drops to $1$, the number of iterations the loop will run for is given by

$$N = \lceil -\log_{u}{T} \rceil$$
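
As an illustrative check with the template's default values, $T = 10000$ and $u = 0.995$ give

$$N = \lceil -\log_{0.995}{10000} \rceil = \left\lceil \frac{\ln 10000}{-\ln 0.995} \right\rceil \approx \lceil 1837.5 \rceil = 1838$$

iterations.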

Tips for choosing $T$ and $u$: If there are many local minima and a wide state space, set $u = 0.999$ for a slow cooling rate, which will allow the algorithm to explore more possibilities. On the other hand, if the state space is narrower, $u = 0.99$ should suffice. If you are not sure, play it safe by setting $u = 0.998$ or higher. Estimate the time complexity of a single iteration of the algorithm, use it to compute a value of $N$ that will avoid TLE, and then use the formula below to obtain $T$:

$$T = u^{-N}$$
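
For example (with purely illustrative numbers), if a single iteration is cheap enough that you can afford roughly $N = 2 \cdot 10^4$ iterations, then with $u = 0.999$ the starting temperature would be $T = 0.999^{-20000} \approx 4.9 \cdot 10^8$.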

### Example implementation for TSP

```cpp
class state {
  public:
    vector<pair<int, int>> points;
    std::mt19937 mt{ static_cast<std::mt19937::result_type>(
        std::chrono::steady_clock::now().time_since_epoch().count()
    ) };
    state() {
        points = {%raw%} {{0,0},{2,2},{0,2},{2,0},{0,1},{1,2},{2,1},{1,0}} {%endraw%};
    }
    state next() {
        state s_next;
        s_next.points = points;
        // Swap two randomly chosen points to obtain a neighbouring state.
        uniform_int_distribution<> choose(0, points.size() - 1);
        int a = choose(mt);
        int b = choose(mt);
        s_next.points[a].swap(s_next.points[b]);
        return s_next;
    }

    double euclidean(pair<int, int> a, pair<int, int> b) {
        return hypot(a.first - b.first, a.second - b.second);
    }

    double E() {
        // Total length of the closed tour visiting the points in order.
        double dist = 0;
        int n = points.size();
        for (int i = 0; i < n; i++)
            dist += euclidean(points[i], points[(i + 1) % n]);
        return dist;
    }
};

int main() {
    pair<double, state> res;
    res = simAnneal();
    double E_best = res.first;
    state best = res.second;
    cout << "Length of shortest path found : " << E_best << "\n";
    cout << "Order of points in shortest path : \n";
    for (auto x : best.points) {
        cout << x.first << " " << x.second << "\n";
    }
}
```

## Further modifications to the algorithm

- Add a time-based exit condition to the while loop to prevent TLE.
- The decay implemented above is an exponential decay. You can replace it with any decay function as per your needs (a linear-cooling sketch is given after the code block below).
- The probability acceptance function given above prefers accepting states which are lower in energy, because of the $E_{next} - E$ term in the numerator of the exponent. You can remove this term to make the PAF independent of the difference in energies.
- The effect of the energy difference $E_{next} - E$ on the PAF can be increased or decreased by raising or lowering the base of the exponent, as shown below:

```cpp
bool P(double E, double E_next, double T, mt19937 &rng) {
    double e = 2; // set e to any real number greater than 1
    double prob = pow(e, -(E_next - E) / T);
    if (prob > 1)
        return true;
    else {
        bernoulli_distribution d(prob);
        return d(rng);
    }
}
```
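
For the decay modification mentioned above, here is a minimal sketch of one possible alternative, a linear cooling schedule (the names `T0`, `iter` and `N` are illustrative and not part of the original template):

```cpp
// Replace `T *= u;` in simAnneal() with a schedule like this one,
// where T0 is the initial temperature, iter is the current iteration
// and N is the total number of iterations you can afford.
double linearTemp(double T0, int iter, int N) {
    return T0 * (1.0 - (double)iter / N);  // falls linearly from T0 towards 0
}
```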

## Problems

- [USACO Jan 2017 - Subsequence Reversal](https://usaco.org/index.php?page=viewproblem2&cpid=698)
- [Deltix Summer 2021 - DIY Tree](https://codeforces.com/contest/1556/problem/H)
- [AtCoder Contest Scheduling](https://atcoder.jp/contests/intro-heuristics/tasks/intro_heuristics_a)
