From 642c00f0235fc2dca7406e552261c874e06fb58f Mon Sep 17 00:00:00 2001 From: Juan Camilo Salazar <63207344+juan-c-s@users.noreply.github.com> Date: Thu, 22 Feb 2024 12:55:40 -0500 Subject: [PATCH 001/324] Update intersecting_segments.md The function does not count all intersections. It only returns a pair of intersecting segment Id's if exist --- src/geometry/intersecting_segments.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/geometry/intersecting_segments.md b/src/geometry/intersecting_segments.md index 77416975b..a5eba5f6b 100644 --- a/src/geometry/intersecting_segments.md +++ b/src/geometry/intersecting_segments.md @@ -162,7 +162,7 @@ pair solve(const vector& a) { } ``` -The main function here is `solve()`, which returns the number of found intersecting segments, or $(-1, -1)$, if there are no intersections. +The main function here is `solve()`, which returns the intersecting segments if exists, or $(-1, -1)$, if there are no intersections. Checking for the intersection of two segments is carried out by the `intersect ()` function, using an **algorithm based on the oriented area of the triangle**. From ed9fd1ef2e8f061f68abd11c511c0702a2da59f1 Mon Sep 17 00:00:00 2001 From: Mrunank Mistry <44520744+fork52@users.noreply.github.com> Date: Tue, 20 Feb 2024 22:50:12 +0530 Subject: [PATCH 002/324] Add new related problems to stars_and_bars.md --- src/combinatorics/stars_and_bars.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/src/combinatorics/stars_and_bars.md b/src/combinatorics/stars_and_bars.md index aa88108c1..caa10e16d 100644 --- a/src/combinatorics/stars_and_bars.md +++ b/src/combinatorics/stars_and_bars.md @@ -77,4 +77,5 @@ See the [Number of upper-bound integer sums](./inclusion-exclusion.md#number-of- * [Codeforces - Array](https://codeforces.com/contest/57/problem/C) * [Codeforces - Kyoya and Coloured Balls](https://codeforces.com/problemset/problem/553/A) - +* [Codeforces - Two Arrays](https://codeforces.com/problemset/problem/1288/C) +* [Codeforces - One-Dimensional Puzzle](https://codeforces.com/contest/1931/problem/G) From c453722db5e3a7c938b760a6e3f12fedf5d92d4f Mon Sep 17 00:00:00 2001 From: Mrunank Mistry <44520744+fork52@users.noreply.github.com> Date: Tue, 20 Feb 2024 23:53:49 +0530 Subject: [PATCH 003/324] Add one more task to stars_and_bars.md --- src/combinatorics/stars_and_bars.md | 1 + 1 file changed, 1 insertion(+) diff --git a/src/combinatorics/stars_and_bars.md b/src/combinatorics/stars_and_bars.md index caa10e16d..a8e5c395f 100644 --- a/src/combinatorics/stars_and_bars.md +++ b/src/combinatorics/stars_and_bars.md @@ -77,5 +77,6 @@ See the [Number of upper-bound integer sums](./inclusion-exclusion.md#number-of- * [Codeforces - Array](https://codeforces.com/contest/57/problem/C) * [Codeforces - Kyoya and Coloured Balls](https://codeforces.com/problemset/problem/553/A) +* [Codeforces - Colorful Bricks](https://codeforces.com/contest/1081/problem/C) * [Codeforces - Two Arrays](https://codeforces.com/problemset/problem/1288/C) * [Codeforces - One-Dimensional Puzzle](https://codeforces.com/contest/1931/problem/G) From fbf3a8ec49b22e2233641b73de3e1be5dec642be Mon Sep 17 00:00:00 2001 From: Yan Huang Date: Sat, 17 Feb 2024 14:04:10 -0500 Subject: [PATCH 004/324] Update fft.md Rewording to fix a logical typo. 
--- src/algebra/fft.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/src/algebra/fft.md b/src/algebra/fft.md index c77d81a1d..2e6532558 100644 --- a/src/algebra/fft.md +++ b/src/algebra/fft.md @@ -15,8 +15,7 @@ Some researchers attribute the discovery of the FFT to Runge and König in 1924. But actually Gauss developed such a method already in 1805, but never published it. Notice, that the FFT algorithm presented here runs in $O(n \log n)$ time, but it doesn't work for multiplying arbitrary big polynomials with arbitrary large coefficients or for multiplying arbitrary big integers. -It can easily handle polynomials of size $10^5$ with small coefficients, or multiplying two numbers of size $10^6$, but at some point the range and the precision of the used floating point numbers will not no longer be enough to give accurate results. -That is usually enough for solving competitive programming problems, but there are also more complex variations that can perform arbitrary large polynomial/integer multiplications. +It can easily handle polynomials of size $10^5$ with small coefficients, or multiplying two numbers of size $10^6$, which is usually enough for solving competitive programming problems. Beyond the scale of multiplying numbers with $10^6$ bits, the range and precision of the floating point numbers used during the computation will not be enough to give accurate final results, though there are more complex variations that can perform arbitrary large polynomial/integer multiplications. E.g. in 1971 Schönhage and Strasser developed a variation for multiplying arbitrary large numbers that applies the FFT recursively in rings structures running in $O(n \log n \log \log n)$. And recently (in 2019) Harvey and van der Hoeven published an algorithm that runs in true $O(n \log n)$. From fe2a59bfe933ce3b11d656458de07ff0a40827ed Mon Sep 17 00:00:00 2001 From: Yan Huang Date: Sat, 17 Feb 2024 14:06:14 -0500 Subject: [PATCH 005/324] Update bit-manipulation.md --- src/algebra/bit-manipulation.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/algebra/bit-manipulation.md b/src/algebra/bit-manipulation.md index 0a3b3434e..54c42f4d6 100644 --- a/src/algebra/bit-manipulation.md +++ b/src/algebra/bit-manipulation.md @@ -152,7 +152,7 @@ bool isPowerOfTwo(unsigned int n) { } ``` -### Clear the most-right set bit +### Clear the right-most set bit The expression $n ~\&~ (n-1)$ can be used to turn off the rightmost set bit of a number $n$. This works because the expression $n-1$ flips all bits after the rightmost set bit of $n$, including the rightmost set bit. From 84970580fb41877a53e7a25c304dc14633e11beb Mon Sep 17 00:00:00 2001 From: Dhaval Kumar <56940034+whym1here@users.noreply.github.com> Date: Wed, 14 Feb 2024 12:05:30 +0530 Subject: [PATCH 006/324] Update linear-diophantine-equation.md Adds new practice problem to Linear Diophantine Equation --- src/algebra/linear-diophantine-equation.md | 1 + 1 file changed, 1 insertion(+) diff --git a/src/algebra/linear-diophantine-equation.md b/src/algebra/linear-diophantine-equation.md index 1aa88294b..6a23aede8 100644 --- a/src/algebra/linear-diophantine-equation.md +++ b/src/algebra/linear-diophantine-equation.md @@ -220,3 +220,4 @@ If $a < b$, we need to select smallest possible value of $k$. 
If $a > b$, we nee * [Codeforces - Ebony and Ivory](http://codeforces.com/contest/633/problem/A) * [Codechef - Get AC in one go](https://www.codechef.com/problems/COPR16G) * [LightOj - Solutions to an equation](http://www.lightoj.com/volume_showproblem.php?problem=1306) +* [Atcoder - S = 1](https://atcoder.jp/contests/abc340/tasks/abc340_f) From 5f2c0fcf995eeaa666dfe8c4bb005f8166fa053d Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Fri, 23 Feb 2024 21:57:36 +0100 Subject: [PATCH 007/324] Update linear-diophantine-equation.md fix name --- src/algebra/linear-diophantine-equation.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/algebra/linear-diophantine-equation.md b/src/algebra/linear-diophantine-equation.md index 6a23aede8..047067c59 100644 --- a/src/algebra/linear-diophantine-equation.md +++ b/src/algebra/linear-diophantine-equation.md @@ -220,4 +220,4 @@ If $a < b$, we need to select smallest possible value of $k$. If $a > b$, we nee * [Codeforces - Ebony and Ivory](http://codeforces.com/contest/633/problem/A) * [Codechef - Get AC in one go](https://www.codechef.com/problems/COPR16G) * [LightOj - Solutions to an equation](http://www.lightoj.com/volume_showproblem.php?problem=1306) -* [Atcoder - S = 1](https://atcoder.jp/contests/abc340/tasks/abc340_f) +* [Atcoder - F - S = 1](https://atcoder.jp/contests/abc340/tasks/abc340_f) From e9053233c7371c070b5e61eba38eca0cead6a4af Mon Sep 17 00:00:00 2001 From: Chloe <107125600+chloeimb@users.noreply.github.com> Date: Tue, 13 Feb 2024 06:39:54 -0500 Subject: [PATCH 008/324] Update factorization.md I have made some editorial changes to the article. This should make reading more clear and improve understanding. --- src/algebra/factorization.md | 110 +++++++++++++++++------------------ 1 file changed, 53 insertions(+), 57 deletions(-) diff --git a/src/algebra/factorization.md b/src/algebra/factorization.md index c7527feae..891b6f325 100644 --- a/src/algebra/factorization.md +++ b/src/algebra/factorization.md @@ -5,22 +5,22 @@ tags: # Integer factorization -In this article we list several algorithms for factorizing integers, each of them can be both fast and also slow (some slower than others) depending on their input. +In this article we list several algorithms for the factorization of integers, each of which can be either fast or varying levels of slow depending on their input. -Notice, if the number that you want to factorize is actually a prime number, most of the algorithms, especially Fermat's factorization algorithm, Pollard's p-1, Pollard's rho algorithm will run very slow. -So it makes sense to perform a probabilistic (or a fast deterministic) [primality test](primality_tests.md) before trying to factorize the number. +Notice, if the number that you want to factorize is actually a prime number, most of the algorithms will run very slowly. This is especially true forFermat's factorization algorithm, Pollard's p-1. +Therefore, it makes the most sense to perform a probabilistic (or a fast deterministic) [primality test](primality_tests.md) before trying to factorize the number. ## Trial division This is the most basic algorithm to find a prime factorization. We divide by each possible divisor $d$. -We can notice, that it is impossible that all prime factors of a composite number $n$ are bigger than $\sqrt{n}$. +It can be observed that it is impossible for all prime factors of a composite number $n$ to be bigger than $\sqrt{n}$. 
Therefore, we only need to test the divisors $2 \le d \le \sqrt{n}$, which gives us the prime factorization in $O(\sqrt{n})$.
(This is [pseudo-polynomial time](https://en.wikipedia.org/wiki/Pseudo-polynomial_time), i.e. polynomial in the value of the input but exponential in the number of bits of the input.)

The smallest divisor must be a prime number.
We remove the factored number, and continue the process.
If we cannot find any divisor in the range $[2; \sqrt{n}]$, then the number itself has to be prime.

```{.cpp file=factorization_trial_division1}
vector<long long> trial_division1(long long n) {
    vector<long long> factorization;
    for (long long d = 2; d * d <= n; d++) {
        while (n % d == 0) {
            factorization.push_back(d);
            n /= d;
        }
    }
    if (n > 1)
        factorization.push_back(n);
    return factorization;
}
```

### Wheel factorization

This is an optimization of the trial division.
Once we know that the number is not divisible by 2, we don't need to check other even numbers.
This leaves us with only $50\%$ of the numbers to check.
After checking 2, and determining it is an odd number, we can simply start with 3 and only count other odd numbers.

```{.cpp file=factorization_trial_division2}
vector<long long> trial_division2(long long n) {
    vector<long long> factorization;
    while (n % 2 == 0) {
        factorization.push_back(2);
        n /= 2;
    }
    for (long long d = 3; d * d <= n; d += 2) {
        while (n % d == 0) {
            factorization.push_back(d);
            n /= d;
        }
    }
    if (n > 1)
        factorization.push_back(n);
    return factorization;
}
```

This method can be extended further.
If the number is not divisible by 3, we can also ignore all other multiples of 3 in the future computations.
So we only need to check the numbers $5, 7, 11, 13, 17, 19, 23, \dots$.
We can observe a pattern of these remaining numbers.
We need to check all numbers with $d \bmod 6 = 1$ and $d \bmod 6 = 5$.
So this leaves us with only $33.3\%$ percent of the numbers to check.
We can implement this by checking the primes 2 and 3 first, and upon discovering they are not divisible by said numbers, check 5.

Here is an implementation for the prime number 2, 3 and 5.
A convenient way to store skippable numbers is with an array.

```{.cpp file=factorization_trial_division3}
vector<long long> trial_division3(long long n) {
    vector<long long> factorization;
    for (int d : {2, 3, 5}) {
        while (n % d == 0) {
            factorization.push_back(d);
            n /= d;
        }
    }
    static array<int, 8> increments = {4, 2, 4, 2, 4, 6, 2, 6};
    int i = 0;
    for (long long d = 7; d * d <= n; d += increments[i++]) {
        while (n % d == 0) {
            factorization.push_back(d);
            n /= d;
        }
        if (i == 8)
            i = 0;
    }
    if (n > 1)
        factorization.push_back(n);
    return factorization;
}
```

If we continue extending this method to include even more primes, better percentages can be reached, but the skip lists will become larger.

### Precomputed primes

The further we extend the wheel factorization method, we will only be left with prime numbers to check.
A good way of checking this is to precompute all prime numbers with the [Sieve of Eratosthenes](sieve-of-eratosthenes.md) until $\sqrt{n}$, and test them individually.
```{.cpp file=factorization_trial_division4} vector primes; @@ -135,7 +132,7 @@ We can write an odd composite number $n = p \cdot q$ as the difference of two sq $$n = \left(\frac{p + q}{2}\right)^2 - \left(\frac{p - q}{2}\right)^2$$ -Fermat's factorization method tries to exploit the fact, by guessing the first square $a^2$, and check if the remaining part $b^2 = a^2 - n$ is also a square number. +Fermat's factorization method tries to exploit this fact by guessing the first square $a^2$, and checking if the remaining part, $b^2 = a^2 - n$, is also a square number. If it is, then we have found the factors $a - b$ and $a + b$ of $n$. ```cpp @@ -152,12 +149,12 @@ int fermat(int n) { } ``` -Notice, this factorization method can be very fast, if the difference between the two factors $p$ and $q$ is small. +This factorization method can be very fast if the difference between the two factors $p$ and $q$ is small. The algorithm runs in $O(|p - q|)$ time. -However since it is very slow, once the factors are far apart, it is rarely used in practice. +In practice though, this method is rarely used. Once factors become further apart, it is extremely slow. -However there are still a huge number of optimizations for this approach. -E.g. by looking at the squares $a^2$ modulo a fixed small number, you can notice that you don't have to look at certain values $a$ since they cannot produce a square number $a^2 - n$. +However, there are still a large number of optimization options regarding this approach. +By looking at the squares $a^2$ modulo, a fixed small number, it can be observed that certain values don't have to be viewed, $a$ since they cannot produce a square number $a^2 - n$. ## Pollard's $p - 1$ method { data-toc-label="Pollard's method" } @@ -190,11 +187,11 @@ $$M = \prod_{\text{prime } q \le B} q^{\lfloor \log_q B \rfloor}$$ Notice, if $p-1$ divides $M$ for all prime factors $p$ of $n$, then $\gcd(a^M - 1, n)$ will just be $n$. In this case we don't receive a factor. -Therefore we will try to perform the $\gcd$ multiple time, while we compute $M$. +Therefore, we will try to perform the $\gcd$ multiple time, while we compute $M$. Some composite numbers don't have $B$-powersmooth factors for small $B$. -E.g. the factors of the composite number $100~000~000~000~000~493 = 763~013 \cdot 131~059~365~961$ are $190~753$-powersmooth and $1~092~161~383$-powersmooth. -We would have to choose $B >= 190~753$ to factorize the number. +Meaning, the factors of the composite number $100~000~000~000~000~493 = 763~013 \cdot 131~059~365~961$ are $190~753$-powersmooth and $1~092~161~383$-powersmooth. +We will have to choose $B >= 190~753$ to factorize the number. In the following implementation we start with $B = 10$ and increase $B$ after each each iteration. @@ -228,55 +225,55 @@ long long pollards_p_minus_1(long long n) { ``` -Notice, this is a probabilistic algorithm. -It can happen that the algorithm doesn't find a factor. +Observe that this is a probabilistic algorithm. +A consequence of this is that there is a possibility of the algorithm being unable to find a factor at all. The complexity is $O(B \log B \log^2 n)$ per iteration. ## Pollard's rho algorithm -Another factorization algorithm from John Pollard. +Pollard's Rho Algorithm is yet another factorization algorithm from John Pollard. Let the prime factorization of a number be $n = p q$. 
The algorithm looks at a pseudo-random sequence $\{x_i\} = \{x_0,~f(x_0),~f(f(x_0)),~\dots\}$ where $f$ is a polynomial function, usually $f(x) = (x^2 + c) \bmod n$ is chosen with $c = 1$. -Actually we are not very interested in the sequence $\{x_i\}$, we are more interested in the sequence $\{x_i \bmod p\}$. -Since $f$ is a polynomial function and all the values are in the range $[0;~p)$ this sequence will begin to cycle sooner or later. -The **birthday paradox** actually suggests, that the expected number of elements is $O(\sqrt{p})$ until the repetition starts. -If $p$ is smaller than $\sqrt{n}$, the repetition will start very likely in $O(\sqrt[4]{n})$. +In this instance, we are not interested in the sequence $\{x_i\}$. +We are more interested in the sequence $\{x_i \bmod p\}$. +Since $f$ is a polynomial function, and all the values are in the range $[0;~p)$, this sequence will eventually begin to cycle. +The **birthday paradox** actually suggests that the expected number of elements is $O(\sqrt{p})$ until the repetition starts. +If $p$ is smaller than $\sqrt{n}$, the repetition will likely start in $O(\sqrt[4]{n})$. Here is a visualization of such a sequence $\{x_i \bmod p\}$ with $n = 2206637$, $p = 317$, $x_0 = 2$ and $f(x) = x^2 + 1$. From the form of the sequence you can see very clearly why the algorithm is called Pollard's $\rho$ algorithm.
![Pollard's rho visualization](pollard_rho.png)
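The picture is easy to reproduce. The following small standalone snippet (an illustrative sketch, not part of the article's reference implementation) prints the first terms of $x_i$ and $x_i \bmod p$ for exactly the parameters above:

```cpp
#include <cstdio>

int main() {
    const long long n = 2206637, p = 317;  // parameters from the picture above
    long long x = 2;                       // x0 = 2
    for (int i = 0; i < 40; i++) {
        printf("x_%d = %lld, x_%d mod p = %lld\n", i, x, i, x % p);
        x = (x * x + 1) % n;  // f(x) = (x^2 + 1) mod n; x*x fits in 64 bits since n < 2^31
    }
}
```

The $x_i \bmod p$ column eventually enters a short cycle, forming the tail and the loop of the $\rho$ shape, while the raw values $x_i$ show no apparent pattern.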
-There is still one big open question. -We don't know $p$ yet, so how can we argue about the sequence $\{x_i \bmod p\}$? +Yet, there is still an open question. +If don't know $p$ yet, how can we argue the sequence $\{x_i \bmod p\}$? It's actually quite easy. There is a cycle in the sequence $\{x_i \bmod p\}_{i \le j}$ if and only if there are two indices $s, t \le j$ such that $x_s \equiv x_t \bmod p$. This equation can be rewritten as $x_s - x_t \equiv 0 \bmod p$ which is the same as $p ~|~ \gcd(x_s - x_t, n)$. Therefore, if we find two indices $s$ and $t$ with $g = \gcd(x_s - x_t, n) > 1$, we have found a cycle and also a factor $g$ of $n$. -Notice that it is possible that $g = n$. -In this case we haven't found a proper factor, and we have to repeat the algorithm with different parameter (different starting value $x_0$, different constant $c$ in the polynomial function $f$). +It is possible that $g = n$. +In this case we haven't found a proper factor, so we must repeat the algorithm with a different parameter (different starting value $x_0$, different constant $c$ in the polynomial function $f$). To find the cycle, we can use any common cycle detection algorithm. ### Floyd's cycle-finding algorithm -This algorithm finds a cycle by using two pointers. -These pointers move over the sequence at different speeds. -In each iteration the first pointer advances to the next element, but the second pointer advances two elements. -It's not hard to see, that if there exists a cycle, the second pointer will make at least one full cycle and then meet the first pointer during the next few cycle loops. +This algorithm finds a cycle by using two pointers moving over the sequence at differing speeds. +During each iteration, the first pointer will advance one element over, while the second pointer advances to every other element. +Using this idea it is easy to observe that if there is a cycle, at some point the second pointer will come around to meet the first one during the loops. If the cycle length is $\lambda$ and the $\mu$ is the first index at which the cycle starts, then the algorithm will run in $O(\lambda + \mu)$ time. -This algorithm is also known as the [Tortoise and Hare algorithm](../others/tortoise_and_hare.md), based on the tale in which a tortoise (here a slow pointer) and a hare (here a faster pointer) make a race. +This algorithm is also known as the [Tortoise and Hare algorithm](../others/tortoise_and_hare.md), based on the tale in which a tortoise (the slow pointer) and a hare (the faster pointer) have a race. -It is actually possible to determine the parameter $\lambda$ and $\mu$ using this algorithm (also in $O(\lambda + \mu)$ time and $O(1)$ space), but here is just the simplified version for finding the cycle at all. -The algorithm and returns true as soon as it detects a cycle. -If the sequence doesn't have a cycle, then the function will never stop. -However this cannot happen during Pollard's rho algorithm. +It is actually possible to determine the parameter $\lambda$ and $\mu$ using this algorithm (also in $O(\lambda + \mu)$ time and $O(1)$ space). +When a cycle is detected, the algorithm will return 'True'. +If the sequence doesn't have a cycle, then the function will loop endlessly. +However, using Pollard's Rho Algorithm, this can be prevented. ```text function floyd(f, x0): @@ -290,8 +287,8 @@ function floyd(f, x0): ### Implementation -First here is a implementation using the **Floyd's cycle-finding algorithm**. 
-The algorithm runs (usually) in $O(\sqrt[4]{n} \log(n))$ time. +First, here is an implementation using the **Floyd's cycle-finding algorithm**. +The algorithm generally runs in $O(\sqrt[4]{n} \log(n))$ time. ```{.cpp file=pollard_rho} long long mult(long long a, long long b, long long mod) { @@ -353,16 +350,15 @@ long long mult(long long a, long long b, long long mod) { Alternatively you can also implement the [Montgomery multiplication](montgomery_multiplication.md). -As already noticed above: if $n$ is composite and the algorithm returns $n$ as factor, you have to repeat the procedure with different parameter $x_0$ and $c$. +As stated previously, if $n$ is composite and the algorithm returns $n$ as factor, you have to repeat the procedure with different parameters $x_0$ and $c$. E.g. the choice $x_0 = c = 1$ will not factor $25 = 5 \cdot 5$. -The algorithm will just return $25$. -However the choice $x_0 = 1$, $c = 2$ will factor it. +The algorithm will return $25$. +However, the choice $x_0 = 1$, $c = 2$ will factor it. ### Brent's algorithm -Brent uses a similar algorithm as Floyd. -It also uses two pointer. -But instead of advancing the pointers by one and two respectably, we advance them in powers of two. +Brent implements a similar method to Floyd, using two pointers. +The difference being that instead of advancing the pointers by one and two places respectively, they are advanced by powers of two. As soon as $2^i$ is greater than $\lambda$ and $\mu$, we will find the cycle. ```text @@ -380,12 +376,12 @@ function floyd(f, x0): return true ``` -Brent's algorithm also runs in linear time, but is usually faster than Floyd's algorithm, since it uses less evaluations of the function $f$. +Brent's algorithm also runs in linear time, but is generally faster than Floyd's, since it uses less evaluations of the function $f$. ### Implementation -The straightforward implementation using Brent's algorithms can be speeded up by noticing, that we can omit the terms $x_l - x_k$ if $k < \frac{3 \cdot l}{2}$. -Also, instead of performing the $\gcd$ computation at every step, we multiply the terms and do it every few steps and backtrack if we overshoot. +The straightforward implementation of Brent's algorithm can be sped up by omitting the terms $x_l - x_k$ if $k < \frac{3 \cdot l}{2}$. +In addition, instead of performing the $\gcd$ computation at every step, we multiply the terms every few steps and backtrack if overshot. ```{.cpp file=pollard_rho_brent} long long brent(long long n, long long x0=2, long long c=1) { @@ -422,7 +418,7 @@ long long brent(long long n, long long x0=2, long long c=1) { } ``` -The combination of a trial division for small prime numbers together with Brent's version of Pollard's rho algorithm will make a very powerful factorization algorithm. +The combination of a trial division for small prime numbers together with Brent's version of Pollard's rho algorithm makes a very powerful factorization algorithm. 
## Practice Problems From db65ec7bd082e5612932d90a669de45a1ebd16ef Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Fri, 23 Feb 2024 21:59:05 +0100 Subject: [PATCH 009/324] Update factorization.md missed space --- src/algebra/factorization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/algebra/factorization.md b/src/algebra/factorization.md index 891b6f325..6ae59995e 100644 --- a/src/algebra/factorization.md +++ b/src/algebra/factorization.md @@ -7,7 +7,7 @@ tags: In this article we list several algorithms for the factorization of integers, each of which can be either fast or varying levels of slow depending on their input. -Notice, if the number that you want to factorize is actually a prime number, most of the algorithms will run very slowly. This is especially true forFermat's factorization algorithm, Pollard's p-1. +Notice, if the number that you want to factorize is actually a prime number, most of the algorithms will run very slowly. This is especially true for Fermat's factorization algorithm, Pollard's p-1. Therefore, it makes the most sense to perform a probabilistic (or a fast deterministic) [primality test](primality_tests.md) before trying to factorize the number. ## Trial division From 0410f4c6baa3875f4811ef459eba6581e1d7fe9f Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Fri, 23 Feb 2024 22:00:06 +0100 Subject: [PATCH 010/324] Update factorization.md add Pollard's rho --- src/algebra/factorization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/algebra/factorization.md b/src/algebra/factorization.md index 6ae59995e..9978a850b 100644 --- a/src/algebra/factorization.md +++ b/src/algebra/factorization.md @@ -7,7 +7,7 @@ tags: In this article we list several algorithms for the factorization of integers, each of which can be either fast or varying levels of slow depending on their input. -Notice, if the number that you want to factorize is actually a prime number, most of the algorithms will run very slowly. This is especially true for Fermat's factorization algorithm, Pollard's p-1. +Notice, if the number that you want to factorize is actually a prime number, most of the algorithms will run very slowly. This is especially true for Fermat's, Pollard's p-1 and Pollard's rho factorization algorithms. Therefore, it makes the most sense to perform a probabilistic (or a fast deterministic) [primality test](primality_tests.md) before trying to factorize the number. ## Trial division From a7087312e73327a24a9fc4fa7520dd2bc71f0315 Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Fri, 23 Feb 2024 22:06:07 +0100 Subject: [PATCH 011/324] Update factorization.md clarification --- src/algebra/factorization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/algebra/factorization.md b/src/algebra/factorization.md index 9978a850b..3d8c08b0f 100644 --- a/src/algebra/factorization.md +++ b/src/algebra/factorization.md @@ -70,7 +70,7 @@ So we only need to check the numbers $5, 7, 11, 13, 17, 19, 23, \dots$. We can observe a pattern of these remaining numbers. We need to check all numbers with $d \bmod 6 = 1$ and $d \bmod 6 = 5$. So this leaves us with only $33.3\%$ percent of the numbers to check. -We can implement this by checking the primes 2 and 3 first, and upon discovering they are not divisible by said numbers, check 5. +We can implement this by checking the primes 2 and 3 first, and upon discovering they are not divisible by said numbers, start with 5 and only count remainders $1$ and $5$ modulo $6$. 
Here is an implementation for the prime number 2, 3 and 5. A convenient way to store skippable numbers is with an array. From 3550533e1d6078177ce97e040bd6d2b2acbee576 Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Fri, 23 Feb 2024 22:08:01 +0100 Subject: [PATCH 012/324] Update factorization.md clarifications --- src/algebra/factorization.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/src/algebra/factorization.md b/src/algebra/factorization.md index 3d8c08b0f..4d0aaf846 100644 --- a/src/algebra/factorization.md +++ b/src/algebra/factorization.md @@ -43,7 +43,7 @@ vector trial_division1(long long n) { This is an optimization of the trial division. Once we know that the number is not divisible by 2, we don't need to check other even numbers. This leaves us with only $50\%$ of the numbers to check. -After checking 2, and determining it is an odd number, we can simply start with 3 and only count other odd numbers. +After factoring out 2, and getting an odd number, we can simply start with 3 and only count other odd numbers. ```{.cpp file=factorization_trial_division2} vector trial_division2(long long n) { @@ -70,10 +70,10 @@ So we only need to check the numbers $5, 7, 11, 13, 17, 19, 23, \dots$. We can observe a pattern of these remaining numbers. We need to check all numbers with $d \bmod 6 = 1$ and $d \bmod 6 = 5$. So this leaves us with only $33.3\%$ percent of the numbers to check. -We can implement this by checking the primes 2 and 3 first, and upon discovering they are not divisible by said numbers, start with 5 and only count remainders $1$ and $5$ modulo $6$. +We can implement this by factoring out the primes 2 and 3 first, after which we start with 5 and only count remainders $1$ and $5$ modulo $6$. Here is an implementation for the prime number 2, 3 and 5. -A convenient way to store skippable numbers is with an array. +It is convenient to store skippable numbers in an array. ```{.cpp file=factorization_trial_division3} vector trial_division3(long long n) { From 913fe1fd1fb052119af4744dc7f0e2e4ba500aaf Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Fri, 23 Feb 2024 22:09:20 +0100 Subject: [PATCH 013/324] Update factorization.md clarification --- src/algebra/factorization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/algebra/factorization.md b/src/algebra/factorization.md index 4d0aaf846..e37fde25c 100644 --- a/src/algebra/factorization.md +++ b/src/algebra/factorization.md @@ -73,7 +73,7 @@ So this leaves us with only $33.3\%$ percent of the numbers to check. We can implement this by factoring out the primes 2 and 3 first, after which we start with 5 and only count remainders $1$ and $5$ modulo $6$. Here is an implementation for the prime number 2, 3 and 5. -It is convenient to store skippable numbers in an array. +It is convenient to store the sizes of skipping increments in an array. ```{.cpp file=factorization_trial_division3} vector trial_division3(long long n) { From 12afd6bd64835730fc5541455376ad70179bedfe Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Fri, 23 Feb 2024 22:11:14 +0100 Subject: [PATCH 014/324] Update factorization.md clarification --- src/algebra/factorization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/algebra/factorization.md b/src/algebra/factorization.md index e37fde25c..78f976b27 100644 --- a/src/algebra/factorization.md +++ b/src/algebra/factorization.md @@ -73,7 +73,7 @@ So this leaves us with only $33.3\%$ percent of the numbers to check. 
We can implement this by factoring out the primes 2 and 3 first, after which we start with 5 and only count remainders $1$ and $5$ modulo $6$. Here is an implementation for the prime number 2, 3 and 5. -It is convenient to store the sizes of skipping increments in an array. +It is convenient to store the skipping strides in an array. ```{.cpp file=factorization_trial_division3} vector trial_division3(long long n) { From 51fb558d7f188a62e8e06aba8ada17ff8b1bad2e Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Fri, 23 Feb 2024 22:13:03 +0100 Subject: [PATCH 015/324] Update factorization.md clarification --- src/algebra/factorization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/algebra/factorization.md b/src/algebra/factorization.md index 78f976b27..202e3b426 100644 --- a/src/algebra/factorization.md +++ b/src/algebra/factorization.md @@ -104,7 +104,7 @@ If we continue exending this method to include even more primes, better percenta ### Precomputed primes -The further we extend the wheel factorization method, we will only be left with prime numbers to check. +Extending the wheel factorization method indefinitely, we will only be left with prime numbers to check. A good way of checking this is to precompute all prime numbers with the [Sieve of Eratosthenes](sieve-of-eratosthenes.md) until $\sqrt{n}$, and test them individually. ```{.cpp file=factorization_trial_division4} From 7cead96f8d382090d1318ee5a066fa146d86946f Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Fri, 23 Feb 2024 22:15:55 +0100 Subject: [PATCH 016/324] Update factorization.md style --- src/algebra/factorization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/algebra/factorization.md b/src/algebra/factorization.md index 202e3b426..6670d87d3 100644 --- a/src/algebra/factorization.md +++ b/src/algebra/factorization.md @@ -154,7 +154,7 @@ The algorithm runs in $O(|p - q|)$ time. In practice though, this method is rarely used. Once factors become further apart, it is extremely slow. However, there are still a large number of optimization options regarding this approach. -By looking at the squares $a^2$ modulo, a fixed small number, it can be observed that certain values don't have to be viewed, $a$ since they cannot produce a square number $a^2 - n$. +By looking at the squares $a^2$ modulo a fixed small number, it can be observed that certain values $a$ don't have to be viewed, since they cannot produce a square number $a^2 - n$. ## Pollard's $p - 1$ method { data-toc-label="Pollard's method" } From 0dad30cbd71c128332c2333588d72e6f7911a336 Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Fri, 23 Feb 2024 22:17:05 +0100 Subject: [PATCH 017/324] Update factorization.md e.g. = for example, not meaning --- src/algebra/factorization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/algebra/factorization.md b/src/algebra/factorization.md index 6670d87d3..0a9723518 100644 --- a/src/algebra/factorization.md +++ b/src/algebra/factorization.md @@ -190,7 +190,7 @@ In this case we don't receive a factor. Therefore, we will try to perform the $\gcd$ multiple time, while we compute $M$. Some composite numbers don't have $B$-powersmooth factors for small $B$. -Meaning, the factors of the composite number $100~000~000~000~000~493 = 763~013 \cdot 131~059~365~961$ are $190~753$-powersmooth and $1~092~161~383$-powersmooth. 
+For example, the factors of the composite number $100~000~000~000~000~493 = 763~013 \cdot 131~059~365~961$ are $190~753$-powersmooth and $1~092~161~383$-powersmooth. We will have to choose $B >= 190~753$ to factorize the number. In the following implementation we start with $B = 10$ and increase $B$ after each each iteration. From 5acf2fbc353e212f6c4add289130c7963fffc60c Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Fri, 23 Feb 2024 22:18:56 +0100 Subject: [PATCH 018/324] Update factorization.md begin to cycle -> converge into a loop --- src/algebra/factorization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/algebra/factorization.md b/src/algebra/factorization.md index 0a9723518..b7abe8a7f 100644 --- a/src/algebra/factorization.md +++ b/src/algebra/factorization.md @@ -239,7 +239,7 @@ The algorithm looks at a pseudo-random sequence $\{x_i\} = \{x_0,~f(x_0),~f(f(x_ In this instance, we are not interested in the sequence $\{x_i\}$. We are more interested in the sequence $\{x_i \bmod p\}$. -Since $f$ is a polynomial function, and all the values are in the range $[0;~p)$, this sequence will eventually begin to cycle. +Since $f$ is a polynomial function, and all the values are in the range $[0;~p)$, this sequence will eventually converge into a loop. The **birthday paradox** actually suggests that the expected number of elements is $O(\sqrt{p})$ until the repetition starts. If $p$ is smaller than $\sqrt{n}$, the repetition will likely start in $O(\sqrt[4]{n})$. From a34eb8ed052f49f19176af180a2ae8df42c36c67 Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Fri, 23 Feb 2024 22:21:46 +0100 Subject: [PATCH 019/324] Update factorization.md clarification --- src/algebra/factorization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/algebra/factorization.md b/src/algebra/factorization.md index b7abe8a7f..1a406e5b7 100644 --- a/src/algebra/factorization.md +++ b/src/algebra/factorization.md @@ -249,7 +249,7 @@ From the form of the sequence you can see very clearly why the algorithm is call
![Pollard's rho visualization](pollard_rho.png)
Yet, there is still an open question. -If don't know $p$ yet, how can we argue the sequence $\{x_i \bmod p\}$? +How can we exploit the properties of the sequence $\{x_i \bmod p\}$ to our advantage without even knowing the number $p$ itself? It's actually quite easy. There is a cycle in the sequence $\{x_i \bmod p\}_{i \le j}$ if and only if there are two indices $s, t \le j$ such that $x_s \equiv x_t \bmod p$. From a85b56d0ffee99978296096316260124e251d5da Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Fri, 23 Feb 2024 22:31:31 +0100 Subject: [PATCH 020/324] Update factorization.md clarification --- src/algebra/factorization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/algebra/factorization.md b/src/algebra/factorization.md index 1a406e5b7..cb85edacf 100644 --- a/src/algebra/factorization.md +++ b/src/algebra/factorization.md @@ -381,7 +381,7 @@ Brent's algorithm also runs in linear time, but is generally faster than Floyd's ### Implementation The straightforward implementation of Brent's algorithm can be sped up by omitting the terms $x_l - x_k$ if $k < \frac{3 \cdot l}{2}$. -In addition, instead of performing the $\gcd$ computation at every step, we multiply the terms every few steps and backtrack if overshot. +In addition, instead of performing the $\gcd$ computation at every step, we multiply the terms and only actually check $\gcd$ every few steps and backtrack if overshot. ```{.cpp file=pollard_rho_brent} long long brent(long long n, long long x0=2, long long c=1) { From 42ed8ee0c513564dc3ee5f3949762b3e6f449a32 Mon Sep 17 00:00:00 2001 From: Siddharth <32169520+siddharthabhi30@users.noreply.github.com> Date: Sun, 11 Feb 2024 21:30:08 +0530 Subject: [PATCH 021/324] Update primality_tests.md Adding questions related to primality test. --- src/algebra/primality_tests.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/src/algebra/primality_tests.md b/src/algebra/primality_tests.md index 003842d3a..eba5b1887 100644 --- a/src/algebra/primality_tests.md +++ b/src/algebra/primality_tests.md @@ -217,4 +217,5 @@ However, since these numbers (except 2) are not prime, you need to check additio ## Practice Problems -- [SPOJ - Prime or Not](https://www.spoj.com/problems/PON/) \ No newline at end of file +- [SPOJ - Prime or Not](https://www.spoj.com/problems/PON/) +- [Project euler - Investigating a Prime Pattern](https://projecteuler.net/problem=146) From f405142a0917aab6e2b996186f05c8e4eda1f65c Mon Sep 17 00:00:00 2001 From: JustAnAverageGuy <68919330+JustAnAverageGuy@users.noreply.github.com> Date: Sat, 10 Feb 2024 13:43:33 +0530 Subject: [PATCH 022/324] Fix typo in longest_increasing_subsequence.md --- src/sequences/longest_increasing_subsequence.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/sequences/longest_increasing_subsequence.md b/src/sequences/longest_increasing_subsequence.md index 754a258a6..c72e933a8 100644 --- a/src/sequences/longest_increasing_subsequence.md +++ b/src/sequences/longest_increasing_subsequence.md @@ -216,7 +216,7 @@ We now make two important observations. 1. The array $d$ will always be sorted: $d[l-1] < d[l]$ for all $i = 1 \dots n$. - This is trivial, as you can just remove the last element from the increasing subsequence of length $l$, and you get a increasing subsequence of length $l-1$ with a smalller ending number. 
+ This is trivial, as you can just remove the last element from the increasing subsequence of length $l$, and you get a increasing subsequence of length $l-1$ with a smaller ending number. 2. The element $a[i]$ will only update at most one value $d[l]$. From 47b4788e9f1c136c4d08ce325cdc3ac63df82e09 Mon Sep 17 00:00:00 2001 From: algorithm-apprentice Date: Sat, 6 Jan 2024 12:53:25 +0800 Subject: [PATCH 023/324] Simplify push-relabel-faster's final step --- src/graph/push-relabel-faster.md | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/src/graph/push-relabel-faster.md b/src/graph/push-relabel-faster.md index 0c000d207..24b9e0ce4 100644 --- a/src/graph/push-relabel-faster.md +++ b/src/graph/push-relabel-faster.md @@ -91,9 +91,6 @@ int max_flow(int s, int t) } } - int max_flow = 0; - for (int i = 0; i < n; i++) - max_flow += flow[i][t]; - return max_flow; + return excess[t]; } ``` From a418de6a6667c5c6ff18b6c3d2e5e79b14478ebf Mon Sep 17 00:00:00 2001 From: Ting-Hsuan Huang Date: Wed, 7 Feb 2024 18:57:16 +0800 Subject: [PATCH 024/324] Add practice problems for mcmf --- src/graph/min_cost_flow.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/src/graph/min_cost_flow.md b/src/graph/min_cost_flow.md index 1d15210af..7e1e86da4 100644 --- a/src/graph/min_cost_flow.md +++ b/src/graph/min_cost_flow.md @@ -160,3 +160,9 @@ int min_cost_flow(int N, vector edges, int K, int s, int t) { return cost; } ``` + +## Practice Problems + +* [CSES - Task Assignment](Task Assignment) +* [CSES - Grid Puzzle II](https://cses.fi/problemset/task/2131) +* [AtCoder - Dream Team](https://atcoder.jp/contests/abc247/tasks/abc247_g) From 29a27442e1af62dcd58f8504a39517c41626de0f Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Sat, 24 Feb 2024 19:20:48 +0100 Subject: [PATCH 025/324] Update min_cost_flow.md proper link to task assignment --- src/graph/min_cost_flow.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/graph/min_cost_flow.md b/src/graph/min_cost_flow.md index 7e1e86da4..b13716d40 100644 --- a/src/graph/min_cost_flow.md +++ b/src/graph/min_cost_flow.md @@ -163,6 +163,6 @@ int min_cost_flow(int N, vector edges, int K, int s, int t) { ## Practice Problems -* [CSES - Task Assignment](Task Assignment) +* [CSES - Task Assignment](https://cses.fi/problemset/task/2129) * [CSES - Grid Puzzle II](https://cses.fi/problemset/task/2131) * [AtCoder - Dream Team](https://atcoder.jp/contests/abc247/tasks/abc247_g) From 30dc766167eb2441016402bb33b563ca63f7436c Mon Sep 17 00:00:00 2001 From: nabil-hfz <44735062+nabil-hfz@users.noreply.github.com> Date: Tue, 30 Jan 2024 00:52:41 +0200 Subject: [PATCH 026/324] Update aho_corasick.md I have corrected a tiny mistake `a output` to `an output` --- src/string/aho_corasick.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/string/aho_corasick.md b/src/string/aho_corasick.md index 75986a3c7..9074a9251 100644 --- a/src/string/aho_corasick.md +++ b/src/string/aho_corasick.md @@ -101,7 +101,7 @@ More precisely, suppose we are in a state corresponding to a string $t$, and we If there is an edge labeled with this letter $c$, then we can simply go over this edge, and get the vertex corresponding to $t + c$. If there is no such edge, since we want to maintain the invariant that the current state is the longest partial match in the processed string, we must find the longest string in the trie that's a proper suffix of the string $t$, and try to perform a transition from there. 
-For example, let the trie be constructed by the strings $ab$ and $bc$, and we are currently at the vertex corresponding to $ab$, which is a $\text{output}$. +For example, let the trie be constructed by the strings $ab$ and $bc$, and we are currently at the vertex corresponding to $ab$, which is an $\text{output}$. To transition with the letter $c$, we are forced to go to the state corresponding to the string $b$, and from there follow the edge with the letter $c$.
From 525c3af2a2944e5ab186f414897dc22d2b4e8de5 Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Sat, 24 Feb 2024 19:24:11 +0100 Subject: [PATCH 027/324] Update aho_corasick.md clarification --- src/string/aho_corasick.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/string/aho_corasick.md b/src/string/aho_corasick.md index 9074a9251..df44c5e7c 100644 --- a/src/string/aho_corasick.md +++ b/src/string/aho_corasick.md @@ -101,7 +101,7 @@ More precisely, suppose we are in a state corresponding to a string $t$, and we If there is an edge labeled with this letter $c$, then we can simply go over this edge, and get the vertex corresponding to $t + c$. If there is no such edge, since we want to maintain the invariant that the current state is the longest partial match in the processed string, we must find the longest string in the trie that's a proper suffix of the string $t$, and try to perform a transition from there. -For example, let the trie be constructed by the strings $ab$ and $bc$, and we are currently at the vertex corresponding to $ab$, which is an $\text{output}$. +For example, let the trie be constructed by the strings $ab$ and $bc$, and we are currently at the vertex corresponding to $ab$, which is also an $\text{output}$ vertex. To transition with the letter $c$, we are forced to go to the state corresponding to the string $b$, and from there follow the edge with the letter $c$.
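The transition rule described in the hunk above is usually realized with a memoized `go` function on top of the trie. The sketch below follows the conventions of the full aho_corasick.md article: the `t` array of vertex states with fields `next` and `go`, and the suffix-link helper `get_link`, are assumed from that article, not shown in these diffs.

```cpp
// Sketch: automaton step for state v and letter ch.
// If a trie edge exists, take it; otherwise fall back along suffix links.
int go(int v, char ch) {
    int c = ch - 'a';
    if (t[v].go[c] == -1) {
        if (t[v].next[c] != -1)
            t[v].go[c] = t[v].next[c];  // a real trie edge exists
        else
            // at the root we stay, elsewhere we retry from the longest proper suffix
            t[v].go[c] = (v == 0) ? 0 : go(get_link(v), ch);
    }
    return t[v].go[c];
}
```

In the $ab$ / $bc$ example, calling `go` with the state for $ab$ and the letter $c$ finds no trie edge, follows the suffix link to the state for $b$, and from there takes the real edge labeled $c$.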
From c3250a25bc42b37ec22233d539181b1f3fc04e30 Mon Sep 17 00:00:00 2001 From: "Abd El-Twab M. Fakhry" Date: Sat, 10 Feb 2024 18:56:28 +0200 Subject: [PATCH 028/324] Handle the degenerate case of two identical points. This fix addresses the situation where the convex hull consists of two identical points and the hull is strictly convex (i.e., collinear points are not included). --- src/geometry/convex-hull.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/src/geometry/convex-hull.md b/src/geometry/convex-hull.md index c2d4da2bb..962962107 100644 --- a/src/geometry/convex-hull.md +++ b/src/geometry/convex-hull.md @@ -86,6 +86,9 @@ void convex_hull(vector& a, bool include_collinear = false) { st.push_back(a[i]); } + if (include_collinear == false && st.size() == 2 && st[0] == st[1]) + st.pop_back(); + a = st; } ``` From 99f5ca34ab9c142fb577768a7fc3b98c75963f98 Mon Sep 17 00:00:00 2001 From: Mark Moretto <25012707+MarkMoretto@users.noreply.github.com> Date: Mon, 29 Jan 2024 09:47:56 -0500 Subject: [PATCH 029/324] Update push-relabel.md Minor formatting corrections. --- src/graph/push-relabel.md | 12 ++++-------- 1 file changed, 4 insertions(+), 8 deletions(-) diff --git a/src/graph/push-relabel.md b/src/graph/push-relabel.md index c1e75b83c..569a1198a 100644 --- a/src/graph/push-relabel.md +++ b/src/graph/push-relabel.md @@ -101,8 +101,7 @@ vector> capacity, flow; vector height, excess, seen; queue excess_vertices; -void push(int u, int v) -{ +void push(int u, int v) { int d = min(excess[u], capacity[u][v] - flow[u][v]); flow[u][v] += d; flow[v][u] -= d; @@ -112,8 +111,7 @@ void push(int u, int v) excess_vertices.push(v); } -void relabel(int u) -{ +void relabel(int u) { int d = inf; for (int i = 0; i < n; i++) { if (capacity[u][i] - flow[u][i] > 0) @@ -123,8 +121,7 @@ void relabel(int u) height[u] = d + 1; } -void discharge(int u) -{ +void discharge(int u) { while (excess[u] > 0) { if (seen[u] < n) { int v = seen[u]; @@ -139,8 +136,7 @@ void discharge(int u) } } -int max_flow(int s, int t) -{ +int max_flow(int s, int t) { height.assign(n, 0); height[s] = n; flow.assign(n, vector(n, 0)); From 941d6b1369afe8bfd346a9f0db450f6849c8a6cf Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Sat, 24 Feb 2024 19:34:32 +0100 Subject: [PATCH 030/324] add operator == to pt --- test/data/convex_hull.h | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/test/data/convex_hull.h b/test/data/convex_hull.h index ad23ddbf9..d3e4ae8f2 100644 --- a/test/data/convex_hull.h +++ b/test/data/convex_hull.h @@ -1,4 +1,11 @@ -struct pt { double x, y; }; +struct pt { + double x, y; + pt() {} + pt(double x, double y): x(x), y(y) {} + bool operator == (pt const& t) const { + return x == t.x && y == t.y; + } +}; struct ConvexHull { vector points; From e9258b9125b7e1adaa8e92bd4b73307d2d15b000 Mon Sep 17 00:00:00 2001 From: Oleh Prypin Date: Thu, 1 Feb 2024 22:21:18 +0100 Subject: [PATCH 031/324] Safely interpolate the page title into the "report issue" button * Currently it is possible that the interpolation escapes the confines of the quotes * Also in a future version of MkDocs, values will become escaped by default, so this value will become incorrect under the current approach --- src/overrides/partials/content.html | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/overrides/partials/content.html b/src/overrides/partials/content.html index 853a4201a..734db4c21 100644 --- a/src/overrides/partials/content.html +++ b/src/overrides/partials/content.html @@ -1,7 +1,7 @@ " } -If we 
need the totient of all numbers between $1$ and $n$, then factorizing all $n$ numbers is not efficient.
We can use the same idea as the [Sieve of Eratosthenes](sieve-of-eratosthenes.md).
It is still based on the property shown above, but instead of updating the temporary result for each prime factor for each number, we find all prime numbers and for each one update the temporary results of all numbers that are divisible by that prime number.

From e0b8036b8da74e15a98bdc3637481dfe32a477cb Mon Sep 17 00:00:00 2001
From: Vadym Tyshchenko
Date: Mon, 6 Jan 2025 23:39:08 +0100
Subject: [PATCH 279/324] Update current year in mkdocs.yml (#1409)

Update current year to 2025 in mkdocs.yml, which appears in a footer.
---
 mkdocs.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mkdocs.yml b/mkdocs.yml
index 067126929..ed15e7068 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -29,7 +29,7 @@ theme:
 repo_url: https://github.com/cp-algorithms/cp-algorithms
 repo_name: cp-algorithms/cp-algorithms
 edit_uri: edit/main/src/
-copyright: Text is available under the Creative Commons Attribution Share Alike 4.0 International License
Copyright © 2014 - 2024 by cp-algorithms contributors +copyright: Text is available under the Creative Commons Attribution Share Alike 4.0 International License
Copyright © 2014 - 2025 by cp-algorithms contributors extra_javascript: - javascript/config.js - https://cdnjs.cloudflare.com/polyfill/v3/polyfill.min.js?features=es6 From a5953a55dd063a9aa923ccf3aac7d49b9cd4b442 Mon Sep 17 00:00:00 2001 From: Thomas Zeger Date: Mon, 6 Jan 2025 17:43:17 -0500 Subject: [PATCH 280/324] Fix sentence structure in fft.md (#1408) --- src/algebra/fft.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/algebra/fft.md b/src/algebra/fft.md index 2e6532558..38ed620a5 100644 --- a/src/algebra/fft.md +++ b/src/algebra/fft.md @@ -97,7 +97,7 @@ It is easy to see that $$A(x) = A_0(x^2) + x A_1(x^2).$$ -The polynomials $A_0$ and $A_1$ are only half as much coefficients as the polynomial $A$. +The polynomials $A_0$ and $A_1$ have only half as many coefficients as the polynomial $A$. If we can compute the $\text{DFT}(A)$ in linear time using $\text{DFT}(A_0)$ and $\text{DFT}(A_1)$, then we get the recurrence $T_{\text{DFT}}(n) = 2 T_{\text{DFT}}\left(\frac{n}{2}\right) + O(n)$ for the time complexity, which results in $T_{\text{DFT}}(n) = O(n \log n)$ by the **master theorem**. Let's learn how we can accomplish that. From dfacc51299dabd2086acbcb1b7cc50ea03dcc139 Mon Sep 17 00:00:00 2001 From: Konstantinos Alatzas Date: Thu, 9 Jan 2025 21:52:23 +0200 Subject: [PATCH 281/324] Fix typos in intro-to-dp.md (#1411) * Fix typo in intro-to-dp.md * Fix grammatical error in intro-to-dp.md * Fix typo in intro-to-dp.md --- src/dynamic_programming/intro-to-dp.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/src/dynamic_programming/intro-to-dp.md b/src/dynamic_programming/intro-to-dp.md index ab2d3a1a3..8bc33fd82 100644 --- a/src/dynamic_programming/intro-to-dp.md +++ b/src/dynamic_programming/intro-to-dp.md @@ -7,7 +7,7 @@ tags: The essence of dynamic programming is to avoid repeated calculation. Often, dynamic programming problems are naturally solvable by recursion. In such cases, it's easiest to write the recursive solution, then save repeated states in a lookup table. This process is known as top-down dynamic programming with memoization. That's read "memoization" (like we are writing in a memo pad) not memorization. -One of the most basic, classic examples of this process is the fibonacci sequence. It's recursive formulation is $f(n) = f(n-1) + f(n-2)$ where $n \ge 2$ and $f(0)=0$ and $f(1)=1$. In C++, this would be expressed as: +One of the most basic, classic examples of this process is the fibonacci sequence. Its recursive formulation is $f(n) = f(n-1) + f(n-2)$ where $n \ge 2$ and $f(0)=0$ and $f(1)=1$. In C++, this would be expressed as: ```cpp int f(int n) { @@ -25,7 +25,7 @@ Our recursive function currently solves fibonacci in exponential time. This mean To increase the speed, we recognize that the number of subproblems is only $O(n)$. That is, in order to calculate $f(n)$ we only need to know $f(n-1),f(n-2), \dots ,f(0)$. Therefore, instead of recalculating these subproblems, we solve them once and then save the result in a lookup table. Subsequent calls will use this lookup table and immediately return a result, thus eliminating exponential work! -Each recursive call will check against a lookup table to see if the value has been calculated. This is done in $O(1)$ time. If we have previously calcuated it, return the result, otherwise, we calculate the function normally. The overall runtime is $O(n)$. This is an enormous improvement over our previous exponential time algorithm! 
+Each recursive call will check against a lookup table to see if the value has been calculated. This is done in $O(1)$ time. If we have previously calculated it, return the result, otherwise, we calculate the function normally. The overall runtime is $O(n)$. This is an enormous improvement over our previous exponential time algorithm! ```cpp const int MAXN = 100; @@ -88,7 +88,7 @@ This approach is called top-down, as we can call the function with a query value Until now you've only seen top-down dynamic programming with memoization. However, we can also solve problems with bottom-up dynamic programming. Bottom-up is exactly the opposite of top-down, you start at the bottom (base cases of the recursion), and extend it to more and more values. -To create a bottom-up approach for fibonacci numbers, we initilize the base cases in an array. Then, we simply use the recursive definition on array: +To create a bottom-up approach for fibonacci numbers, we initialize the base cases in an array. Then, we simply use the recursive definition on array: ```cpp const int MAXN = 100; From f2f28eacb0df8c29ecaac61fbcd02c5b624d797e Mon Sep 17 00:00:00 2001 From: ericmiranda7 <47097072+ericmiranda7@users.noreply.github.com> Date: Fri, 10 Jan 2025 07:05:26 +0530 Subject: [PATCH 282/324] Update tortoise_and_hare.md the happy number problem can be solved using a hashset, but it can also be solved in constant space using floyd's algo --- src/others/tortoise_and_hare.md | 1 + 1 file changed, 1 insertion(+) diff --git a/src/others/tortoise_and_hare.md b/src/others/tortoise_and_hare.md index 385ca69f4..9d2ba0a96 100644 --- a/src/others/tortoise_and_hare.md +++ b/src/others/tortoise_and_hare.md @@ -110,5 +110,6 @@ And since we let the slow pointer start at the start of the linked list, after $ # Problems: - [Linked List Cycle (EASY)](https://leetcode.com/problems/linked-list-cycle/) +- [Happy Number (Easy)](https://leetcode.com/problems/happy-number/) - [Find the Duplicate Number (Medium)](https://leetcode.com/problems/find-the-duplicate-number/) From 2ebecc186d57ecdd55b04dfd727005e98f8eb81e Mon Sep 17 00:00:00 2001 From: hazzlerr <127759611+hazzlerr@users.noreply.github.com> Date: Mon, 13 Jan 2025 17:21:04 -0500 Subject: [PATCH 283/324] Bug in the face order check (#1394) * Fixed face order check * Update test_planar_faces.cpp --- src/geometry/planar.md | 20 +++++++------------- test/test_planar_faces.cpp | 26 ++++++++++++++++++++++++++ 2 files changed, 33 insertions(+), 13 deletions(-) diff --git a/src/geometry/planar.md b/src/geometry/planar.md index 57c91ae2d..017399a7c 100644 --- a/src/geometry/planar.md +++ b/src/geometry/planar.md @@ -125,20 +125,14 @@ std::vector> find_faces(std::vector vertices, std::ve e = e1; } std::reverse(face.begin(), face.end()); - int sign = 0; - for (size_t j = 0; j < face.size(); j++) { - size_t j1 = (j + 1) % face.size(); - size_t j2 = (j + 2) % face.size(); - int64_t val = vertices[face[j]].cross(vertices[face[j1]], vertices[face[j2]]); - if (val > 0) { - sign = 1; - break; - } else if (val < 0) { - sign = -1; - break; - } + Point p1 = vertices[face[0]]; + __int128 sum = 0; + for (int j = 0; j < face.size(); ++j) { + Point p2 = vertices[face[j]]; + Point p3 = vertices[face[(j + 1) % face.size()]]; + sum += (p2 - p1).cross(p3 - p2); } - if (sign <= 0) { + if (sum <= 0) { faces.insert(faces.begin(), face); } else { faces.emplace_back(face); diff --git a/test/test_planar_faces.cpp b/test/test_planar_faces.cpp index 731bae4db..662c88cfb 100644 --- a/test/test_planar_faces.cpp +++ 
b/test/test_planar_faces.cpp @@ -93,8 +93,34 @@ void test_cycle_with_chain() { assert(equal_cycles(faces[1], {2, 5, 4, 5, 2, 1, 0, 3})); } +void test_ccw_angle() { + std::vector<Point> p = { + Point(0, 2), + Point(1, 3), + Point(0, 1), + Point(1, 0), + Point(-1, 0), + Point(-1, 3) + }; + + std::vector<std::vector<size_t>> adj = { + {1, 5}, + {0, 2}, + {1, 3}, + {2, 4}, + {3, 5}, + {0, 4} + }; + + auto faces = find_faces(p, adj); + assert(faces.size() == 2u); + assert(equal_cycles(faces[0], {0, 1, 2, 3, 4, 5})); + assert(equal_cycles(faces[1], {5, 4, 3, 2, 1, 0})); +} + int main() { test_simple(); test_degenerate(); test_cycle_with_chain(); + test_ccw_angle(); } From 6206986ddde5ac40f3d42464841e0abba27582f0 Mon Sep 17 00:00:00 2001 From: Mirco Paul <63196848+Electron1997@users.noreply.github.com> Date: Mon, 27 Jan 2025 14:49:02 +0100 Subject: [PATCH 284/324] Add HLD problems (roughly sorted by difficulty) --- src/graph/hld.md | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/src/graph/hld.md b/src/graph/hld.md index 2373cf391..bd56798fd 100644 --- a/src/graph/hld.md +++ b/src/graph/hld.md @@ -183,3 +183,8 @@ int query(int a, int b) { ## Practice problems - [SPOJ - QTREE - Query on a tree](https://www.spoj.com/problems/QTREE/) +- [CSES - Path Queries II](https://cses.fi/problemset/task/2134) +- [Codeforces - Subway Lines](https://codeforces.com/gym/101908/problem/L) +- [Codeforces - Tree Queries](https://codeforces.com/contest/1254/problem/D) +- [Codeforces - Tree or not Tree](https://codeforces.com/contest/117/problem/E) +- [Codeforces - The Tree](https://codeforces.com/contest/1017/problem/G) From d1fc330574b679203fedf85d978d03d5ea31e010 Mon Sep 17 00:00:00 2001 From: RED1 <34243699+SuperMinerYYT@users.noreply.github.com> Date: Mon, 27 Jan 2025 23:49:43 +0100 Subject: [PATCH 285/324] Update longest_increasing_subsequence.md fixed a typo (in the longest non decreasing subsequence section, it says "and make a slightly modification") --- src/sequences/longest_increasing_subsequence.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/sequences/longest_increasing_subsequence.md b/src/sequences/longest_increasing_subsequence.md index c72e933a8..a4c8f0fe4 100644 --- a/src/sequences/longest_increasing_subsequence.md +++ b/src/sequences/longest_increasing_subsequence.md @@ -300,7 +300,7 @@ This is in fact nearly the same problem. Only now it is allowed to use identical numbers in the subsequence. The solution is essentially also nearly the same. -We just have to change the inequality signs, and make a slightly modification to the binary search. +We just have to change the inequality signs, and make a slight modification to the binary search.
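As a side note to this patch: in the $O(n \log n)$ method the "slight modification" amounts to switching `lower_bound` to `upper_bound`, so that equal elements may extend a subsequence. A minimal self-contained sketch of the non-decreasing variant (not part of the patch; the function name `lis_non_decreasing` is ours):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// Sketch: length of the longest NON-DECREASING subsequence.
// d[len] holds the smallest possible tail of a subsequence of length len+1.
// upper_bound (instead of lower_bound, which gives the strictly increasing
// version) lets an element equal to a tail extend that subsequence.
int lis_non_decreasing(const vector<int>& a) {
    vector<int> d;
    for (int x : a) {
        auto it = upper_bound(d.begin(), d.end(), x);
        if (it == d.end())
            d.push_back(x);   // x extends the longest subsequence found so far
        else
            *it = x;          // x is a better (smaller) tail for this length
    }
    return d.size();
}
```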
### Number of longest increasing subsequences From 670cbaa2c8cfd742993a43542258f8ebd6a320c8 Mon Sep 17 00:00:00 2001 From: Mirco Paul <63196848+Electron1997@users.noreply.github.com> Date: Thu, 6 Feb 2025 22:26:53 +0100 Subject: [PATCH 286/324] Add practice problems Add some Burnside lemma practice problems sorted by difficulty --- src/combinatorics/burnside.md | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/src/combinatorics/burnside.md b/src/combinatorics/burnside.md index fa799399c..894b1e87d 100644 --- a/src/combinatorics/burnside.md +++ b/src/combinatorics/burnside.md @@ -266,3 +266,8 @@ int solve(int n, int m) { return sum / s.size(); } ``` +## Practice Problems +* [CSES - Counting Necklaces](https://cses.fi/problemset/task/2209) +* [CSES - Counting Grids](https://cses.fi/problemset/task/2210) +* [Codeforces - Buildings](https://codeforces.com/gym/101873/problem/B) +* [CS Academy - Cube Coloring](https://csacademy.com/contest/beta-round-8/task/cube-coloring/) From dc29826f28c7960efd657816a6dd4af22caf0ac7 Mon Sep 17 00:00:00 2001 From: CDTheGod <158807586+CDTheGod@users.noreply.github.com> Date: Wed, 5 Mar 2025 18:39:38 +0530 Subject: [PATCH 287/324] Update divide-and-conquer-dp.md --- src/dynamic_programming/divide-and-conquer-dp.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/dynamic_programming/divide-and-conquer-dp.md b/src/dynamic_programming/divide-and-conquer-dp.md index 457e17c24..a33cc7b45 100644 --- a/src/dynamic_programming/divide-and-conquer-dp.md +++ b/src/dynamic_programming/divide-and-conquer-dp.md @@ -113,7 +113,7 @@ both! - [SPOJ - LARMY](https://www.spoj.com/problems/LARMY/) - [SPOJ - NKLEAVES](https://www.spoj.com/problems/NKLEAVES/) - [Timus - Bicolored Horses](https://acm.timus.ru/problem.aspx?space=1&num=1167) -- [USACO - Circular Barn](http://www.usaco.org/index.php?page=viewproblem2&cpid=616) +- [USACO - Circular Barn](https://usaco.org/index.php?page=viewproblem2&cpid=626) - [UVA - Arranging Heaps](https://onlinejudge.org/external/125/12524.pdf) - [UVA - Naming Babies](https://onlinejudge.org/external/125/12594.pdf) From b861d1ec80d7e08ff7ebfc71c8cf262db65cf346 Mon Sep 17 00:00:00 2001 From: Franklin Date: Sat, 8 Mar 2025 02:54:08 -0500 Subject: [PATCH 288/324] Update topological-sort.md In the implementation example, the if statement within the dfs function is missing a pair of brackets. 
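For context, the `dfs` patched below belongs to the article's depth-first topological sort, which looks roughly as follows (a self-contained sketch, not part of the patch):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// Sketch: each vertex is appended to `ans` only after all vertices reachable
// from it, so reversing `ans` at the end yields a valid topological order
// (assuming the graph is acyclic).
int n;                    // number of vertices
vector<vector<int>> adj;  // adjacency list
vector<bool> visited;
vector<int> ans;

void dfs(int v) {
    visited[v] = true;
    for (int u : adj[v]) {
        if (!visited[u]) {
            dfs(u);
        }
    }
    ans.push_back(v);
}

void topological_sort() {
    visited.assign(n, false);
    ans.clear();
    for (int i = 0; i < n; ++i) {
        if (!visited[i])
            dfs(i);
    }
    reverse(ans.begin(), ans.end());
}
```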
--- src/graph/topological-sort.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/src/graph/topological-sort.md b/src/graph/topological-sort.md index f8e8e9218..acfad5331 100644 --- a/src/graph/topological-sort.md +++ b/src/graph/topological-sort.md @@ -65,8 +65,9 @@ vector<int> ans; void dfs(int v) { visited[v] = true; for (int u : adj[v]) { - if (!visited[u]) + if (!visited[u]) { dfs(u); + } } ans.push_back(v); } From 2c8dc7b96a5bd484eaa1715656ef6d474abf352c Mon Sep 17 00:00:00 2001 From: Jakob Kogler Date: Sun, 9 Mar 2025 13:30:00 +0100 Subject: [PATCH 289/324] Remove deprecated tags plugin flag --- mkdocs.yml | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/mkdocs.yml b/mkdocs.yml index ed15e7068..60f512bc6 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -61,8 +61,7 @@ plugins: hooks: on_env: "hooks:on_env" - search - - tags: - tags_file: tags.md + - tags - literate-nav: nav_file: navigation.md - git-revision-date-localized: From bbe3477bc857e34ad82dad40a8442da3a7204ab7 Mon Sep 17 00:00:00 2001 From: Jakob Kogler Date: Sun, 9 Mar 2025 13:43:34 +0100 Subject: [PATCH 290/324] Make tags index work again --- src/tags.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/tags.md b/src/tags.md index c462960cd..6759f272e 100644 --- a/src/tags.md +++ b/src/tags.md @@ -2,4 +2,4 @@ This file contains a global index of all tags used on the pages. -[TAGS] \ No newline at end of file +<!-- material/tags --> From 42952998071723361d0ec88314183af6f35f2fd6 Mon Sep 17 00:00:00 2001 From: "Yurii A." Date: Mon, 24 Mar 2025 21:49:18 +0200 Subject: [PATCH 291/324] Manacher's algorithm testcases (#1393) * Fix inconsistencies in Manacher's algorithm * Add test for manacher_odd * Update test_manacher_odd.cpp * empty commit to run tests? --------- Co-authored-by: Yurii A. Co-authored-by: Oleksandr Kulkov --- src/string/manacher.md | 8 ++--- test/test_manacher_odd.cpp | 74 ++++++++++++++++++++++++++++++++++++++ 2 files changed, 78 insertions(+), 4 deletions(-) create mode 100644 test/test_manacher_odd.cpp diff --git a/src/string/manacher.md b/src/string/manacher.md index 2c7cb35b6..0c8bd5928 100644 --- a/src/string/manacher.md +++ b/src/string/manacher.md @@ -49,7 +49,7 @@ Such an algorithm is slow, it can calculate the answer only in $O(n^2)$. The implementation of the trivial algorithm is: ```cpp -vector<int> manacher_odd(string s) { +vector<int> manacher_odd_trivial(string s) { int n = s.size(); s = "$" + s + "^"; vector<int> p(n + 2); @@ -68,7 +68,7 @@ Terminal characters `$` and `^` were used to avoid dealing with ends of the stri We describe the algorithm to find all the sub-palindromes with odd length, i. e. to calculate $d_{odd}[]$. -For fast calculation we'll maintain the **borders $(l, r)$** of the rightmost found (sub-)palindrome (i. e. the current rightmost (sub-)palindrome is $s[l+1] s[l+2] \dots s[r-1]$). Initially we set $l = 0, r = 1$, which corresponds to the empty string. +For fast calculation we'll maintain the **exclusive borders $(l, r)$** of the rightmost found (sub-)palindrome (i. e. the current rightmost (sub-)palindrome is $s[l+1] s[l+2] \dots s[r-1]$). Initially we set $l = 0, r = 1$, which corresponds to the empty string. So, we want to calculate $d_{odd}[i]$ for the next $i$, and all the previous values in $d_{odd}[]$ have been already calculated. We do the following: @@ -140,12 +140,12 @@ For calculating $d_{odd}[]$, we get the following code. Things to note: - The while loop denotes the trivial algorithm. We launch it irrespective of the value of $k$.
- If the size of palindrome centered at $i$ is $x$, then $d_{odd}[i]$ stores $\frac{x+1}{2}$. -```cpp +```{.cpp file=manacher_odd} vector<int> manacher_odd(string s) { int n = s.size(); s = "$" + s + "^"; vector<int> p(n + 2); - int l = 1, r = 1; + int l = 0, r = 1; for(int i = 1; i <= n; i++) { p[i] = max(0, min(r - i, p[l + (r - i)])); while(s[i - p[i]] == s[i + p[i]]) { diff --git a/test/test_manacher_odd.cpp b/test/test_manacher_odd.cpp new file mode 100644 index 000000000..cd46b7488 --- /dev/null +++ b/test/test_manacher_odd.cpp @@ -0,0 +1,74 @@ +#include <bits/stdc++.h> +using namespace std; + +#include "manacher_odd.h" + +string getRandomString(size_t n, uint32_t seed, char minLetter='a', char maxLetter='b') { + assert(minLetter <= maxLetter); + const size_t nLetters = static_cast<size_t>(maxLetter) - static_cast<size_t>(minLetter) + 1; + std::uniform_int_distribution<size_t> distr(0, nLetters - 1); + std::mt19937 gen(seed); + + string res; + res.reserve(n); + + for (size_t i = 0; i < n; ++i) + res.push_back('a' + distr(gen)); + + return res; +} + +bool testManacherOdd(const std::string &s) { + const auto n = s.size(); + const auto d_odd = manacher_odd(s); + + if (d_odd.size() != n) + return false; + + const auto inRange = [&](size_t idx) { + return idx >= 0 && idx < n; + }; + + for (size_t i = 0; i < n; ++i) { + if (d_odd[i] < 0) + return false; + for (int d = 0; d < d_odd[i]; ++d) { + const auto idx1 = i - d; + const auto idx2 = i + d; + + if (!inRange(idx1) || !inRange(idx2)) + return false; + if (s[idx1] != s[idx2]) + return false; + } + + const auto idx1 = i - d_odd[i]; + const auto idx2 = i + d_odd[i]; + if (inRange(idx1) && inRange(idx2) && s[idx1] == s[idx2]) + return false; + } + + return true; +} + +int main() { + vector<string> testCases; + + testCases.push_back(""); + for (size_t i = 1; i <= 25; ++i) { + auto s = string{}; + s.resize(i, 'a'); + testCases.push_back(move(s)); + } + testCases.push_back("abba"); + testCases.push_back("abccbaasd"); + for (size_t n = 9; n <= 100; n += 10) + testCases.push_back(getRandomString(n, /* seed */ n, 'a', 'd')); + for (size_t n = 7; n <= 100; n += 10) + testCases.push_back(getRandomString(n, /* seed */ n)); + + for (const auto &s: testCases) + assert(testManacherOdd(s)); + + return 0; +} From 82a06ff7246bc161dfc6a176f2adf79dba19f72b Mon Sep 17 00:00:00 2001 From: Yury Semenov Date: Mon, 24 Mar 2025 23:15:38 +0300 Subject: [PATCH 292/324] Added a paragraph on golden section search (#1119) * . * Update ternary_search.md --------- Co-authored-by: Oleksandr Kulkov --- src/num_methods/ternary_search.md | 18 ++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/src/num_methods/ternary_search.md b/src/num_methods/ternary_search.md index 59afaaa82..7e73c7771 100644 --- a/src/num_methods/ternary_search.md +++ b/src/num_methods/ternary_search.md @@ -57,6 +57,20 @@ If $f(x)$ takes integer parameter, the interval $[l, r]$ becomes discrete. Since The difference occurs in the stopping criterion of the algorithm. Ternary search will have to stop when $(r - l) < 3$, because in that case we can no longer select $m_1$ and $m_2$ to be different from each other as well as from $l$ and $r$, and this can cause an infinite loop. Once $(r - l) < 3$, the remaining pool of candidate points $(l, l + 1, \ldots, r)$ needs to be checked to find the point which produces the maximum value $f(x)$. +### Golden section search + +In some cases computing $f(x)$ may be quite slow, but reducing the number of iterations is infeasible due to precision issues.
Fortunately, it is possible to compute $f(x)$ only once at each iteration (except the first one). + +To see how to do this, let's revisit the selection method for $m_1$ and $m_2$. Suppose that we select $m_1$ and $m_2$ on $[l, r]$ in such a way that $\frac{r - l}{r - m_1} = \frac{r - l}{m_2 - l} = \varphi$ where $\varphi$ is some constant. In order to reduce the amount of computations, we want to select such $\varphi$ that on the next iteration one of the new evaluation points $m_1'$, $m_2'$ will coincide with either $m_1$ or $m_2$, so that we can reuse the already computed function value. + +Now suppose that after the current iteration we set $l = m_1$. Then the point $m_1'$ will satisfy $\frac{r - m_1}{r - m_1'} = \varphi$. We want this point to coincide with $m_2$, meaning that $\frac{r - m_1}{r - m_2} = \varphi$. + +Multiplying both sides of $\frac{r - m_1}{r - m_2} = \varphi$ by $\frac{r - m_2}{r - l}$ we obtain $\frac{r - m_1}{r - l} = \varphi\frac{r - m_2}{r - l}$. Note that $\frac{r - m_1}{r - l} = \frac{1}{\varphi}$ and $\frac{r - m_2}{r - l} = \frac{r - l + l - m_2}{r - l} = 1 - \frac{1}{\varphi}$. Substituting that and multiplying by $\varphi$, we obtain the following equation: + +$\varphi^2 - \varphi - 1 = 0$ + +This is a well-known golden section equation. Solving it yields $\frac{1 \pm \sqrt{5}}{2}$. Since $\varphi$ must be positive, we obtain $\varphi = \frac{1 + \sqrt{5}}{2}$. By applying the same logic to the case when we set $r = m_2$ and want $m_2'$ to coincide with $m_1$, we obtain the same value of $\varphi$ as well. So, if we choose $m_1 = l + \frac{r - l}{1 + \varphi}$ and $m_2 = r - \frac{r - l}{1 + \varphi}$, on each iteration we can re-use one of the values $f(x)$ computed on the previous iteration. + ## Implementation ```cpp @@ -81,6 +95,7 @@ Here `eps` is in fact the absolute error (not taking into account errors due to Instead of the criterion `r - l > eps`, we can select a constant number of iterations as a stopping criterion. The number of iterations should be chosen to ensure the required accuracy. Typically, in most programming challenges the error limit is ${10}^{-6}$ and thus 200 - 300 iterations are sufficient. Also, the number of iterations doesn't depend on the values of $l$ and $r$, so the number of iterations corresponds to the required relative error. 
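To make the derivation above concrete, here is a self-contained sketch of golden-section search for a unimodal maximum (not part of the patch; the function and parameter names are ours). Note how each iteration reuses one previously computed value of $f$:

```cpp
#include <cmath>
#include <functional>

// Sketch: golden-section search for the maximum of a unimodal f on [l, r].
// m1 and m2 are chosen as derived above, so after shrinking the interval one
// of the new evaluation points coincides with an old one and its function
// value can be reused; f is evaluated only once per iteration.
double golden_section_search(const std::function<double(double)>& f,
                             double l, double r, int iterations = 200) {
    const double phi = (1 + std::sqrt(5.0)) / 2;
    double m1 = l + (r - l) / (1 + phi);
    double m2 = r - (r - l) / (1 + phi);
    double f1 = f(m1), f2 = f(m2);
    for (int it = 0; it < iterations; it++) {
        if (f1 < f2) {            // maximum lies in [m1, r]
            l = m1;
            m1 = m2, f1 = f2;     // old m2 becomes the new m1
            m2 = r - (r - l) / (1 + phi);
            f2 = f(m2);
        } else {                  // maximum lies in [l, m2]
            r = m2;
            m2 = m1, f2 = f1;     // old m1 becomes the new m2
            m1 = l + (r - l) / (1 + phi);
            f1 = f(m1);
        }
    }
    return (l + r) / 2;
}
```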
## Practice Problems + - [Codeforces - New Bakery](https://codeforces.com/problemset/problem/1978/B) - [Codechef - Race time](https://www.codechef.com/problems/AMCS03) - [Hackerearth - Rescuer](https://www.hackerearth.com/problem/algorithm/rescuer-2d2495cb/) @@ -95,5 +110,8 @@ Instead of the criterion `r - l > eps`, we can select a constant number of itera * [Codeforces - Devu and his Brother](https://codeforces.com/problemset/problem/439/D) * [Codechef - Is This JEE ](https://www.codechef.com/problems/ICM2003) * [Codeforces - Restorer Distance](https://codeforces.com/contest/1355/problem/E) +* [TIMUS 1058 Chocolate](https://acm.timus.ru/problem.aspx?space=1&num=1058) +* [TIMUS 1436 Billboard](https://acm.timus.ru/problem.aspx?space=1&num=1436) +* [TIMUS 1451 Beerhouse Tale](https://acm.timus.ru/problem.aspx?space=1&num=1451) * [TIMUS 1719 Kill the Shaitan-Boss](https://acm.timus.ru/problem.aspx?space=1&num=1719) * [TIMUS 1913 Titan Ruins: Alignment of Forces](https://acm.timus.ru/problem.aspx?space=1&num=1913) From 7c1254ba49761d3eaedb7e7387c3489a1e5647de Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Mon, 24 Mar 2025 21:26:18 +0100 Subject: [PATCH 293/324] Empty commit for GitHub Actions --- src/num_methods/simulated_annealing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/num_methods/simulated_annealing.md b/src/num_methods/simulated_annealing.md index abd035950..a5f6b466a 100644 --- a/src/num_methods/simulated_annealing.md +++ b/src/num_methods/simulated_annealing.md @@ -201,4 +201,4 @@ bool P(double E, double E_next, double T, mt19937 rng) { - [USACO Jan 2017 - Subsequence Reversal](https://usaco.org/index.php?page=viewproblem2&cpid=698) - [Deltix Summer 2021 - DIY Tree](https://codeforces.com/contest/1556/problem/H) -- [AtCoder Contest Scheduling](https://atcoder.jp/contests/intro-heuristics/tasks/intro_heuristics_a) \ No newline at end of file +- [AtCoder Contest Scheduling](https://atcoder.jp/contests/intro-heuristics/tasks/intro_heuristics_a) From 9b8c5b638057a351ed79439d9ebd73002190fd81 Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Mon, 24 Mar 2025 21:33:53 +0100 Subject: [PATCH 294/324] Close
</center> tag --- src/num_methods/simulated_annealing.md | 1 + 1 file changed, 1 insertion(+) diff --git a/src/num_methods/simulated_annealing.md b/src/num_methods/simulated_annealing.md index a5f6b466a..f9bd1686d 100644 --- a/src/num_methods/simulated_annealing.md +++ b/src/num_methods/simulated_annealing.md @@ -44,6 +44,7 @@ At the same time we also keep a track of the best state $s_{best}$ across all it A visual representation of simulated annealing, searching for the maxima of this function with multiple local maxima..
This animation by [Kingpin13](https://commons.wikimedia.org/wiki/User:Kingpin13) is distributed under CC0 1.0 license. +</center>
### Temperature($T$) and decay($u$) From e24c67450d38ffdde84b1191a85f9c4b1687d054 Mon Sep 17 00:00:00 2001 From: FloppaInspector <136809316+FloppaInspector@users.noreply.github.com> Date: Sun, 6 Apr 2025 00:39:54 +0900 Subject: [PATCH 295/324] Update knuth-optimization.md In time complexity proof cancellation explanation, j=N -> j=N-1 --- src/dynamic_programming/knuth-optimization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/dynamic_programming/knuth-optimization.md b/src/dynamic_programming/knuth-optimization.md index 535c7caf2..35978b839 100644 --- a/src/dynamic_programming/knuth-optimization.md +++ b/src/dynamic_programming/knuth-optimization.md @@ -77,7 +77,7 @@ $$ \sum\limits_{i=1}^N \sum\limits_{j=i}^{N-1} [opt(i+1,j+1)-opt(i,j)]. $$ -As you see, most of the terms in this expression cancel each other out, except for positive terms with $j=N$ and negative terms with $i=1$. Thus, the whole sum can be estimated as +As you see, most of the terms in this expression cancel each other out, except for positive terms with $j=N-1$ and negative terms with $i=1$. Thus, the whole sum can be estimated as $$ \sum\limits_{k=1}^N[opt(k,N)-opt(1,k)] = O(n^2), From 1e3a93051eb22ad0238d3265246b650ce8a8954b Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Tue, 8 Apr 2025 11:45:34 +0200 Subject: [PATCH 296/324] Remove navigation tabs, add toggle to hide sidebar (#1440) --- .github/workflows/build.yml | 2 +- mkdocs.yml | 3 ++- scripts/install-mkdocs.sh | 1 + 3 files changed, 4 insertions(+), 2 deletions(-) diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index e820830eb..37a634b16 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -19,7 +19,7 @@ jobs: - name: Set up Python uses: actions/setup-python@v5.2.0 with: - python-version: '3.8' + python-version: '3.11' - name: Install mkdocs-material run: | scripts/install-mkdocs.sh diff --git a/mkdocs.yml b/mkdocs.yml index 60f512bc6..466463564 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -22,7 +22,6 @@ theme: icon: repo: fontawesome/brands/github features: - - navigation.tabs - toc.integrate - search.suggest - content.code.copy @@ -57,6 +56,8 @@ markdown_extensions: permalink: true plugins: + - toggle-sidebar: + toggle_button: all - mkdocs-simple-hooks: hooks: on_env: "hooks:on_env" diff --git a/scripts/install-mkdocs.sh b/scripts/install-mkdocs.sh index 4202c9a97..9c4e73c3f 100755 --- a/scripts/install-mkdocs.sh +++ b/scripts/install-mkdocs.sh @@ -2,6 +2,7 @@ pip install \ "mkdocs-material>=9.0.2" \ + mkdocs-toggle-sidebar-plugin \ mkdocs-macros-plugin \ mkdocs-literate-nav \ mkdocs-git-authors-plugin \ From 21299e74b39389acb6bf8ad257d65cc1875ffa93 Mon Sep 17 00:00:00 2001 From: Michael Hayter Date: Mon, 14 Apr 2025 22:32:08 -0400 Subject: [PATCH 297/324] add_script updates? 
--- src/algebra/factorization.md | 4 +++- src/algebra/sieve-of-eratosthenes.md | 4 +++- src/data_structures/fenwick.md | 4 +++- src/geometry/basic-geometry.md | 16 ++++++++++++---- src/geometry/convex_hull_trick.md | 8 ++++++-- src/geometry/delaunay.md | 4 +++- src/geometry/intersecting_segments.md | 12 +++++++++--- src/geometry/lattice-points.md | 8 ++++++-- src/geometry/manhattan-distance.md | 8 ++++++-- src/geometry/minkowski.md | 4 +++- src/geometry/planar.md | 8 ++++++-- src/geometry/point-location.md | 8 ++++++-- src/geometry/tangents-to-two-circles.md | 4 +++- src/geometry/vertical_decomposition.md | 4 +++- src/graph/2SAT.md | 8 ++++++-- src/graph/edmonds_karp.md | 16 ++++++++++++---- src/graph/hld.md | 4 +++- src/graph/lca.md | 4 +++- src/graph/lca_farachcoltonbender.md | 4 +++- src/graph/rmq_linear.md | 8 ++++++-- src/graph/topological-sort.md | 6 +++--- src/others/stern_brocot_tree_farey_sequences.md | 4 +++- src/others/tortoise_and_hare.md | 12 +++++++++--- 23 files changed, 120 insertions(+), 42 deletions(-) diff --git a/src/algebra/factorization.md b/src/algebra/factorization.md index 58a43b961..11bf4049c 100644 --- a/src/algebra/factorization.md +++ b/src/algebra/factorization.md @@ -246,7 +246,9 @@ If $p$ is smaller than $\sqrt{n}$, the repetition will likely start in $O(\sqrt[ Here is a visualization of such a sequence $\{x_i \bmod p\}$ with $n = 2206637$, $p = 317$, $x_0 = 2$ and $f(x) = x^2 + 1$. From the form of the sequence you can see very clearly why the algorithm is called Pollard's $\rho$ algorithm. -
<center>![Pollard's rho visualization](pollard_rho.png)</center>
+<div style="text-align: center;">
+  <img src="pollard_rho.png" alt="Pollard's rho visualization">
+</div>
Yet, there is still an open question. How can we exploit the properties of the sequence $\{x_i \bmod p\}$ to our advantage without even knowing the number $p$ itself? diff --git a/src/algebra/sieve-of-eratosthenes.md b/src/algebra/sieve-of-eratosthenes.md index a07e6dc02..560d176e7 100644 --- a/src/algebra/sieve-of-eratosthenes.md +++ b/src/algebra/sieve-of-eratosthenes.md @@ -19,7 +19,9 @@ And we continue this procedure until we have processed all numbers in the row. In the following image you can see a visualization of the algorithm for computing all prime numbers in the range $[1; 16]$. It can be seen, that quite often we mark numbers as composite multiple times. -
<center>![Sieve of Eratosthenes](sieve_eratosthenes.png)</center>
+<div style="text-align: center;">
+  <img src="sieve_eratosthenes.png" alt="Sieve of Eratosthenes">
+</div>
The idea behind is this: A number is prime, if none of the smaller prime numbers divides it. diff --git a/src/data_structures/fenwick.md b/src/data_structures/fenwick.md index 82755b452..439885b83 100644 --- a/src/data_structures/fenwick.md +++ b/src/data_structures/fenwick.md @@ -128,7 +128,9 @@ where $|$ is the bitwise OR operator. The following image shows a possible interpretation of the Fenwick tree as tree. The nodes of the tree show the ranges they cover. -
<center>![Binary Indexed Tree](binary_indexed_tree.png)</center>
+<div style="text-align: center;">
+  <img src="binary_indexed_tree.png" alt="Binary Indexed Tree">
+</div>
## Implementation diff --git a/src/geometry/basic-geometry.md b/src/geometry/basic-geometry.md index 4c052a784..89acb1642 100644 --- a/src/geometry/basic-geometry.md +++ b/src/geometry/basic-geometry.md @@ -112,7 +112,9 @@ The dot (or scalar) product $\mathbf a \cdot \mathbf b$ for vectors $\mathbf a$ Geometrically it is product of the length of the first vector by the length of the projection of the second vector onto the first one. As you may see from the image below this projection is nothing but $|\mathbf a| \cos \theta$ where $\theta$ is the angle between $\mathbf a$ and $\mathbf b$. Thus $\mathbf a\cdot \mathbf b = |\mathbf a| \cos \theta \cdot |\mathbf b|$. -
<center>![](https://upload.wikimedia.org/wikipedia/commons/thumb/3/3e/Dot_Product.svg/300px-Dot_Product.svg.png)</center>
+<div style="text-align: center;">
+  <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/3e/Dot_Product.svg/300px-Dot_Product.svg.png" alt="">
+</div>
The dot product holds some notable properties: @@ -181,7 +183,9 @@ To see the next important property we should take a look at the set of points $\ You can see that this set of points is exactly the set of points for which the projection onto $\mathbf a$ is the point $C \cdot \dfrac{\mathbf a}{|\mathbf a|}$ and they form a hyperplane orthogonal to $\mathbf a$. You can see the vector $\mathbf a$ alongside with several such vectors having same dot product with it in 2D on the picture below: -
<center>![Vectors having same dot product with a](https://i.imgur.com/eyO7St4.png)</center>
+<div style="text-align: center;">
+  <img src="https://i.imgur.com/eyO7St4.png" alt="Vectors having same dot product with a">
+</div>
In 2D these vectors will form a line, in 3D they will form a plane. Note that this result allows us to define a line in 2D as $\mathbf r\cdot \mathbf n=C$ or $(\mathbf r - \mathbf r_0)\cdot \mathbf n=0$ where $\mathbf n$ is vector orthogonal to the line and $\mathbf r_0$ is any vector already present on the line and $C = \mathbf r_0\cdot \mathbf n$. @@ -192,14 +196,18 @@ In the same manner a plane can be defined in 3D. ### Definition Assume you have three vectors $\mathbf a$, $\mathbf b$ and $\mathbf c$ in 3D space joined in a parallelepiped as in the picture below: -
<center>![Three vectors](https://upload.wikimedia.org/wikipedia/commons/thumb/3/3e/Parallelepiped_volume.svg/240px-Parallelepiped_volume.svg.png)</center>
+<div style="text-align: center;">
+  <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/3e/Parallelepiped_volume.svg/240px-Parallelepiped_volume.svg.png" alt="Three vectors">
+</div>
How would you calculate its volume? From school we know that we should multiply the area of the base with the height, which is projection of $\mathbf a$ onto direction orthogonal to base. That means that if we define $\mathbf b \times \mathbf c$ as the vector which is orthogonal to both $\mathbf b$ and $\mathbf c$ and which length is equal to the area of the parallelogram formed by $\mathbf b$ and $\mathbf c$ then $|\mathbf a\cdot (\mathbf b\times\mathbf c)|$ will be equal to the volume of the parallelepiped. For integrity we will say that $\mathbf b\times \mathbf c$ will be always directed in such way that the rotation from the vector $\mathbf b$ to the vector $\mathbf c$ from the point of $\mathbf b\times \mathbf c$ is always counter-clockwise (see the picture below). -
<center>![cross product](https://upload.wikimedia.org/wikipedia/commons/thumb/b/b0/Cross_product_vector.svg/250px-Cross_product_vector.svg.png)</center>
+<div style="text-align: center;">
+  <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/b/b0/Cross_product_vector.svg/250px-Cross_product_vector.svg.png" alt="cross product">
+</div>
This defines the cross (or vector) product $\mathbf b\times \mathbf c$ of the vectors $\mathbf b$ and $\mathbf c$ and the triple product $\mathbf a\cdot(\mathbf b\times \mathbf c)$ of the vectors $\mathbf a$, $\mathbf b$ and $\mathbf c$. diff --git a/src/geometry/convex_hull_trick.md b/src/geometry/convex_hull_trick.md index f9c938fb1..052073e2c 100644 --- a/src/geometry/convex_hull_trick.md +++ b/src/geometry/convex_hull_trick.md @@ -17,7 +17,9 @@ The idea of this approach is to maintain a lower convex hull of linear functions Actually it would be a bit more convenient to consider them not as linear functions, but as points $(k;b)$ on the plane such that we will have to find the point which has the least dot product with a given point $(x;1)$, that is, for this point $kx+b$ is minimized which is the same as initial problem. Such minimum will necessarily be on lower convex envelope of these points as can be seen below: -
<center>![lower convex hull](convex_hull_trick.png)</center>
+<div style="text-align: center;">
+  <img src="convex_hull_trick.png" alt="lower convex hull">
+</div>
One has to keep points on the convex hull and normal vectors of the hull's edges. When you have a $(x;1)$ query you'll have to find the normal vector closest to it in terms of angles between them, then the optimum linear function will correspond to one of its endpoints. @@ -90,7 +92,9 @@ Assume we're in some vertex corresponding to half-segment $[l,r)$ and the functi Here is the illustration of what is going on in the vertex when we add new function: -
<center>![Li Chao Tree vertex](li_chao_vertex.png)</center>
+<div style="text-align: center;">
+  <img src="li_chao_vertex.png" alt="Li Chao Tree vertex">
+</div>
Let's go to implementation now. Once again we will use complex numbers to keep linear functions. diff --git a/src/geometry/delaunay.md b/src/geometry/delaunay.md index c5f5c22d1..68d67ce8f 100644 --- a/src/geometry/delaunay.md +++ b/src/geometry/delaunay.md @@ -30,7 +30,9 @@ Because of the duality, we only need a fast algorithm to compute only one of $V$ ## Quad-edge data structure During the algorithm $D$ will be stored inside the quad-edge data structure. This structure is described in the picture: -
<center>![Quad-Edge](quad-edge.png)</center>
+<div style="text-align: center;">
+  <img src="quad-edge.png" alt="Quad-Edge">
+</div>
In the algorithm we will use the following functions on edges: diff --git a/src/geometry/intersecting_segments.md b/src/geometry/intersecting_segments.md index a5eba5f6b..ac26c8fd5 100644 --- a/src/geometry/intersecting_segments.md +++ b/src/geometry/intersecting_segments.md @@ -16,18 +16,24 @@ The naive solution algorithm is to iterate over all pairs of segments in $O(n^2) Let's draw a vertical line $x = -\infty$ mentally and start moving this line to the right. In the course of its movement, this line will meet with segments, and at each time a segment intersect with our line it intersects in exactly one point (we will assume that there are no vertical segments). -
<center>![sweep line and line segment intersection](sweep_line_1.png)</center>
+<div style="text-align: center;">
+  <img src="sweep_line_1.png" alt="sweep line and line segment intersection">
+</div>
Thus, for each segment, at some point in time, its point will appear on the sweep line, then with the movement of the line, this point will move, and finally, at some point, the segment will disappear from the line. We are interested in the **relative order of the segments** along the vertical. Namely, we will store a list of segments crossing the sweep line at a given time, where the segments will be sorted by their $y$-coordinate on the sweep line. -
<center>![relative order of the segments across sweep line](sweep_line_2.png)</center>
+<div style="text-align: center;">
+  <img src="sweep_line_2.png" alt="relative order of the segments across sweep line">
+</div>
This order is interesting because intersecting segments will have the same $y$-coordinate at least at one time: -
<center>![intersection point having same y-coordinate](sweep_line_3.png)</center>
+<div style="text-align: center;">
+  <img src="sweep_line_3.png" alt="intersection point having same y-coordinate">
+</div>
We formulate key statements: diff --git a/src/geometry/lattice-points.md b/src/geometry/lattice-points.md index 8bd9db346..e3d6faf7e 100644 --- a/src/geometry/lattice-points.md +++ b/src/geometry/lattice-points.md @@ -38,7 +38,9 @@ We have two cases: This amount is the same as the number of points such that $0 < y \leq (k - \lfloor k \rfloor) \cdot x + (b - \lfloor b \rfloor)$. So we reduced our problem to $k'= k - \lfloor k \rfloor$, $b' = b - \lfloor b \rfloor$ and both $k'$ and $b'$ less than $1$ now. Here is a picture, we just summed up blue points and subtracted the blue linear function from the black one to reduce problem to smaller values for $k$ and $b$: -
<center>![Subtracting floored linear function](lattice.png)</center>
+<div style="text-align: center;">
+  <img src="lattice.png" alt="Subtracting floored linear function">
+</div>
- $k < 1$ and $b < 1$. @@ -51,7 +53,9 @@ We have two cases: which returns us back to the case $k>1$. You can see new reference point $O'$ and axes $X'$ and $Y'$ in the picture below: -
<center>![New reference and axes](mirror.png)</center>
+<div style="text-align: center;">
+  <img src="mirror.png" alt="New reference and axes">
+</div>
As you see, in new reference system linear function will have coefficient $\tfrac 1 k$ and its zero will be in the point $\lfloor k\cdot n + b \rfloor-(k\cdot n+b)$ which makes formula above correct. ## Complexity analysis diff --git a/src/geometry/manhattan-distance.md b/src/geometry/manhattan-distance.md index 2aeb746af..4c725626e 100644 --- a/src/geometry/manhattan-distance.md +++ b/src/geometry/manhattan-distance.md @@ -12,7 +12,9 @@ $$d(p,q) = |x_p - x_q| + |y_p - y_q|$$ Defined this way, the distance corresponds to the so-called [Manhattan (taxicab) geometry](https://en.wikipedia.org/wiki/Taxicab_geometry), in which the points are considered intersections in a well designed city, like Manhattan, where you can only move on the streets horizontally or vertically, as shown in the image below: -
<center>![Manhattan Distance](https://upload.wikimedia.org/wikipedia/commons/thumb/0/08/Manhattan_distance.svg/220px-Manhattan_distance.svg.png)</center>
+<div style="text-align: center;">
+  <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/0/08/Manhattan_distance.svg/220px-Manhattan_distance.svg.png" alt="Manhattan Distance">
+</div>
This images show some of the smallest paths from one black point to the other, all of them with length $12$. @@ -77,7 +79,9 @@ Also, we may realize that $\alpha$ is a [spiral similarity](https://en.wikipedia Here's an image to help visualizing the transformation: -
<center>![Chebyshev transformation](chebyshev-transformation.png)</center>
+<div style="text-align: center;">
+  <img src="chebyshev-transformation.png" alt="Chebyshev transformation">
+</div>
## Manhattan Minimum Spanning Tree diff --git a/src/geometry/minkowski.md b/src/geometry/minkowski.md index d3fd93d5d..f2661d867 100644 --- a/src/geometry/minkowski.md +++ b/src/geometry/minkowski.md @@ -41,7 +41,9 @@ We repeat the following steps while $i < |P|$ or $j < |Q|$. Here is a nice visualization, which may help you understand what is going on. -
<center>![Visual](minkowski.gif)</center>
+<div style="text-align: center;">
+  <img src="minkowski.gif" alt="Visual">
+</div>
## Distance between two polygons One of the most common applications of Minkowski sum is computing the distance between two convex polygons (or simply checking whether they intersect). diff --git a/src/geometry/planar.md b/src/geometry/planar.md index 017399a7c..2cfc81282 100644 --- a/src/geometry/planar.md +++ b/src/geometry/planar.md @@ -53,7 +53,9 @@ It's quite clear that the complexity of the algorithm is $O(m \log m)$ because o At the first glance it may seem that finding faces of a disconnected graph is not much harder because we can run the same algorithm for each connected component. However, the components may be drawn in a nested way, forming **holes** (see the image below). In this case the inner face of some component becomes the outer face of some other components and has a complex disconnected border. Dealing with such cases is quite hard, one possible approach is to identify nested components with [point location](point-location.md) algorithms. -
<center>![Planar graph with holes](planar_hole.png)</center>
+<div style="text-align: center;">
+  <img src="planar_hole.png" alt="Planar graph with holes">
+</div>
## Implementation The following implementation returns a vector of vertices for each face, outer face goes first. @@ -147,7 +149,9 @@ std::vector<std::vector<size_t>> find_faces(std::vector<Point> vertices, std::ve Sometimes you are not given a graph explicitly, but rather as a set of line segments on a plane, and the actual graph is formed by intersecting those segments, as shown in the picture below. In this case you have to build the graph manually. The easiest way to do so is as follows. Fix a segment and intersect it with all other segments. Then sort all intersection points together with the two endpoints of the segment lexicographically and add them to the graph as vertices. Also link each two adjacent vertices in lexicographical order by an edge. After doing this procedure for all edges we will obtain the graph. Of course, we should ensure that two equal intersection points will always correspond to the same vertex. The easiest way to do this is to store the points in a map by their coordinates, regarding points whose coordinates differ by a small number (say, less than $10^{-9}$) as equal. This algorithm works in $O(n^2 \log n)$. -
<center>![Implicitly defined graph](planar_implicit.png)</center>
+<div style="text-align: center;">
+  <img src="planar_implicit.png" alt="Implicitly defined graph">
+</div>
## Implementation ```{.cpp file=planar_implicit} diff --git a/src/geometry/point-location.md b/src/geometry/point-location.md index 10c0d3f51..c6d21b7f8 100644 --- a/src/geometry/point-location.md +++ b/src/geometry/point-location.md @@ -15,7 +15,9 @@ This problem may arise when you need to locate some points in a Voronoi diagram Firstly, for each query point $p\ (x_0, y_0)$ we want to find such an edge that if the point belongs to any edge, the point lies on the edge we found, otherwise this edge must intersect the line $x = x_0$ at some unique point $(x_0, y)$ where $y < y_0$ and this $y$ is maximum among all such edges. The following image shows both cases. -
<center>![Image of Goal](point_location_goal.png)</center>
+<div style="text-align: center;">
+  <img src="point_location_goal.png" alt="Image of Goal">
+</div>
We will solve this problem offline using the sweep line algorithm. Let's iterate over x-coordinates of query points and edges' endpoints in increasing order and keep a set of edges $s$. For each x-coordinate we will add some events beforehand. @@ -27,7 +29,9 @@ Finally, for each query point we will add one _get_ event for its x-coordinate. For each x-coordinate we will sort the events by their types in order (_vertical_, _get_, _remove_, _add_). The following image shows all events in sorted order for each x-coordinate. -
<center>![Image of Events](point_location_events.png)</center>
+<div style="text-align: center;">
+  <img src="point_location_events.png" alt="Image of Events">
+</div>
We will keep two sets during the sweep-line process. A set $t$ for all non-vertical edges, and one set $vert$ especially for the vertical ones. diff --git a/src/geometry/tangents-to-two-circles.md b/src/geometry/tangents-to-two-circles.md index 4f4dfa4e9..546150a62 100644 --- a/src/geometry/tangents-to-two-circles.md +++ b/src/geometry/tangents-to-two-circles.md @@ -14,7 +14,9 @@ The described algorithm will also work in the case when one (or both) circles de ## The number of common tangents The number of common tangents to two circles can be **0,1,2,3,4** and **infinite**. Look at the images for different cases. -
!["Different cases of tangents common to two circles"](tangents-to-two-circles.png)
+
+ +
Here, we won't be considering **degenerate** cases, i.e *when the circles coincide (in this case they have infinitely many common tangents), or one circle lies inside the other (in this case they have no common tangents, or if the circles are tangent, there is one common tangent).* diff --git a/src/geometry/vertical_decomposition.md b/src/geometry/vertical_decomposition.md index 934db8044..96853994f 100644 --- a/src/geometry/vertical_decomposition.md +++ b/src/geometry/vertical_decomposition.md @@ -39,7 +39,9 @@ For simplicity we will show how to do this for an upper segment, the algorithm f Here is a graphic representation of the three cases. -
<center>![Visual](triangle_union.png)</center>
+<div style="text-align: center;">
+  <img src="triangle_union.png" alt="Visual">
+</div>
Finally we should remark on processing all the additions of $1$ or $-1$ on all stripes in $[x_1, x_2]$. For each addition of $w$ on $[x_1, x_2]$ we can create events $(x_1, w),\ (x_2, -w)$ and process all these events with a sweep line. diff --git a/src/graph/2SAT.md b/src/graph/2SAT.md index b66316bbe..fbd1a6a35 100644 --- a/src/graph/2SAT.md +++ b/src/graph/2SAT.md @@ -39,7 +39,9 @@ b \Rightarrow a & \lnot b \Rightarrow \lnot a & b \Rightarrow \lnot a & c \Right You can see the implication graph in the following image: -
!["Implication Graph of 2-SAT example"](2SAT.png)
+
+ +
It is worth paying attention to the property of the implication graph: if there is an edge $a \Rightarrow b$, then there also is an edge $\lnot b \Rightarrow \lnot a$. @@ -59,7 +61,9 @@ The following image shows all strongly connected components for the example. As we can check easily, neither of the four components contain a vertex $x$ and its negation $\lnot x$, therefore the example has a solution. We will learn in the next paragraphs how to compute a valid assignment, but just for demonstration purposes the solution $a = \text{false}$, $b = \text{false}$, $c = \text{false}$ is given. -
!["Strongly Connected Components of the 2-SAT example"](2SAT_SCC.png)
+
+ +
Now we construct the algorithm for finding the solution of the 2-SAT problem on the assumption that the solution exists. diff --git a/src/graph/edmonds_karp.md b/src/graph/edmonds_karp.md index ef844f970..a86a475f9 100644 --- a/src/graph/edmonds_karp.md +++ b/src/graph/edmonds_karp.md @@ -43,7 +43,9 @@ The source $s$ is origin of all the water, and the water can only drain in the s The following image shows a flow network. The first value of each edge represents the flow, which is initially 0, and the second value represents the capacity. -
<center>![Flow network](Flow1.png)</center>
+<div style="text-align: center;">
+  <img src="Flow1.png" alt="Flow network">
+</div>
The value of the flow of a network is the sum of all the flows that get produced in the source $s$, or equivalently to the sum of all the flows that are consumed by the sink $t$. A **maximal flow** is a flow with the maximal possible value. @@ -53,7 +55,9 @@ In the visualization with water pipes, the problem can be formulated in the foll how much water can we push through the pipes from the source to the sink? The following image shows the maximal flow in the flow network. -
<center>![Maximal flow](Flow9.png)</center>
+<div style="text-align: center;">
+  <img src="Flow9.png" alt="Maximal flow">
+</div>
## Ford-Fulkerson method @@ -79,7 +83,9 @@ we update $f((u, v)) ~\text{+=}~ C$ and $f((v, u)) ~\text{-=}~ C$ for every edge Here is an example to demonstrate the method. We use the same flow network as above. Initially we start with a flow of 0. -
<center>![Flow network](Flow1.png)</center>
+<div style="text-align: center;">
+  <img src="Flow1.png" alt="Flow network">
+</div>
We can find the path $s - A - B - t$ with the residual capacities 7, 5, and 8. Their minimum is 5, therefore we can increase the flow along this path by 5. @@ -200,7 +206,9 @@ It says that the capacity of the maximum flow has to be equal to the capacity of In the following image, you can see the minimum cut of the flow network we used earlier. It shows that the capacity of the cut $\{s, A, D\}$ and $\{B, C, t\}$ is $5 + 3 + 2 = 10$, which is equal to the maximum flow that we found. Other cuts will have a bigger capacity, like the capacity between $\{s, A\}$ and $\{B, C, D, t\}$ is $4 + 3 + 5 = 12$. -
<center>![Minimum cut](Cut.png)</center>
+<div style="text-align: center;">
+  <img src="Cut.png" alt="Minimum cut">
+</div>
A minimum cut can be found after performing a maximum flow computation using the Ford-Fulkerson method. One possible minimum cut is the following: diff --git a/src/graph/hld.md b/src/graph/hld.md index bd56798fd..36caf852e 100644 --- a/src/graph/hld.md +++ b/src/graph/hld.md @@ -53,7 +53,9 @@ Since we can move from one heavy path to another only through a light edge (each The following image illustrates the decomposition of a sample tree. The heavy edges are thicker than the light edges. The heavy paths are marked by dotted boundaries. -
<center>![Image of HLD](hld.png)</center>
+<div style="text-align: center;">
+  <img src="hld.png" alt="Image of HLD">
+</div>
## Sample problems diff --git a/src/graph/lca.md b/src/graph/lca.md index db36c18a0..f577d4b2f 100644 --- a/src/graph/lca.md +++ b/src/graph/lca.md @@ -30,7 +30,9 @@ So the $\text{LCA}(v_1, v_2)$ can be uniquely determined by finding the vertex w Let's illustrate this idea. Consider the following graph and the Euler tour with the corresponding heights: -
<center>![LCA_Euler_Tour](LCA_Euler.png)</center>
+<div style="text-align: center;">
+  <img src="LCA_Euler.png" alt="LCA_Euler_Tour">
+</div>
$$\begin{array}{|l|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline diff --git a/src/graph/lca_farachcoltonbender.md b/src/graph/lca_farachcoltonbender.md index f67365511..506509359 100644 --- a/src/graph/lca_farachcoltonbender.md +++ b/src/graph/lca_farachcoltonbender.md @@ -22,7 +22,9 @@ The LCA of two nodes $u$ and $v$ is the node between the occurrences of $u$ and In the following picture you can see a possible Euler-Tour of a graph and in the list below you can see the visited nodes and their heights. -
<center>![LCA_Euler_Tour](LCA_Euler.png)</center>
+<div style="text-align: center;">
+  <img src="LCA_Euler.png" alt="LCA_Euler_Tour">
+</div>
$$\begin{array}{|l|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline diff --git a/src/graph/rmq_linear.md b/src/graph/rmq_linear.md index 9ce17f341..ab0ae216a 100644 --- a/src/graph/rmq_linear.md +++ b/src/graph/rmq_linear.md @@ -24,7 +24,9 @@ The array `A` will be partitioned into 3 parts: the prefix of the array up to th The root of the tree will be a node corresponding to the minimum element of the array `A`, the left subtree will be the Cartesian tree of the prefix, and the right subtree will be a Cartesian tree of the suffix. In the following image you can see one array of length 10 and the corresponding Cartesian tree. -
<center>![Image of Cartesian Tree](CartesianTree.png)</center>
+<div style="text-align: center;">
+  <img src="CartesianTree.png" alt="Image of Cartesian Tree">
+</div>
The range minimum query `[l, r]` is equivalent to the lowest common ancestor query `[l', r']`, where `l'` is the node corresponding to the element `A[l]` and `r'` the node corresponding to the element `A[r]`. Indeed the node corresponding to the smallest element in the range has to be an ancestor of all nodes in the range, therefor also from `l'` and `r'`. @@ -33,7 +35,9 @@ And is also has to be the lowest ancestor, because otherwise `l'` and `r'` would In the following image you can see the LCA queries for the RMQ queries `[1, 3]` and `[5, 9]`. In the first query the LCA of the nodes `A[1]` and `A[3]` is the node corresponding to `A[2]` which has the value 2, and in the second query the LCA of `A[5]` and `A[9]` is the node corresponding to `A[8]` which has the value 3. -
<center>![LCA queries in the Cartesian Tree](CartesianTreeLCA.png)</center>
+<div style="text-align: center;">
+  <img src="CartesianTreeLCA.png" alt="LCA queries in the Cartesian Tree">
+</div>
Such a tree can be built in $O(N)$ time and the Farach-Colton and Benders algorithm can preprocess the tree in $O(N)$ and find the LCA in $O(1)$. diff --git a/src/graph/topological-sort.md b/src/graph/topological-sort.md index acfad5331..909e40652 100644 --- a/src/graph/topological-sort.md +++ b/src/graph/topological-sort.md @@ -20,9 +20,9 @@ Here is one given graph together with its topological order: Topological order can be **non-unique** (for example, if there exist three vertices $a$, $b$, $c$ for which there exist paths from $a$ to $b$ and from $a$ to $c$ but not paths from $b$ to $c$ or from $c$ to $b$). The example graph also has multiple topological orders, a second topological order is the following: -
<center>
-![second topological order](topological_3.png)
-</center>
+<div style="text-align: center;">
+  <img src="topological_3.png" alt="second topological order">
+</div>
A Topological order may **not exist** at all. It only exists, if the directed graph contains no cycles. diff --git a/src/others/stern_brocot_tree_farey_sequences.md b/src/others/stern_brocot_tree_farey_sequences.md index bb9dfd725..f725de53b 100644 --- a/src/others/stern_brocot_tree_farey_sequences.md +++ b/src/others/stern_brocot_tree_farey_sequences.md @@ -34,7 +34,9 @@ Continuing this process to infinity this covers *all* positive fractions. Additi Before proving these properties, let us actually show a visualization of the Stern-Brocot tree, rather than the list representation. Every fraction in the tree has two children. Each child is the mediant of the closest ancestor on the left and closest ancestor to the right. -
<center>![Stern-Brocot tree](https://upload.wikimedia.org/wikipedia/commons/thumb/3/37/SternBrocotTree.svg/1024px-SternBrocotTree.svg.png)</center>
+<div style="text-align: center;">
+  <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/37/SternBrocotTree.svg/1024px-SternBrocotTree.svg.png" alt="Stern-Brocot tree">
+</div>
## Proofs diff --git a/src/others/tortoise_and_hare.md b/src/others/tortoise_and_hare.md index 9d2ba0a96..783185542 100644 --- a/src/others/tortoise_and_hare.md +++ b/src/others/tortoise_and_hare.md @@ -7,7 +7,9 @@ tags: Given a linked list where the starting point of that linked list is denoted by **head**, and there may or may not be a cycle present. For instance: -
!["Linked list with cycle"](tortoise_hare_algo.png)
+
+ +
Here we need to find out the point **C**, i.e the starting point of the cycle. @@ -27,7 +29,9 @@ So, it involved two steps: 6. If they point to any same node at any point of their journey, it would indicate that the cycle indeed exists in the linked list. 7. If we get null, it would indicate that the linked list has no cycle. -
!["Found cycle"](tortoise_hare_cycle_found.png)
+
+ +
Now, that we have figured out that there is a cycle present in the linked list, for the next step we need to find out the starting point of cycle, i.e., **C**. ### Step 2: Starting point of the cycle @@ -81,7 +85,9 @@ When the slow pointer has moved $k \cdot L$ steps, and the fast pointer has cove Lets try to calculate the distance covered by both of the pointers till they point they met within the cycle. -
!["Proof"](tortoise_hare_proof.png)
+
+ +
$slowDist = a + xL + b$ , $x\ge0$ From a1adc156765822f6f53c5d2cb432ae0818c00882 Mon Sep 17 00:00:00 2001 From: Michael Hayter Date: Tue, 15 Apr 2025 20:31:59 -0400 Subject: [PATCH 298/324] Update k-th.md #1215 basics solved. --- src/sequences/k-th.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/sequences/k-th.md b/src/sequences/k-th.md index 02b0c0ec2..0f588c5cb 100644 --- a/src/sequences/k-th.md +++ b/src/sequences/k-th.md @@ -68,7 +68,7 @@ T order_statistics (std::vector<T> a, unsigned n, unsigned k) ## Notes * The randomized algorithm above is named [quickselect](https://en.wikipedia.org/wiki/Quickselect). You should do random shuffle on $A$ before calling it or use a random element as a barrier for it to run properly. There are also deterministic algorithms that solve the specified problem in linear time, such as [median of medians](https://en.wikipedia.org/wiki/Median_of_medians). -* A deterministic linear solution is implemented in C++ standard library as [std::nth_element](https://en.cppreference.com/w/cpp/algorithm/nth_element). +* [std::nth_element](https://en.cppreference.com/w/cpp/algorithm/nth_element) solves this in C++ but gcc's implementation runs in worst case $O(n \log n )$ time. * Finding $K$ smallest elements can be reduced to finding $K$-th element with a linear overhead, as they're exactly the elements that are smaller than $K$-th. ## Practice Problems From a65db14a555810306ec6af024bc9f0afd2e0a60b Mon Sep 17 00:00:00 2001 From: Michael Hayter Date: Tue, 15 Apr 2025 20:46:24 -0400 Subject: [PATCH 299/324] Update finding-negative-cycle-in-graph.md Resolves 1419. --- src/graph/finding-negative-cycle-in-graph.md | 21 +++++++++----------- 1 file changed, 9 insertions(+), 12 deletions(-) diff --git a/src/graph/finding-negative-cycle-in-graph.md b/src/graph/finding-negative-cycle-in-graph.md index 7589c09db..a9b87c7f9 100644 --- a/src/graph/finding-negative-cycle-in-graph.md +++ b/src/graph/finding-negative-cycle-in-graph.md @@ -30,35 +30,33 @@ Do $N$ iterations of Bellman-Ford algorithm. If there were no changes on the las struct Edge { int a, b, cost; }; - -int n, m; + +int n; vector<Edge> edges; const int INF = 1000000000; - + void solve() { - vector<int> d(n, INF); + vector<int> d(n, 0); vector<int> p(n, -1); int x; - - d[0] = 0; - + for (int i = 0; i < n; ++i) { x = -1; for (Edge e : edges) { - if (d[e.a] < INF && d[e.a] + e.cost < d[e.b]) { + if (d[e.a] + e.cost < d[e.b]) { d[e.b] = max(-INF, d[e.a] + e.cost); p[e.b] = e.a; x = e.b; } } } - + if (x == -1) { cout << "No negative cycle found."; } else { for (int i = 0; i < n; ++i) x = p[x]; - + vector<int> cycle; for (int v = x;; v = p[v]) { cycle.push_back(v); if (v == x && cycle.size() > 1) break; } reverse(cycle.begin(), cycle.end()); - + cout << "Negative cycle: "; for (int v : cycle) cout << v << ' '; cout << endl; } } - ``` From b21558de0edde5d4edb1954b565fc6e10b24f5ee Mon Sep 17 00:00:00 2001 From: Mrityunjai Singh Date: Wed, 16 Apr 2025 01:32:22 -0700 Subject: [PATCH 300/324] aho corasick text change --- src/string/aho_corasick.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/string/aho_corasick.md b/src/string/aho_corasick.md index d56ed5c1c..df0998713 100644 --- a/src/string/aho_corasick.md +++ b/src/string/aho_corasick.md @@ -209,7 +209,7 @@ Assume that at the moment we stand in a vertex $v$ and consider a character $c$. 1. $go[v][c] = -1$. In this case, we may assign $go[v][c] = go[u][c]$, which is already known by the induction hypothesis; 2.
$go[v][c] = w \neq -1$. In this case, we may assign $link[w] = go[u][c]$. -In this way, we spend $O(1)$ time per each pair of a vertex and a character, making the running time $O(mk)$. The major overhead here is that we copy a lot of transitions from $u$ in the first case, while the transitions of the second case form the trie and sum up to $m$ over all vertices. To avoid the copying of $go[u][c]$, we may use a persistent array data structure, using which we initially copy $go[u]$ into $go[v]$ and then only update values for characters in which the transition would differ. This leads to the $O(m \log k)$ algorithm. +In this way, we spend $O(1)$ time per each pair of a vertex and a character, making the running time $O(mk)$. The major overhead here is that we copy a lot of transitions from $u$ in the first case, while the transitions of the second case form the trie and sum up to $m$ over all vertices. To avoid the copying of $go[u][c]$, we may use a persistent array data structure, using which we initially copy $go[u]$ into $go[v]$ and then only update values for characters in which the transition would differ. This leads to the $O(n \log k)$ algorithm. ## Applications From 4cdc4df2c108f77037a5e0b5e508de7548ec7c8d Mon Sep 17 00:00:00 2001 From: jxu <7989982+jxu@users.noreply.github.com> Date: Thu, 17 Apr 2025 13:52:46 -0400 Subject: [PATCH 301/324] Fibonacci: better motivation for matrix form --- src/algebra/fibonacci-numbers.md | 43 ++++++++++++++++++++++++++++++-- 1 file changed, 41 insertions(+), 2 deletions(-) diff --git a/src/algebra/fibonacci-numbers.md b/src/algebra/fibonacci-numbers.md index 22403419a..adc409d3e 100644 --- a/src/algebra/fibonacci-numbers.md +++ b/src/algebra/fibonacci-numbers.md @@ -116,9 +116,48 @@ In this way, we obtain a linear solution, $O(n)$ time, saving all the values pri ### Matrix form -It is easy to prove the following relation: +To go from $(F_n, F_{n-1})$ to $(F_{n+1}, F_n)$, we can express the linear recurrence as a 2x2 matrix multiplication: -$$\begin{pmatrix} 1 & 1 \cr 1 & 0 \cr\end{pmatrix} ^ n = \begin{pmatrix} F_{n+1} & F_{n} \cr F_{n} & F_{n-1} \cr\end{pmatrix}$$ +$$ +\begin{pmatrix} +1 & 1 \\ +1 & 0 +\end{pmatrix} +\begin{pmatrix} +F_n \\ +F_{n-1} +\end{pmatrix} += +\begin{pmatrix} +F_n + F_{n-1} \\ +F_{n} +\end{pmatrix} += +\begin{pmatrix} +F_{n+1} \\ +F_{n} +\end{pmatrix} +$$ + +This lets us treat iterating the recurrence as repeated matrix multiplication, which has nice properties. In particular, + +$$ +\begin{pmatrix} +1 & 1 \\ +1 & 0 +\end{pmatrix}^n +\begin{pmatrix} +F_1 \\ +F_0 +\end{pmatrix} += +\begin{pmatrix} +F_{n+1} \\ +F_{n} +\end{pmatrix} +$$ + +where $F_1 = 1, F_0 = 0$. Thus, in order to find $F_n$ in $O(log n)$ time, we must raise the matrix to n. (See [Binary exponentiation](binary-exp.md)) From 7520d3e539694f053614ed9e8e1fd3bf980338c7 Mon Sep 17 00:00:00 2001 From: jxu <7989982+jxu@users.noreply.github.com> Date: Thu, 17 Apr 2025 14:03:29 -0400 Subject: [PATCH 302/324] Fibonacci: Cassini's identity proof sketches --- src/algebra/fibonacci-numbers.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/src/algebra/fibonacci-numbers.md b/src/algebra/fibonacci-numbers.md index 22403419a..7c805a629 100644 --- a/src/algebra/fibonacci-numbers.md +++ b/src/algebra/fibonacci-numbers.md @@ -22,6 +22,8 @@ Fibonacci numbers possess a lot of interesting properties. Here are a few of the $$F_{n-1} F_{n+1} - F_n^2 = (-1)^n$$ +This can be proved by induction. 
A one-line proof due to Knuth comes from taking the determinant of the 2x2 matrix form below. + * The "addition" rule: $$F_{n+k} = F_k F_{n+1} + F_{k-1} F_n$$ From 375ca6b9368247b01a0722a8d184418c3c5bebe1 Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Fri, 18 Apr 2025 20:42:14 +0200 Subject: [PATCH 303/324] log n -> \log n --- src/algebra/fibonacci-numbers.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/algebra/fibonacci-numbers.md b/src/algebra/fibonacci-numbers.md index adc409d3e..e63e52081 100644 --- a/src/algebra/fibonacci-numbers.md +++ b/src/algebra/fibonacci-numbers.md @@ -159,7 +159,7 @@ $$ where $F_1 = 1, F_0 = 0$. -Thus, in order to find $F_n$ in $O(log n)$ time, we must raise the matrix to n. (See [Binary exponentiation](binary-exp.md)) +Thus, in order to find $F_n$ in $O(\log n)$ time, we must raise the matrix to n. (See [Binary exponentiation](binary-exp.md)) ```{.cpp file=fibonacci_matrix} struct matrix { From 8c5bc69c37613441b49493974c9bfa7b04aa7deb Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Fri, 18 Apr 2025 23:47:45 +0200 Subject: [PATCH 304/324] m -> n in Aho-Corasick --- src/string/aho_corasick.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/string/aho_corasick.md b/src/string/aho_corasick.md index df0998713..36ec40c7d 100644 --- a/src/string/aho_corasick.md +++ b/src/string/aho_corasick.md @@ -209,7 +209,7 @@ Assume that at the moment we stand in a vertex $v$ and consider a character $c$. 1. $go[v][c] = -1$. In this case, we may assign $go[v][c] = go[u][c]$, which is already known by the induction hypothesis; 2. $go[v][c] = w \neq -1$. In this case, we may assign $link[w] = go[u][c]$. -In this way, we spend $O(1)$ time per each pair of a vertex and a character, making the running time $O(mk)$. The major overhead here is that we copy a lot of transitions from $u$ in the first case, while the transitions of the second case form the trie and sum up to $m$ over all vertices. To avoid the copying of $go[u][c]$, we may use a persistent array data structure, using which we initially copy $go[u]$ into $go[v]$ and then only update values for characters in which the transition would differ. This leads to the $O(n \log k)$ algorithm. +In this way, we spend $O(1)$ time per each pair of a vertex and a character, making the running time $O(nk)$. The major overhead here is that we copy a lot of transitions from $u$ in the first case, while the transitions of the second case form the trie and sum up to $n$ over all vertices. To avoid the copying of $go[u][c]$, we may use a persistent array data structure, using which we initially copy $go[u]$ into $go[v]$ and then only update values for characters in which the transition would differ. This leads to the $O(n \log k)$ algorithm. ## Applications From 115cc22af2ba00e76f123f2a382afc44ff9710d4 Mon Sep 17 00:00:00 2001 From: jxu <7989982+jxu@users.noreply.github.com> Date: Fri, 18 Apr 2025 23:11:59 -0400 Subject: [PATCH 305/324] Fibonacci: restore matrix power form Maybe a dotted line would show the matrix [[F2, F1],[F1,F0]] can be viewed as two column vectors. Using only the matrix power saves one matrix-vector multiply. 
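As an aside to these Fibonacci patches: the article has its own `struct matrix` listing, but the matrix-power identity is easy to exercise stand-alone. A minimal sketch (the names `Mat`, `mul`, and `fib` are ours) computing $F_n$ in $O(\log n)$ by binary exponentiation of the matrix [[1,1],[1,0]]:

```cpp
#include <cstdint>

// Sketch: F(0)=0, F(1)=1; [[1,1],[1,0]]^n = [[F(n+1),F(n)],[F(n),F(n-1)]],
// so fib(n) is entry (0,1) of the n-th matrix power.
struct Mat {
    std::uint64_t m[2][2];
};

Mat mul(const Mat& a, const Mat& b) {
    Mat c = {{{0, 0}, {0, 0}}};
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            for (int k = 0; k < 2; k++)
                c.m[i][j] += a.m[i][k] * b.m[k][j];  // reduce mod M here if needed
    return c;
}

std::uint64_t fib(std::uint64_t n) {
    Mat result = {{{1, 0}, {0, 1}}};  // identity matrix
    Mat base = {{{1, 1}, {1, 0}}};
    while (n > 0) {                   // binary exponentiation
        if (n & 1)
            result = mul(result, base);
        base = mul(base, base);
        n >>= 1;
    }
    return result.m[0][1];
}
```

Since $F_{94}$ already overflows 64 bits, in practice one reduces modulo some $M$ inside `mul`.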
--- src/algebra/fibonacci-numbers.md | 14 +++++++++++++- 1 file changed, 13 insertions(+), 1 deletion(-) diff --git a/src/algebra/fibonacci-numbers.md b/src/algebra/fibonacci-numbers.md index e63e52081..e1acfdcb0 100644 --- a/src/algebra/fibonacci-numbers.md +++ b/src/algebra/fibonacci-numbers.md @@ -157,7 +157,19 @@ F_{n} \end{pmatrix} $$ -where $F_1 = 1, F_0 = 0$. +where $F_1 = 1, F_0 = 0$. +In fact, since +$$ +\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} += \begin{pmatrix} F_2 & F_1 \\ F_1 & F_0 \end{pmatrix} +$$ + +we can use the matrix directly: + +$$ +\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^n += \begin{pmatrix} F_{n+1} & F_n \\ F_n & F_{n-1} \end{pmatrix} +$$ Thus, in order to find $F_n$ in $O(\log n)$ time, we must raise the matrix to n. (See [Binary exponentiation](binary-exp.md)) From 59c31747c57735cfe83639f9a1dac64bb800dd78 Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Sat, 19 Apr 2025 09:56:55 +0200 Subject: [PATCH 306/324] Update src/algebra/fibonacci-numbers.md --- src/algebra/fibonacci-numbers.md | 1 + 1 file changed, 1 insertion(+) diff --git a/src/algebra/fibonacci-numbers.md b/src/algebra/fibonacci-numbers.md index e1acfdcb0..dd8e1bd1a 100644 --- a/src/algebra/fibonacci-numbers.md +++ b/src/algebra/fibonacci-numbers.md @@ -159,6 +159,7 @@ $$ where $F_1 = 1, F_0 = 0$. In fact, since + $$ \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} F_2 & F_1 \\ F_1 & F_0 \end{pmatrix} From 06d6a835e50e0d66de3c817a322eb292e69d23d5 Mon Sep 17 00:00:00 2001 From: Michael Hayter Date: Mon, 21 Apr 2025 21:11:00 -0400 Subject: [PATCH 307/324] Update fibonacci-numbers.md Try indentation. --- src/algebra/fibonacci-numbers.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/algebra/fibonacci-numbers.md b/src/algebra/fibonacci-numbers.md index 7c805a629..176a73600 100644 --- a/src/algebra/fibonacci-numbers.md +++ b/src/algebra/fibonacci-numbers.md @@ -22,7 +22,7 @@ Fibonacci numbers possess a lot of interesting properties. Here are a few of the $$F_{n-1} F_{n+1} - F_n^2 = (-1)^n$$ -This can be proved by induction. A one-line proof due to Knuth comes from taking the determinant of the 2x2 matrix form below. +>This can be proved by induction. A one-line proof by Knuth comes from taking the determinant of the 2x2 matrix form below. * The "addition" rule: From 3de210a3040f8b2dab5cf64286f3c28e02747291 Mon Sep 17 00:00:00 2001 From: Michael Hayter Date: Mon, 21 Apr 2025 22:11:52 -0400 Subject: [PATCH 308/324] updated script to account for when text follows center tag --- src/geometry/manhattan-distance.md | 28 ++++++++++++++++------------ src/graph/edmonds_karp.md | 20 ++++++++++++++++---- src/graph/mst_prim.md | 5 ++++- src/graph/second_best_mst.md | 6 ++++-- src/graph/topological-sort.md | 8 ++++---- 5 files changed, 44 insertions(+), 23 deletions(-) diff --git a/src/geometry/manhattan-distance.md b/src/geometry/manhattan-distance.md index 4c725626e..87514868a 100644 --- a/src/geometry/manhattan-distance.md +++ b/src/geometry/manhattan-distance.md @@ -88,17 +88,19 @@ Here's an image to help visualizing the transformation: The Manhattan MST problem consists of, given some points in the plane, find the edges that connect all the points and have a minimum total sum of weights. The weight of an edge that connects two points is their Manhattan distance. For simplicity, we assume that all points have different locations. 
 Here we show a way of finding the MST in $O(n \log{n})$ by finding, for each point, its nearest neighbor in each octant, as represented by the image below. This gives $O(n)$ candidate edges which, as we show below, are guaranteed to contain the MST. The final step is then to run a standard MST algorithm on these candidates, for example, [Kruskal algorithm using disjoint set union](https://cp-algorithms.com/graph/mst_kruskal_with_dsu.html).
-<center>![8 octants picture](manhattan-mst-octants.png)
+
+ 8 octants picture + *The 8 octants relative to a point S* +
The algorithm shown here was first presented in a paper from [H. Zhou, N. Shenoy, and W. Nichollos (2002)](https://ieeexplore.ieee.org/document/913303). There is also another know algorithm that uses a Divide and conquer approach by [J. Stolfi](https://www.academia.edu/15667173/On_computing_all_north_east_nearest_neighbors_in_the_L1_metric), which is also very interesting and only differ in the way they find the nearest neighbor in each octant. They both have the same complexity, but the one presented here is easier to implement and has a lower constant factor. First, let's understand why it is enough to consider only the nearest neighbor in each octant. The idea is to show that for a point $s$ and any two other points $p$ and $q$ in the same octant, $d(p, q) < \max(d(s, p), d(s, q))$. This is important, because it shows that if there was a MST where $s$ is connected to both $p$ and $q$, we could erase one of these edges and add the edge $(p,q)$, which would decrease the total cost. To prove this, we assume without loss of generality that $p$ and $q$ are in the octanct $R_1$, which is defined by: $x_s \leq x$ and $x_s - y_s > x - y$, and then do some casework. The image below give some intuition on why this is true. -
-<center>![unique nearest neighbor](manhattan-mst-uniqueness.png)
+
+ unique nearest neighbor + *Intuitively, the limitation of the octant makes it impossible that $p$ and $q$ are both closer to $s$ than to each other* +
Therefore, the main question is how to find the nearest neighbor in each octant for every single of the $n$ points. @@ -109,13 +111,15 @@ For simplicity we focus on the NNE octant ($R_1$ in the image above). All other We will use a sweep-line approach. We process the points from south-west to north-east, that is, by non-decreasing $x + y$. We also keep a set of points which don't have their nearest neighbor yet, which we call "active set". We add the images below to help visualize the algorithm. -
-<center>![manhattan-mst-sweep](manhattan-mst-sweep-line-1.png)
- -
![manhattan-mst-sweep](manhattan-mst-sweep-line-2.png) +
+ manhattan-mst-sweep + *In black with an arrow you can see the direction of the line-sweep. All the points below this lines are in the active set, and the points above are still not processed. In green we see the points which are in the octant of the processed point. In red the points that are not in the searched octant.* +
-*In this image we see the active set after processing the point $p$. Note that the $2$ green points of the previous image had $p$ in its north-north-east octant and are not in the active set anymore, because they already found their nearest neighbor.*
+
+ manhattan-mst-sweep + *In this image we see the active set after processing the point $p$. Note that the $2$ green points of the previous image had $p$ in its north-north-east octant and are not in the active set anymore, because they already found their nearest neighbor.* +
When we add a new point point $p$, for every point $s$ that has it in its octant we can safely assign $p$ as the nearest neighbor. This is true because their distance is $d(p,s) = |x_p - x_s| + |y_p - y_s| = (x_p + y_p) - (x_s + y_s)$, because $p$ is in the north-north-east octant. As all the next points will not have a smaller value of $x + y$ because of the sorting step, $p$ is guaranteed to have the smaller distance. We can then remove all such points from the active set, and finally add $p$ to the active set. diff --git a/src/graph/edmonds_karp.md b/src/graph/edmonds_karp.md index a86a475f9..c1f394dce 100644 --- a/src/graph/edmonds_karp.md +++ b/src/graph/edmonds_karp.md @@ -90,14 +90,23 @@ Initially we start with a flow of 0. We can find the path $s - A - B - t$ with the residual capacities 7, 5, and 8. Their minimum is 5, therefore we can increase the flow along this path by 5. This gives a flow of 5 for the network. -
-<center>![First path](Flow2.png) ![Network after first path](Flow3.png)</center>
+
+ First path + ![Network after first path](Flow3.png) +
Again we look for an augmenting path, this time we find $s - D - A - C - t$ with the residual capacities 4, 3, 3, and 5. Therefore we can increase the flow by 3 and we get a flow of 8 for the network. -
-<center>![Second path](Flow4.png) ![Network after second path](Flow5.png)</center>
+
+ Second path + ![Network after second path](Flow5.png) +
This time we find the path $s - D - C - B - t$ with the residual capacities 1, 2, 3, and 3, and hence, we increase the flow by 1. -
-<center>![Third path](Flow6.png) ![Network after third path](Flow7.png)</center>
+
+ Third path + ![Network after third path](Flow7.png) +
This time we find the augmenting path $s - A - D - C - t$ with the residual capacities 2, 3, 1, and 2. We can increase the flow by 1. @@ -107,7 +116,10 @@ In the original flow network, we are not allowed to send any flow from $A$ to $D But because we already have a flow of 3 from $D$ to $A$, this is possible. The intuition of it is the following: Instead of sending a flow of 3 from $D$ to $A$, we only send 2 and compensate this by sending an additional flow of 1 from $s$ to $A$, which allows us to send an additional flow of 1 along the path $D - C - t$. -
-<center>![Fourth path](Flow8.png) ![Network after fourth path](Flow9.png)</center>
+
+ Fourth path + ![Network after fourth path](Flow9.png) +
Now, it is impossible to find an augmenting path between $s$ and $t$, therefore this flow of $10$ is the maximal possible. We have found the maximal flow. diff --git a/src/graph/mst_prim.md b/src/graph/mst_prim.md index d8c3789db..9f7eb48c8 100644 --- a/src/graph/mst_prim.md +++ b/src/graph/mst_prim.md @@ -13,7 +13,10 @@ The spanning tree with the least weight is called a minimum spanning tree. In the left image you can see a weighted undirected graph, and in the right image you can see the corresponding minimum spanning tree. -
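As a side note, the augmenting-path search used in each of these steps is a plain BFS over positive residual capacities; a minimal sketch (container names assumed) could look like this:

```cpp
#include <algorithm>
#include <climits>
#include <queue>
#include <vector>
using namespace std;

// capacity[u][v] holds the current residual capacity and adj the adjacency
// lists of the residual network. Returns the bottleneck of the path found
// (recorded in parent[]), or 0 if t is unreachable.
int bfs(int s, int t, vector<int>& parent,
        vector<vector<int>>& capacity, vector<vector<int>>& adj) {
    fill(parent.begin(), parent.end(), -1);
    parent[s] = s;
    queue<pair<int, int>> q;
    q.push({s, INT_MAX});
    while (!q.empty()) {
        auto [cur, flow] = q.front();
        q.pop();
        for (int next : adj[cur]) {
            if (parent[next] == -1 && capacity[cur][next] > 0) {
                parent[next] = cur;
                int new_flow = min(flow, capacity[cur][next]);
                if (next == t)
                    return new_flow;  // bottleneck = min residual capacity
                q.push({next, new_flow});
            }
        }
    }
    return 0;  // no augmenting path left
}
```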
-<center>![Random graph](MST_before.png) ![MST of this graph](MST_after.png)</center>
+
+ Random graph + ![MST of this graph](MST_after.png) +
It is easy to see that any spanning tree will necessarily contain $n-1$ edges. diff --git a/src/graph/second_best_mst.md b/src/graph/second_best_mst.md index 5e9c82246..075f0dd39 100644 --- a/src/graph/second_best_mst.md +++ b/src/graph/second_best_mst.md @@ -53,10 +53,12 @@ The final time complexity of this approach is $O(E \log V)$. For example: -
-<center>![MST](second_best_mst_1.png) ![Second best MST](second_best_mst_2.png)</center>
+
+ MST + ![Second best MST](second_best_mst_2.png)
*In the image left is the MST and right is the second best MST.* -
+ In the given graph suppose we root the MST at the blue vertex on the top, and then run our algorithm by start picking the edges not in MST. diff --git a/src/graph/topological-sort.md b/src/graph/topological-sort.md index 909e40652..262189a42 100644 --- a/src/graph/topological-sort.md +++ b/src/graph/topological-sort.md @@ -13,10 +13,10 @@ In other words, you want to find a permutation of the vertices (**topological or Here is one given graph together with its topological order: -
-![example directed graph](topological_1.png) -![one topological order](topological_2.png) -
+
+ example directed graph + ![one topological order](topological_2.png) +
Topological order can be **non-unique** (for example, if there exist three vertices $a$, $b$, $c$ for which there exist paths from $a$ to $b$ and from $a$ to $c$ but not paths from $b$ to $c$ or from $c$ to $b$). The example graph also has multiple topological orders, a second topological order is the following: From 9f5502ad2af0b6320be0ae8b7e27cf32c5b2d6bc Mon Sep 17 00:00:00 2001 From: Michael Hayter Date: Mon, 21 Apr 2025 22:22:24 -0400 Subject: [PATCH 309/324] change 2 images rather than image then text --- src/graph/mst_prim.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/graph/mst_prim.md b/src/graph/mst_prim.md index 9f7eb48c8..21649f28d 100644 --- a/src/graph/mst_prim.md +++ b/src/graph/mst_prim.md @@ -15,7 +15,7 @@ In the left image you can see a weighted undirected graph, and in the right imag
     <img src="MST_before.png" alt="Random graph">
-    ![MST of this graph](MST_after.png)
+    <img src="MST_after.png" alt="MST of this graph">
 </center>
     <img src="second_best_mst_1.png" alt="MST">
-    ![Second best MST](second_best_mst_2.png)
+    <img src="second_best_mst_2.png" alt="Second best MST">
+</center>
*In the image left is the MST and right is the second best MST.*
diff --git a/src/graph/topological-sort.md b/src/graph/topological-sort.md index 262189a42..c522039bc 100644 --- a/src/graph/topological-sort.md +++ b/src/graph/topological-sort.md @@ -15,7 +15,7 @@ Here is one given graph together with its topological order:
     <img src="topological_1.png" alt="example directed graph">
-    ![one topological order](topological_2.png)
+    <img src="topological_2.png" alt="one topological order">
 </center>
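For reference, the order itself is usually produced by a depth-first search that outputs vertices by decreasing exit time; a minimal sketch (global names assumed), close to the article's own implementation:

```cpp
#include <algorithm>
#include <vector>
using namespace std;

int n;                    // number of vertices
vector<vector<int>> adj;  // adjacency list of the directed graph
vector<bool> visited;
vector<int> order;

void dfs(int v) {
    visited[v] = true;
    for (int u : adj[v])
        if (!visited[u])
            dfs(u);
    order.push_back(v);   // v is finished after all vertices reachable from it
}

void topological_sort() {
    visited.assign(n, false);
    order.clear();
    for (int v = 0; v < n; v++)
        if (!visited[v])
            dfs(v);
    reverse(order.begin(), order.end());  // decreasing exit times
}
```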
     <img src="Flow2.png" alt="First path">
-    ![Network after first path](Flow3.png)
+    <img src="Flow3.png" alt="Network after first path">
 </center>
     <img src="Flow4.png" alt="Second path">
-    ![Network after second path](Flow5.png)
+    <img src="Flow5.png" alt="Network after second path">
 </center>
     <img src="Flow6.png" alt="Third path">
-    ![Network after third path](Flow7.png)
+    <img src="Flow7.png" alt="Network after third path">
 </center>
     <img src="Flow8.png" alt="Fourth path">
-    ![Network after fourth path](Flow9.png)
+    <img src="Flow9.png" alt="Network after fourth path">
 </center>
 As time progresses, the algorithm cools down and rejects states with higher energy, settling into the closest minimum it has found.
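Concretely, the willingness to accept a worse state is usually expressed through the acceptance probability that the article's code computes later:

$$P(s \to s_{next}) = \exp\left(-\frac{E(s_{next}) - E(s)}{T}\right),$$

so at high temperature $T$ even large energy increases are accepted with sizable probability, while as $T \to 0$ such moves become vanishingly rare.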

-A visual representation of simulated annealing, searching for the maxima of this function with multiple local maxima.. +A visual representation of simulated annealing, searching for the maxima of this function with multiple local maxima.
-This animation by [Kingpin13](https://commons.wikimedia.org/wiki/User:Kingpin13) is distributed under CC0 1.0 license.
-### Temperature($T$) and decay($u$) +### Temperature(T) and decay(u) The temperature of the system quantifies the willingness of the algorithm to accept a state with a higher energy. The decay is a constant which quantifies the "cooling rate" of the algorithm. A slow cooling rate (larger $u$) is known to give better results. @@ -107,8 +106,8 @@ pair simAnneal() { best = s; E_best = E_next; } + E = E_next; } - E = E_next; T *= u; } return {E_best, best}; @@ -158,7 +157,6 @@ class state { double E() { double dist = 0; - bool first = true; int n = points.size(); for (int i = 0;i < n; i++) dist += euclidean(points[i], points[(i+1)%n]); @@ -187,8 +185,8 @@ int main() { - The effect of the difference in energies, $E_{next} - E$, on the PAF can be increased/decreased by increasing/decreasing the base of the exponent as shown below: ```cpp bool P(double E, double E_next, double T, mt19937 rng) { - e = 2 // set e to any real number greater than 1 - double prob = exp(-(E_next-E)/T); + double e = 2 // set e to any real number greater than 1 + double prob = pow(e,-(E_next-E)/T); if (prob > 1) return true; else { From 46e4efa5180bfa059694dec086269806832105a7 Mon Sep 17 00:00:00 2001 From: Shashank Sahu <52148284+bit-shashank@users.noreply.github.com> Date: Sat, 3 May 2025 00:07:17 +0530 Subject: [PATCH 313/324] Typo fix in graph/fixed_length_paths.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Replaced It is obvious that the constructed adjacency matrix if the answer to the problem for the case   $k = 1$ . It contains the number of paths of length   $1$  between each pair of vertices. To It is obvious that the constructed adjacency matrix is the answer to the problem for the case   $k = 1$ . It contains the number of paths of length   $1$  between each pair of vertices. --- src/graph/fixed_length_paths.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/graph/fixed_length_paths.md b/src/graph/fixed_length_paths.md index e5ecf76bc..9e1c1efad 100644 --- a/src/graph/fixed_length_paths.md +++ b/src/graph/fixed_length_paths.md @@ -21,7 +21,7 @@ The following algorithm works also in the case of multiple edges: if some pair of vertices $(i, j)$ is connected with $m$ edges, then we can record this in the adjacency matrix by setting $G[i][j] = m$. Also the algorithm works if the graph contains loops (a loop is an edge that connect a vertex with itself). -It is obvious that the constructed adjacency matrix if the answer to the problem for the case $k = 1$. +It is obvious that the constructed adjacency matrix is the answer to the problem for the case $k = 1$. It contains the number of paths of length $1$ between each pair of vertices. We will build the solution iteratively: From 4611aee0bdc668908f5def6a51228b6a0ed51619 Mon Sep 17 00:00:00 2001 From: Michael Hayter Date: Wed, 7 May 2025 17:57:39 -0400 Subject: [PATCH 314/324] Update CONTRIBUTING.md --- CONTRIBUTING.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index e1d3f2be1..fb6e7afee 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -19,7 +19,7 @@ Follow these steps to start contributing: 4. **Preview your changes** using the [preview page](preview.md) to ensure they look correct. 5. **Commit your changes** by clicking the _Propose changes_ button. 6. **Create a Pull Request (PR)** by clicking _Compare & pull request_. -7. **Review process**: Someone from the core team will review your changes. This may take a few hours to a few days. 
+7. **Review process**: Someone from the core team will review your changes. This may take a few days to a few weeks. ### Making Larger Changes From 27e90b5f3b84e58577f411d56a1e1582deb62850 Mon Sep 17 00:00:00 2001 From: t0wbo2t <52655804+t0wbo2t@users.noreply.github.com> Date: Thu, 15 May 2025 10:18:15 +0530 Subject: [PATCH 315/324] Update factorization.md [Update Powersmooth Definition] The definition of powersmooth was a bit confusing so I added a formal definition inspired by a number theory book. --- src/algebra/factorization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/algebra/factorization.md b/src/algebra/factorization.md index 11bf4049c..51f206482 100644 --- a/src/algebra/factorization.md +++ b/src/algebra/factorization.md @@ -160,7 +160,7 @@ By looking at the squares $a^2$ modulo a fixed small number, it can be observed ## Pollard's $p - 1$ method { data-toc-label="Pollard's method" } It is very likely that at least one factor of a number is $B$**-powersmooth** for small $B$. -$B$-powersmooth means that every prime power $d^k$ that divides $p-1$ is at most $B$. +$B$-powersmooth means that every prime power $d^k$ that divides $p-1$ is at most $B$. Formally, let $\mathrm{B} \geqslant 1$ and $n \geqslant 1$ with prime factorization $n = \prod {p_i}^{e_i},$ then $n$ is $\mathrm{B}$-powersmooth if, for all $i,$ ${p_i}^{e_i} \leqslant \mathrm{B}$. E.g. the prime factorization of $4817191$ is $1303 \cdot 3697$. And the factors are $31$-powersmooth and $16$-powersmooth respectably, because $1303 - 1 = 2 \cdot 3 \cdot 7 \cdot 31$ and $3697 - 1 = 2^4 \cdot 3 \cdot 7 \cdot 11$. In 1974 John Pollard invented a method to extracts $B$-powersmooth factors from a composite number. From 6ec2fe6634b6e2bd4d879d08dfdf3bbcf87d0d6b Mon Sep 17 00:00:00 2001 From: 100daysummer <138024460+100daysummer@users.noreply.github.com> Date: Wed, 21 May 2025 12:00:17 +0300 Subject: [PATCH 316/324] Update longest_increasing_subsequence.md Small improvement of the wording of a question --- src/sequences/longest_increasing_subsequence.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/sequences/longest_increasing_subsequence.md b/src/sequences/longest_increasing_subsequence.md index a4c8f0fe4..dca4f020b 100644 --- a/src/sequences/longest_increasing_subsequence.md +++ b/src/sequences/longest_increasing_subsequence.md @@ -174,7 +174,7 @@ We will again gradually process the numbers, first $a[0]$, then $a[1]$, etc, and $$ When we process $a[i]$, we can ask ourselves. -What have the conditions to be, that we write the current number $a[i]$ into the $d[0 \dots n]$ array? +Under what conditions should we write the current number $a[i]$ into the $d[0 \dots n]$ array? We set $d[l] = a[i]$, if there is a longest increasing sequence of length $l$ that ends in $a[i]$, and there is no longest increasing sequence of length $l$ that ends in a smaller number. Similar to the previous approach, if we remove the number $a[i]$ from the longest increasing sequence of length $l$, we get another longest increasing sequence of length $l -1$. 
From d1cfad25305a0047c822116f4369bdb33d157364 Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Wed, 21 May 2025 13:11:00 +0200 Subject: [PATCH 317/324] semicolon after e --- src/num_methods/simulated_annealing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/num_methods/simulated_annealing.md b/src/num_methods/simulated_annealing.md index 054f71fa4..bd0dd6ed2 100644 --- a/src/num_methods/simulated_annealing.md +++ b/src/num_methods/simulated_annealing.md @@ -185,7 +185,7 @@ int main() { - The effect of the difference in energies, $E_{next} - E$, on the PAF can be increased/decreased by increasing/decreasing the base of the exponent as shown below: ```cpp bool P(double E, double E_next, double T, mt19937 rng) { - double e = 2 // set e to any real number greater than 1 + double e = 2; // set e to any real number greater than 1 double prob = pow(e,-(E_next-E)/T); if (prob > 1) return true; From a82cd128a8f3040c8e2658c317f853f9d0ec2d8d Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Wed, 21 May 2025 13:11:36 +0200 Subject: [PATCH 318/324] update date --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 4bd8eacfc..513e55f32 100644 --- a/README.md +++ b/README.md @@ -29,7 +29,7 @@ Compiled pages are published at [https://cp-algorithms.com/](https://cp-algorith ### New articles -- (1 May 2025) [Simulated Annealing](https://cp-algorithms.com/num_methods/simulated_annealing.html) +- (21 May 2025) [Simulated Annealing](https://cp-algorithms.com/num_methods/simulated_annealing.html) - (12 July 2024) [Manhattan distance](https://cp-algorithms.com/geometry/manhattan-distance.html) - (8 June 2024) [Knapsack Problem](https://cp-algorithms.com/dynamic_programming/knapsack.html) - (28 January 2024) [Introduction to Dynamic Programming](https://cp-algorithms.com/dynamic_programming/intro-to-dp.html) From 422fb19bdd7c39a56d8fb51caefa419251b37fab Mon Sep 17 00:00:00 2001 From: t0wbo2t <52655804+t0wbo2t@users.noreply.github.com> Date: Wed, 21 May 2025 17:33:44 +0530 Subject: [PATCH 319/324] Update factorization.md [Update powersmooth definition with suggestions] Commas are now outside $...$ and definiton is for (p - 1) for consistency. --- src/algebra/factorization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/algebra/factorization.md b/src/algebra/factorization.md index 51f206482..997a11be2 100644 --- a/src/algebra/factorization.md +++ b/src/algebra/factorization.md @@ -160,7 +160,7 @@ By looking at the squares $a^2$ modulo a fixed small number, it can be observed ## Pollard's $p - 1$ method { data-toc-label="Pollard's method" } It is very likely that at least one factor of a number is $B$**-powersmooth** for small $B$. -$B$-powersmooth means that every prime power $d^k$ that divides $p-1$ is at most $B$. Formally, let $\mathrm{B} \geqslant 1$ and $n \geqslant 1$ with prime factorization $n = \prod {p_i}^{e_i},$ then $n$ is $\mathrm{B}$-powersmooth if, for all $i,$ ${p_i}^{e_i} \leqslant \mathrm{B}$. +$B$-powersmooth means that every prime power $d^k$ that divides $p-1$ is at most $B$. Formally, let $\mathrm{B} \geqslant 1$ and let $p$ be a prime such that $(p - 1) \geqslant 1$. Suppose the prime factorization of $(p - 1)$ is $(p - 1) = \prod {q_i}^{e_i}$, where each $q_i$ is a prime and $e_i \geqslant 1$ then $(p - 1)$ is $\mathrm{B}$-powersmooth if, for all $i$, ${q_i}^{e_i} \leqslant \mathrm{B}$. E.g. the prime factorization of $4817191$ is $1303 \cdot 3697$. 
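As an illustration of this $d[0 \dots n]$ idea, the whole $O(n \log n)$ computation fits in a few lines (a sketch, with names assumed):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// d is kept sorted; d[l-1] is the smallest value that ends an increasing
// subsequence of length l among the elements processed so far.
int lis_length(const vector<int>& a) {
    vector<int> d;
    for (int x : a) {
        auto it = lower_bound(d.begin(), d.end(), x);
        if (it == d.end())
            d.push_back(x);  // x extends the longest subsequence found so far
        else
            *it = x;         // x is a smaller ending value for this length
    }
    return d.size();
}
```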
And the factors are $31$-powersmooth and $16$-powersmooth respectably, because $1303 - 1 = 2 \cdot 3 \cdot 7 \cdot 31$ and $3697 - 1 = 2^4 \cdot 3 \cdot 7 \cdot 11$. In 1974 John Pollard invented a method to extracts $B$-powersmooth factors from a composite number. From 1a7db31f720455b647a929596ff70936875f7532 Mon Sep 17 00:00:00 2001 From: t0wbo2t <52655804+t0wbo2t@users.noreply.github.com> Date: Thu, 22 May 2025 09:24:40 +0530 Subject: [PATCH 320/324] Update factorization.md [Update Pollard's (p - 1) Method] Updated materials related to powersmoothness. Corrected some minor mistakes. --- src/algebra/factorization.md | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/src/algebra/factorization.md b/src/algebra/factorization.md index 997a11be2..84bdd356d 100644 --- a/src/algebra/factorization.md +++ b/src/algebra/factorization.md @@ -159,11 +159,10 @@ By looking at the squares $a^2$ modulo a fixed small number, it can be observed ## Pollard's $p - 1$ method { data-toc-label="Pollard's method" } -It is very likely that at least one factor of a number is $B$**-powersmooth** for small $B$. -$B$-powersmooth means that every prime power $d^k$ that divides $p-1$ is at most $B$. Formally, let $\mathrm{B} \geqslant 1$ and let $p$ be a prime such that $(p - 1) \geqslant 1$. Suppose the prime factorization of $(p - 1)$ is $(p - 1) = \prod {q_i}^{e_i}$, where each $q_i$ is a prime and $e_i \geqslant 1$ then $(p - 1)$ is $\mathrm{B}$-powersmooth if, for all $i$, ${q_i}^{e_i} \leqslant \mathrm{B}$. +It is very likely that a number $n$ has at least one prime factor $p$ such that $p - 1$ is $\mathrm{B}$**-powersmooth** for small $\mathrm{B}$. An integer $m$ is said to be $\mathrm{B}$-powersmooth if every prime power dividing $m$ is at most $\mathrm{B}$. Formally, let $\mathrm{B} \geqslant 1$ and let $m$ be any positive integer. Suppose the prime factorization of $m$ is $m = \prod {q_i}^{e_i}$, where each $q_i$ is a prime and $e_i \geqslant 1$. Then $m$ is $\mathrm{B}$-powersmooth if, for all $i$, ${q_i}^{e_i} \leqslant \mathrm{B}$. E.g. the prime factorization of $4817191$ is $1303 \cdot 3697$. -And the factors are $31$-powersmooth and $16$-powersmooth respectably, because $1303 - 1 = 2 \cdot 3 \cdot 7 \cdot 31$ and $3697 - 1 = 2^4 \cdot 3 \cdot 7 \cdot 11$. -In 1974 John Pollard invented a method to extracts $B$-powersmooth factors from a composite number. +And the values, $1303 - 1$ and $3697 - 1$, are $31$-powersmooth and $16$-powersmooth respectively, because $1303 - 1 = 2 \cdot 3 \cdot 7 \cdot 31$ and $3697 - 1 = 2^4 \cdot 3 \cdot 7 \cdot 11$. +In 1974 John Pollard invented a method to extracts $\mathrm{B}$-powersmooth factors from a composite number. The idea comes from [Fermat's little theorem](phi-function.md#application). Let a factorization of $n$ be $n = p \cdot q$. @@ -180,7 +179,7 @@ This means that $a^M - 1 = p \cdot r$, and because of that also $p ~|~ \gcd(a^M Therefore, if $p - 1$ for a factor $p$ of $n$ divides $M$, we can extract a factor using [Euclid's algorithm](euclid-algorithm.md). -It is clear, that the smallest $M$ that is a multiple of every $B$-powersmooth number is $\text{lcm}(1,~2~,3~,4~,~\dots,~B)$. +It is clear, that the smallest $M$ that is a multiple of every $\mathrm{B}$-powersmooth number is $\text{lcm}(1,~2~,3~,4~,~\dots,~B)$. 
Or alternatively: $$M = \prod_{\text{prime } q \le B} q^{\lfloor \log_q B \rfloor}$$ @@ -189,11 +188,11 @@ Notice, if $p-1$ divides $M$ for all prime factors $p$ of $n$, then $\gcd(a^M - In this case we don't receive a factor. Therefore, we will try to perform the $\gcd$ multiple times, while we compute $M$. -Some composite numbers don't have $B$-powersmooth factors for small $B$. +Some composite numbers don't have $\mathrm{B}$-powersmooth factors for small $\mathrm{B}$. For example, the factors of the composite number $100~000~000~000~000~493 = 763~013 \cdot 131~059~365~961$ are $190~753$-powersmooth and $1~092~161~383$-powersmooth. We will have to choose $B >= 190~753$ to factorize the number. -In the following implementation we start with $B = 10$ and increase $B$ after each each iteration. +In the following implementation we start with $\mathrm{B} = 10$ and increase $\mathrm{B}$ after each each iteration. ```{.cpp file=factorization_p_minus_1} long long pollards_p_minus_1(long long n) { From bb7e13757ec14e867b3cd4de3da8966d87417270 Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Sat, 24 May 2025 20:58:13 +0200 Subject: [PATCH 321/324] fix #1372 --- src/string/manacher.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/string/manacher.md b/src/string/manacher.md index 0c8bd5928..26589b83c 100644 --- a/src/string/manacher.md +++ b/src/string/manacher.md @@ -147,7 +147,7 @@ vector manacher_odd(string s) { vector p(n + 2); int l = 0, r = 1; for(int i = 1; i <= n; i++) { - p[i] = max(0, min(r - i, p[l + (r - i)])); + p[i] = min(r - i, p[l + (r - i)]); while(s[i - p[i]] == s[i + p[i]]) { p[i]++; } From 2597e0558304678a7fa7f92c66a19f9f7913c974 Mon Sep 17 00:00:00 2001 From: t0wbo2t <52655804+t0wbo2t@users.noreply.github.com> Date: Thu, 29 May 2025 11:19:43 +0530 Subject: [PATCH 322/324] Update src/algebra/factorization.md Co-authored-by: Oleksandr Kulkov --- src/algebra/factorization.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/algebra/factorization.md b/src/algebra/factorization.md index 84bdd356d..bc606607f 100644 --- a/src/algebra/factorization.md +++ b/src/algebra/factorization.md @@ -188,8 +188,8 @@ Notice, if $p-1$ divides $M$ for all prime factors $p$ of $n$, then $\gcd(a^M - In this case we don't receive a factor. Therefore, we will try to perform the $\gcd$ multiple times, while we compute $M$. -Some composite numbers don't have $\mathrm{B}$-powersmooth factors for small $\mathrm{B}$. -For example, the factors of the composite number $100~000~000~000~000~493 = 763~013 \cdot 131~059~365~961$ are $190~753$-powersmooth and $1~092~161~383$-powersmooth. +Some composite numbers don't have factors $p$ s.t. $p-1$ is $\mathrm{B}$-powersmooth for small $\mathrm{B}$. +For example, for the composite number $100~000~000~000~000~493 = 763~013 \cdot 131~059~365~961$, values $p-1$ are $190~753$-powersmooth and $1~092~161~383$-powersmooth correspondingly. We will have to choose $B >= 190~753$ to factorize the number. In the following implementation we start with $\mathrm{B} = 10$ and increase $\mathrm{B}$ after each each iteration. 
From 54fec62526c6454ecd43b38544c6a56880031544 Mon Sep 17 00:00:00 2001 From: t0wbo2t <52655804+t0wbo2t@users.noreply.github.com> Date: Thu, 29 May 2025 11:19:54 +0530 Subject: [PATCH 323/324] Update src/algebra/factorization.md Co-authored-by: Oleksandr Kulkov --- src/algebra/factorization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/algebra/factorization.md b/src/algebra/factorization.md index bc606607f..9d9ab7ed7 100644 --- a/src/algebra/factorization.md +++ b/src/algebra/factorization.md @@ -162,7 +162,7 @@ By looking at the squares $a^2$ modulo a fixed small number, it can be observed It is very likely that a number $n$ has at least one prime factor $p$ such that $p - 1$ is $\mathrm{B}$**-powersmooth** for small $\mathrm{B}$. An integer $m$ is said to be $\mathrm{B}$-powersmooth if every prime power dividing $m$ is at most $\mathrm{B}$. Formally, let $\mathrm{B} \geqslant 1$ and let $m$ be any positive integer. Suppose the prime factorization of $m$ is $m = \prod {q_i}^{e_i}$, where each $q_i$ is a prime and $e_i \geqslant 1$. Then $m$ is $\mathrm{B}$-powersmooth if, for all $i$, ${q_i}^{e_i} \leqslant \mathrm{B}$. E.g. the prime factorization of $4817191$ is $1303 \cdot 3697$. And the values, $1303 - 1$ and $3697 - 1$, are $31$-powersmooth and $16$-powersmooth respectively, because $1303 - 1 = 2 \cdot 3 \cdot 7 \cdot 31$ and $3697 - 1 = 2^4 \cdot 3 \cdot 7 \cdot 11$. -In 1974 John Pollard invented a method to extracts $\mathrm{B}$-powersmooth factors from a composite number. +In 1974 John Pollard invented a method to extract factors $p$, s.t. $p-1$ is $\mathrm{B}$-powersmooth, from a composite number. The idea comes from [Fermat's little theorem](phi-function.md#application). Let a factorization of $n$ be $n = p \cdot q$. From 53639d500284ae160ca2e9f968957c360b6f2040 Mon Sep 17 00:00:00 2001 From: t0wbo2t <52655804+t0wbo2t@users.noreply.github.com> Date: Thu, 29 May 2025 11:20:00 +0530 Subject: [PATCH 324/324] Update src/algebra/factorization.md Co-authored-by: Oleksandr Kulkov --- src/algebra/factorization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/algebra/factorization.md b/src/algebra/factorization.md index 9d9ab7ed7..14715605f 100644 --- a/src/algebra/factorization.md +++ b/src/algebra/factorization.md @@ -190,7 +190,7 @@ Therefore, we will try to perform the $\gcd$ multiple times, while we compute $M Some composite numbers don't have factors $p$ s.t. $p-1$ is $\mathrm{B}$-powersmooth for small $\mathrm{B}$. For example, for the composite number $100~000~000~000~000~493 = 763~013 \cdot 131~059~365~961$, values $p-1$ are $190~753$-powersmooth and $1~092~161~383$-powersmooth correspondingly. -We will have to choose $B >= 190~753$ to factorize the number. +We will have to choose $B \geq 190~753$ to factorize the number. In the following implementation we start with $\mathrm{B} = 10$ and increase $\mathrm{B}$ after each each iteration.
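To make the loop structure concrete, here is a compressed sketch of one $p - 1$ pass with a fixed bound $\mathrm{B}$ (helper names assumed, $n < 2^{63}$): it accumulates $a^{B!}$, whose exponent is divisible by every prime power $q^e \leq B$, so any factor $p$ with $\mathrm{B}$-powersmooth $p - 1$ shows up in the $\gcd$.

```cpp
#include <numeric>  // std::gcd
using namespace std;

typedef unsigned long long u64;
typedef __uint128_t u128;

u64 mulmod(u64 a, u64 b, u64 n) { return (u128)a * b % n; }

u64 powmod(u64 a, u64 e, u64 n) {
    u64 r = 1;
    for (; e; e >>= 1, a = mulmod(a, a, n))
        if (e & 1)
            r = mulmod(r, a, n);
    return r;
}

// After the i-th step a = 2^(i!) mod n; by Fermat's little theorem, a prime
// factor p of n with B-powersmooth p-1 then divides gcd(a - 1, n).
u64 p_minus_1(u64 n, u64 B) {
    u64 a = 2;
    for (u64 i = 2; i <= B; i++) {
        a = powmod(a, i, n);
        u64 g = gcd(a + n - 1, n);  // gcd(a - 1, n), safe even if a == 0
        if (g > 1 && g < n)
            return g;               // non-trivial factor extracted
    }
    return 1;                       // no factor with B-powersmooth p-1 found
}
```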