
Team notebook

October 27, 2016

Contents

1 DP  2
  1.1 digitDP  2
  1.2 nthPerm  2

2 Data Structure  3
  2.1 BIT  3
  2.2 Disjoint Set  4
  2.3 HLD  4
  2.4 LCA  5
  2.5 Mo’s Algo  6
  2.6 SQRT Decomposition  7
  2.7 STL order statistics tree II  8
  2.8 STL order statistics tree  9
  2.9 Segment Tree  10
  2.10 hash table  10
  2.11 persistent seg tree  10
  2.12 sliding window  12
  2.13 trie xor  12
  2.14 trie  13

3 Geometry  14
  3.1 closest pair problem  14
  3.2 convex hull  15
  3.3 squares  15
  3.4 template  17
  3.5 triangles  20

4 Graph  20
  4.1 Articulation Point  20
  4.2 Bellman Ford  20
  4.3 DAG Check  20
  4.4 Dinic  21
  4.5 Edmonds Karp  22
  4.6 EulerianPath  23
  4.7 Floyed Warshall  24
  4.8 Kruskal  25
  4.9 Max BPM  25
  4.10 MinCostMatching  26
  4.11 MinCostMaxFlow  27
  4.12 SCC Kosaraju  29
  4.13 directed mst  29
  4.14 konig’s theorem  30
  4.15 minimum path cover in DAG  30
  4.16 planar graph (euler)  31
  4.17 stable marriage  31
  4.18 two sat (with kosaraju)  31

5 Math  33
  5.1 Lucas theorem  33
  5.2 big mod  33
  5.3 catalan  33
  5.4 convolution  33
  5.5 crt  34
  5.6 cumulative sum of divisors  34
  5.7 discrete logarithm  35
  5.8 ext euclidean  35
  5.9 fibonacci properties  35
  5.10 highest exponent factorial  35
  5.11 miller rabin  35
  5.12 mod inv  36
  5.13 mod mul  36
  5.14 mod pow  36
  5.15 number theoretic transform  36
  5.16 pollard rho factorize  37
  5.17 sievephi  38
  5.18 sigma function  38
  5.19 sigma  38
  5.20 totient sieve  39
  5.21 totient  39

6 Matrix  39
  6.1 Gaussin Elimation  39
  6.2 Matrix Expo  40

7 Misc  40
  7.1 IO  40

8 String  41
  8.1 KMP  41
  8.2 Manacher  41
  8.3 Z algo  42

1 DP

1.1 digitDP

vector < int > digit;
ll lim;
ll dp[3][3][31][31][3];
int vis[3][3][31][31][3];
int tcase, tt;

ll rec( bool start, bool small, int pos, ll value, int prev ) {
    if( pos == lim ) return value;
    ll &ret = dp[start][small][pos][value][prev];
    int &v = vis[start][small][pos][value][prev];
    if( v == tt ) return ret;
    v = tt;
    int sesh = small ? 1 : digit[pos];
    ret = 0;
    if( !start ) {
        for( int i=0; i<=sesh; i++ ) {
            ret += rec( false, small || i < digit[pos], pos+1, ( i & prev ) + value, i );
        }
    } else {
        for( int i=1; i<=sesh; i++ ) {
            ret += rec( false, small || i < digit[pos], pos+1, ( i & prev ) + value, i );
        }
        ret += rec( true, true, pos + 1, 0, 0 );
    }
    return ret;
}

ll calc( ll n ) {
    digit.clear();
    while( n ) {
        digit.push_back( n&1 );
        n >>= 1;
    }
    lim = digit.size();
    reverse( digit.begin(), digit.end() );
    tt++;
    return rec( true, false, 0, 0, 0 );
}
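A minimal driver sketch for the digit DP above, assuming the notebook's usual ll typedef and the usual convention that calc(n) aggregates the answer over all values in [1, n]; the query range below is a made-up example.

int main() {
    memset( vis, 0, sizeof vis );            // tt-based versioning starts from 0
    ll l = 5, r = 100;                       // made-up query range
    ll ans = calc( r ) - calc( l - 1 );      // answer on [l, r] by inclusion-exclusion
    printf( "%lld\n", ans );
    return 0;
}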
1.2 nthPerm

long long fac[26];
long long pos, n;
int freq[26];

void init() {
    fac[0] = 1;
    for( int i=1; i<26; i++ ) fac[i] = fac[i-1] * i;
}

long long koita( int n ) {
    long long ret = fac[n];
    for( int i=0; i<26; i++ ) ret /= fac[ freq[i] ];
    return ret;
}

void solve( int sz ) {
    long long upto, now;
    bool found;
    while( sz ) {
        upto = 0;
        found = 0;
        for( int i=0; i<26 && !found; i++ ) {
            if( freq[i] == 0 ) continue;
            freq[i]--;
            now = koita( sz-1 );
            if( now + upto >= n ) {
                n -= upto;
                sz--;
                putchar( 'a' + i );
                found = 1;
            } else {
                freq[i]++;
                upto += now;
            }
        }
        if( !found ) break;
    }
    putchar( '\n' );
}

2 Data Structure

2.1 BIT

using vi = vector < int >;
using vii = vector < vi >;

struct BIT_2D {
    int n;
    vii tree;

    BIT_2D () {}
    BIT_2D ( int _n ) : n( _n ), tree( _n, vi( _n, 0 ) ) {}
    ~BIT_2D () {}

    void update_y( int x, int y, int v ) {
        for( ; y<n; y+=(y&-y) ) {
            tree[x][y] += v;
        }
    }

    void update( int x, int y, int v ) {
        for( ; x<n; x+=(x&-x) ) {
            update_y( x, y, v );
        }
    }

    int query_y( int x, int y ) {
        int ret = 0;
        for( ; y; y-=(y&-y) ) {
            ret += tree[x][y];
        }
        return ret;
    }

    int query( int x, int y ) {
        int ret = 0;
        for( ; x; x-=(x&-x) ) {
            ret += query_y( x, y );
        }
        return ret;
    }

    int query( int x1, int y1, int x2, int y2 ) {
        return ( query( x2, y2 ) - query( x2, y1-1 ) - query( x1-1, y2 ) +
                 query( x1-1, y1-1 ) );
    }
};

struct BIT {
    int n;
    vi tree;

    BIT () {}
    BIT ( int _n ) : n( _n ), tree( _n, 0 ) {}
    ~BIT () {}

    void update( int x, int v ) {
        for( ; x<n; x+=(x&-x) ) {
            tree[x] += v;
        }
    }
10583 - Ubiquitous Religions
int query( int x ) { **/
int ret = 0;
for( ; x; x-=(x&-x) ) { struct Disjoint_Set {
ret += tree[x]; int n;
} vector < int > par, cnt, rank;
return ret;
} Disjoint_Set( int n ) : n(n), rank(n), par(n), cnt(n) {}

int query( int x, int y, int x2, int y2 ) { void make_set() {


return ( query( y ) - query( x-1 ) ); for(int i=0; i<n; i++) {
} par[i] = i;
cnt[i] = 1;
int getind(int x) { rank[i] = 0;
int idx = 0, mask = n; }
while( mask && idx < n ) { }
int t = idx + mask;
if( x >= tree[t] ) { int find_rep( int x ) {
idx = t; if(x != par[ x ]) {
x -= tree[t]; par[ x ] = find_rep( par[ x ] );
} }
mask >>= 1; return par[ x ];
} }
return idx;
} int union_( int u, int v ) {
} if( ( u = find_rep( u ) ) != ( v = find_rep( v ) ) ) {
if( rank[ u ] < rank[ v ] ) {
cnt[ v ] += cnt[ u ];
par[ u ] = par[ v ];
2.2 Disjoint Set return cnt[v];
} else {
rank[ u ] = max( rank[ u ], rank[ v ] + 1 );
/**
cnt[ u ] += cnt[ v ];
Implementation of Disjoint-Set Union Data Structure
par[ v ] = par[ u ];
Running time:
}
O(nlog(n))
}
Usage:
return cnt[u];
- call make_set() to reset the set
}
- call find_rep() to get the set of the vertex
};
- call union_() to merge two sets
Input:
- n, number of sets
Tested Problems:
UVA:
2.3 HLD
10608 - Friends
11503 - Virtual Friends
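A minimal usage sketch for the Disjoint_Set structure above; the element count and the queries are made-up values.

Disjoint_Set ds( 5 );
ds.make_set();                                          // reset: every element starts in its own set
ds.union_( 0, 1 );                                      // merge the sets containing 0 and 1
ds.union_( 1, 2 );
bool same = ( ds.find_rep( 0 ) == ds.find_rep( 2 ) );   // true: 0 and 2 are now connected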

/** subTreeSize[x] = cnt;
}
Heavy Light Decomposition
void hld(int x) {
Problem (lightoj 1348 - Aladin & return journey) if (chainHead[chainNumber] == -1) chainHead[chainNumber] = x;
chain[chainNumber].PB(vi[x]);
Description: chainInd[x] = chainNumber;
chainPos[x] = chain[chainNumber].size()-1;
** graph -> for storing initial graph int ind = -1, mx = -1, i, j, u, v;
** parent -> for storing parents for (i=0; i<graph[x].size(); i++) {
** ChainHead -> head of each chain u = graph[x][i];
** ChainPos -> position of each node in it’s chain if (chainInd[u] == -1 && subTreeSize[u] > mx) {
** ChainInd -> chain number of each node mx = subTreeSize[u];
** chainNumber -> it’s the chain count, initialy it’s 0 ind = u;
}
Steps: }
if (ind != -1) {
1) Set chainNumber to 0 hld(ind);
2) Run dfs function from the root node to store subtree size }
3) Run HLD function from root to generate HLD for (i=0; i<graph[x].size(); i++) {
u = graph[x][i];
**/ if (chainInd[u] == -1) {
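A minimal driver sketch for the three steps listed above; the root value is made up, and the -1 initialisation mirrors the sentinel checks used in hld() and dfs().

void build_hld( int root ) {
    chainNumber = 0;                               // step 1
    memset( chainHead, -1, sizeof chainHead );     // hld() treats -1 as "chain has no head yet"
    memset( chainInd, -1, sizeof chainInd );       // hld() treats -1 as "node not assigned to a chain yet"
    memset( vis, 0, sizeof vis );                  // dfs() marks visited nodes here
    dfs( root );                                   // step 2: subtree sizes and parents
    hld( root );                                   // step 3: decompose into chains
}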
chainNumber++;
hld(u);
}
int chainHead[MX]; }
int chainPos[MX]; }
int chainInd[MX];
int parent[MX];
int subTreeSize[MX];
int chainNumber; 2.4 LCA
vector <int> graph[MX];
vector <int> chain[MX];
//LCA using sparse table
//Complexity: O(NlgN,lgN)
void dfs(int x) {
#define mx 100002
vis[x] = 1;
int L[mx]; //
int cnt,i,j;
int P[mx][22]; //
cnt = 1;
int T[mx]; //
for (i=0; i<graph[x].size(); i++) {
vector<int>g[mx];
if (!vis[graph[x][i]]) {
void dfs(int from,int u,int dep)
dfs(graph[x][i]);
{
cnt += subTreeSize[graph[x][i]];
T[u]=from;
parent[graph[x][i]] = x;
L[u]=dep;
}
for(int i=0;i<(int)g[u].size();i++)
}
{

int v=g[u][i]; if (P[i][j - 1] != -1)
if(v==from) continue; P[i][j] = P[P[i][j - 1]][j - 1];
dfs(u,v,dep+1); }
}
}

int lca_query(int N, int p, int q) //N= 2.5 Mo’s Algo


{
int tmp, log, i;
/**
Implementation of Mo’s Algo with SQRT-Decomposition Data Structure
if (L[p] < L[q])
Running time:
tmp = p, p = q, q = tmp;
O( ( n + q ) * sqrt( n ) * f() )
Mo’s Algo is a algorithm to process queries offline
log=1;
For it to work, this condition must be satisified:
while(1) {
1) There can be no updates in the array
int next=log+1;
2) All queries must be known beforehand
if((1<<next)>L[p])break;
Tested Problems:
log++;
CF:
220B - Little Elephant and Array
}
**/
#include <bits/stdc++.h>
for (i = log; i >= 0; i--)
using namespace std;
if (L[p] - (1 << i) >= L[q])
p = P[p][i];
using piii = pair < pair < int, int >, int >;
const int mx = 1e5 + 1;
if (p == q)
int BLOCK_SIZE;
return p;
int n, m;
int calc;
for (i = log; i >= 0; i--)
int ar[mx];
if (P[p][i] != -1 && P[p][i] != P[q][i])
int ans[mx];
p = P[p][i], q = P[q][i];
unordered_map < int, int > cnt;
piii query[mx];
return T[p];
}
struct {
bool operator()( const piii &a, const piii &b ) {
void lca_init(int N)
int block_a = a.first.first / BLOCK_SIZE;
{
int block_b = b.first.first / BLOCK_SIZE;
memset (P,-1,sizeof(P)); //
if( block_a != block_b ) {
-
return block_a < block_b;
int i, j;
}
for (i = 0; i < N; i++)
return a.first.second < b.first.second;
P[i][0] = T[i];
}
} cmp;
for (j = 1; 1 << j < N; j++)
for (i = 0; i < N; i++)
void add( int x ) {

calc -= ( cnt[x] == x ? 1 : 0 ); mo_r--;
cnt[x]++; }
calc += ( cnt[x] == x ? 1 : 0 );
} while( mo_l < left ) {
remove( ar[mo_l] );
void remove( int x ) { mo_l++;
calc -= ( cnt[x] == x ? 1 : 0 ); }
cnt[x]--;
calc += ( cnt[x] == x ? 1 : 0 ); while( mo_l > left ) {
} mo_l--;
add( ar[mo_l] );
int main() { }
#ifdef LU_SERIOUS
freopen( "in.txt", "r", stdin ); ans[ query[i].second ] = calc;
// freopen( "out.txt", "w+", stdout ); }
#endif // LU_SERIOUS
while( ~scanf( "%d %d", &n, &m ) ) { for( int i=0; i<m; i++ ) {
printf( "%d\n", ans[i] );
BLOCK_SIZE = sqrt( n ); }
cnt.clear(); }
calc = 0; return 0;
}
for( int i=0; i<n; i++ ) scanf( "%d", ar+i );

for( int i=0; i<m; i++ ) {


scanf( "%d %d", &query[i].first.first, &query[i].first.second 2.6 SQRT Decomposition
);
query[i].second = i;
/**
}
Implementation of SQRT-Decomposition Data Structure
Running time:
sort( query, query+m, cmp );
O( ( n + q ) * sqrt( n ) * f() )
Usage:
int mo_l = 0, mo_r = -1;
- call int() to initialize the array
- call update() to update the element in a position
for( int i=0; i<m; i++ ) {
- call query() to get ans from segment [L...R]
int left = query[i].first.first - 1;
Input:
int right = query[i].first.second - 1;
- n, number of elements
- n elements
while( mo_r < right ) {
- q queries
mo_r++;
Tested Problems:
add( ar[mo_r] );
lightOJ:
}
1082 - Array Queries
**/
while( mo_r > right ) {
#include <bits/stdc++.h>
remove( ar[mo_r] );
using namespace std;

#endif // LU_SERIOUS
const int mx = 1e5 + 1; scanf( "%d", &t );
const int sz = 1e3 + 1; for( cs=1; cs<=t; cs++ ) {
const int inf = 1e9; scanf( "%d %d", &n, &q );
int BLOCK_SIZE; BLOCK_SIZE = sqrt( n );
int n, q, t, cs, x, y; init();
int BLOCKS[sz]; for( int i=0; i<n; i++ ) {
int ar[mx]; scanf( "%d", &ar[i] );
update( i, ar[i] );
int getID( int idx ) { }
return idx / BLOCK_SIZE; printf( "Case %d:\n", cs );
} for( int i=0; i<q; i++ ) {
scanf( "%d %d", &x, &y );
void init() { printf( "%d\n", query( x-1, y-1 ) );
for( int i=0; i<sz; i++ ) BLOCKS[i] = inf; }
} }
return 0;
void update( int idx, int val ) { }
int id = getID( idx );
BLOCKS[id] = min( val, BLOCKS[id] );
}
2.7 STL order statistics tree II
int query( int l, int r ) {
int le = getID( l );
#include <bits/stdc++.h>
int ri = getID( r );
#include <ext/pb_ds/assoc_container.hpp>
int ret = inf;
#include <ext/pb_ds/tree_policy.hpp>
if( le == ri ) {
using namespace std;
for( int i=l; i<=r; i++ ) {
using namespace __gnu_pbds;
ret = min( ret, ar[i] );
}
typedef tree<int,null_type,less<int>,rb_tree_tag,
return ret;
tree_order_statistics_node_update> order_set;
}
order_set X;
for( int i=l; i<(le+1)*BLOCK_SIZE; i++ ) ret = min( ret, ar[i] );
for( int i=le+1; i<ri; i++ ) ret = min( ret, BLOCKS[i] );
int get(int y) {
for( int i=ri*BLOCK_SIZE; i<=r; i++ ) ret = min( ret, ar[i] );
int l=0,r=1e9+1;
while(l<r) {
return ret;
int m=l+((r-l)>>1);
}
if(m-X.order_of_key(m+1)<y)
l=m+1;
int main() {
else
#ifdef LU_SERIOUS
r=m;
freopen( "in.txt", "r", stdin );
}
// freopen( "out.txt", "w+", stdout );
return l;

}
using namespace __gnu_pbds;
main(){ using namespace std;
ios::sync_with_stdio(0);
cin.tie(0); typedef
int n,m; tree<
cin>>n>>m; pair<int,int>,
null_type,
for(int i=0;i<m;i++) { less<pair<int,int>>,
char a; rb_tree_tag,
int b; tree_order_statistics_node_update>
cin>>a>>b; ordered_set;
if(a==’L’)
cout<<get(b)<<endl; main()
else {
X.insert(get(b)); ios::sync_with_stdio(0);
} cin.tie(0);
} int n;
int sz=0;
/*** cin>>n;
Input vector<int> ans(n,0);
20 7
L 5 ordered_set t;
D 5 int x,y;
L 4 for(int i=0;i<n;i++)
L 5 {
D 5 cin>>x>>y;
L 4 ans[t.order_of_key({x,++sz})]++;
L 5 t.insert({x,sz});
}
Output
5 for(int i=0;i<n;i++)
4 cout<<ans[i]<<’\n’;
6 }
4
7 /***
***/ Input
5
1 1
5 1
2.8 STL order statistics tree 7 1
3 3
5 5
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
Output
#include <bits/stdc++.h>

1 int query( int node, int b, int e, int i, int j ) {
2 if( i > e || j < b ) {
1 return 0;
1 }
0 if( b >= i and e <= j ) {
***/ return tree[node].sum;
}

int left = node << 1;


2.9 Segment Tree int right = left | 1;
int mid = (b + e) >> 1;
struct info {
if( tree[node].prop != -1 ) {
int prop, sum;
tree[left].sum = ( mid - b + 1 ) * tree[node].prop;
} tree[ mx * 3 ];
tree[right].sum = ( e - mid ) * tree[node].prop;
tree[node].sum = tree[left].sum + tree[right].sum;
void update( int node, int b, int e, int i, int j, int x ) {
tree[left].prop = tree[node].prop;
// cerr << b << " " << e << " " << i << " " << j << " " << x << "\n";
tree[right].prop = tree[node].prop;
if( i > e || j < b ) {
tree[node].prop = -1;
return;
}
}
if( b >= i && e <= j ) {
int p1 = query( left, b, mid, i, j );
tree[node].sum = ( e - b + 1 ) * x;
int p2 = query( right, mid + 1, e, i, j );
tree[node].prop = x;
return;
}
return p1 + p2;
}
int left = node << 1;
int right = left | 1;
int mid = (b + e) >> 1;

if( tree[node].prop != -1 ) {
2.10 hash table
tree[left].sum = ( mid - b + 1 ) * tree[node].prop;
tree[right].sum = ( e - mid ) * tree[node].prop; /**
tree[node].sum = tree[left].sum + tree[right].sum; * Micro hash table, can be used as a set.
tree[left].prop = tree[node].prop; * Very efficient vs std::set
tree[right].prop = tree[node].prop; * */
tree[node].prop = -1;
} const int MN = 1001;
struct ht {
update(left, b, mid, i, j, x); int _s[(MN + 10) >> 5];
update(right, mid + 1, e, i, j, x); int len;
void set(int id) {
tree[node].sum = tree[left].sum + tree[right].sum; len++;
} _s[id >> 5] |= (1LL << (id & 31));
}

bool is_set(int id) { cur-> acc = len - cur-> acc;
return _s[id >> 5] & (1LL << (id & 31)); if (cur-> l) {
} cur-> l = copy_node(cur-> l);
}; cur-> l -> flip ^= 1;
}
if (cur-> r) {
cur-> r = copy_node(cur-> r);
2.11 persistent seg tree cur-> r -> flip ^= 1;
}
cur-> flip = 0;
/**
}
* Important:
}
* When using lazy propagation remembert to create new
* versions for each push_down operation!!!
int get_val(pnode cur) {
* */
assert(cur);
assert((cur-> flip) == 0);
struct node {
if (cur) return cur-> acc;
node *l, *r;
return 0;
long long acc;
}
int flip;
pnode update(pnode cur, int l, int r, int at, int what) {
node (int x) : l(NULL), r(NULL), acc(x), flip(0) {}
pnode ans = copy_node(cur);
node () : l(NULL), r(NULL), acc(0), flip(0) {}
if (l == r) {
};
assert(l == at);
ans-> acc = what;
typedef node* pnode;
ans-> flip = 0;
return ans;
pnode create(int l, int r) {
}
if (l == r) return new node();
int m = (l + r) >> 1;
pnode cur = new node();
push_down(ans, l, r);
int m = (l + r) >> 1;
if (at <= m) ans-> l = update(ans-> l, l, m, at, what);
cur-> l = create(l, m);
else ans-> r = update(ans-> r, m + 1, r, at, what);
cur-> r = create(m + 1, r);
return cur;
push_down(ans-> l, l, m);
}
push_down(ans-> r, m + 1, r);
ans-> acc = get_val(ans-> l) + get_val(ans-> r);
pnode copy_node(pnode cur) {
return ans;
pnode ans = new node();
}
*ans = *cur;
return ans;
pnode flip(pnode cur, int l, int r, int a, int b) {
}
pnode ans = new node();
void push_down(pnode cur, int l, int r) {
if (cur != NULL) {
assert(cur);
*ans = *cur;
if (cur-> flip) {
}
int len = r - l + 1;

if (l > b || r < a) deque< pair<int, int> > window;
return ans; vector<int> ans;
for (int i = 0; i < ARR.size(); i++) {
if (l >= a && r <= b) { if (mx) {
ans-> flip ^= 1; while (!window.empty() && window.back().first <= ARR[i])
push_down(ans, l, r); window.pop_back();
return ans; } else {
} while (!window.empty() && window.back().first >= ARR[i])
window.pop_back();
int m = (l + r) >> 1; }
ans-> l = flip(ans-> l, l, m, a, b); window.push_back(make_pair(ARR[i], i));
ans-> r = flip(ans-> r, m + 1, r, a, b);
push_down(ans-> l, l, m); while(window.front().second <= i - K)
push_down(ans-> r, m + 1, r); window.pop_front();
ans-> acc = get_val(ans-> l) + get_val(ans-> r);
return ans; ans.push_back(window.front().first);
} }
return ans;
long long get_all(pnode cur, int l, int r) { }
assert(cur);
push_down(cur, l, r);
return cur-> acc;
} 2.13 trie xor
void traverse(pnode cur, int l, int r) {
#define MAX 500001
if (!cur) return;
cout << l << " - " << r << " : " << (cur-> acc) << " " << (cur-> flip)
struct trie
<< endl;
{
traverse(cur-> l, l, (l + r) >> 1);
int cand[2];
traverse(cur-> l, 1 + ((l + r) >> 1), r);
trie()
}
{
clrall(cand,-1);
}
};
2.12 sliding window
trie tree[MAX*32+7];
ll csum;
/*
* Given an array ARR and an integer K, the problem boils down to int tot_node;
computing for each index i: min(ARR[i], ARR[i-1], ..., ARR[i-K+1]).
* if mx == true, returns the maximun. void insert_trie(int root,ll val)
* http://people.cs.uct.ac.za/~ksmith/articles/sliding_window_minimum.html {
* */ int i,j,k;
int fbit;
vector<int> sliding_window_minmax(vector<int> & ARR, int K, bool mx) { rvp(i,0,32)

{ root=tree[root].cand[cbit];
fbit=(int) ((val>>(ll) i)&1LL); }
if(tree[root].cand[fbit]==-1) }
{ return res;
tree[root].cand[fbit] = ++tot_node; }
}
root = tree[root].cand[fbit]; ll max_val(ll val)
} {
return ; int i,j,k;
} ll ret=0;
int gbit;
int delete_trie(int root,ll val,int i) rvp(i,0,32)
{ {
if(i==-1) return 0; gbit=(int) ((val>>(ll) i)&1LL);
int fbit; if(!gbit) ret|=(1LL << (ll) i);
fbit=(int) ((val>>(ll) i)&1LL); }
if(tree[root].cand[fbit]==-1) return 0; return ret;
int chld=delete_trie(tree[root].cand[fbit],val,i-1); }
if(chld==0)
{
tree[root].cand[fbit]=-1;
} 2.14 trie
int nchld=0;
if(tree[root].cand[fbit]!=-1) nchld++;
struct node
if(tree[root].cand[!fbit]!=-1) nchld++;
{
return nchld;
bool endmark;
}
node *next[26+1];
node()
ll solve(int root,ll cval)
{
{
endmark=false;
ll res=0;
for(int i=0;i<26;i++) next[i]=NULL;
int fbit,cbit;
}
int i,j,k;
}*root;
rvp(i,0,32)
{
void insert(char *str,int len)
fbit=(int) ((cval>>(ll) i)&1LL);
{
cbit=!(fbit);
node *curr=root;
if(tree[root].cand[fbit]!=-1)
for(int i=0;i<len;i++)
{
{
if(fbit) res|=(1LL << (ll) i);
int id=str[i]-’a’;
root=tree[root].cand[fbit];
if(curr->next[id]==NULL)
}
curr->next[id]=new node();
else
curr=curr->next[id];
{
}
if(cbit) res|=(1LL << (ll) i);
curr->endmark=true;

for (int i = 0; i < p.size(); ++i)
} for (int j = i + 1; j < p.size(); ++j)
bool search(char *str,int len) best = min(best, dist(p[i], p[j]));
{ return best;
node *curr=root; }
for(int i=0;i<len;i++)
{ int ls = (p.size() + 1) >> 1;
int id=str[i]-’a’; double l = (p[ls - 1].x + p[ls].x) * 0.5;
if(curr->next[id]==NULL) return false; vector<point> xl(ls), xr(p.size() - ls);
curr=curr->next[id]; unordered_set<int> left;
} for (int i = 0; i < ls; ++i) {
return curr->endmark; xl[i] = x[i];
} left.insert(x[i].id);
void del(node *cur) }
{ for (int i = ls; i < p.size(); ++i) {
for(int i=0;i<26;i++) xr[i - ls] = x[i];
if(cur->next[i]) }
del(cur->next[i]) ;
vector<point> yl, yr;
vector<point> pl, pr;
delete(cur) ; yl.reserve(ls); yr.reserve(p.size() - ls);
} pl.reserve(ls); pr.reserve(p.size() - ls);
for (int i = 0; i < p.size(); ++i) {
if (left.count(y[i].id))
yl.push_back(y[i]);
else
3 Geometry yr.push_back(y[i]);

3.1 closest pair problem if (left.count(p[i].id))


pl.push_back(p[i]);
else
struct point {
pr.push_back(p[i]);
double x, y;
}
int id;
point() {}
double dl = cp(pl, xl, yl);
point (double a, double b) : x(a), y(b) {}
double dr = cp(pr, xr, yr);
};
double d = min(dl, dr);
vector<point> yp; yp.reserve(p.size());
double dist(const point &o, const point &p) {
for (int i = 0; i < p.size(); ++i) {
double a = p.x - o.x, b = p.y - o.y;
if (fabs(y[i].x - l) < d)
return sqrt(a * a + b * b);
yp.push_back(y[i]);
}
}
for (int i = 0; i < yp.size(); ++i) {
double cp(vector<point> &p, vector<point> &x, vector<point> &y) {
for (int j = i + 1; j < yp.size() && j < i + 7; ++j) {
if (p.size() < 4) {
d = min(d, dist(yp[i], yp[j]));
double best = 1e100;

} dx2 = q.x - pivot.x;
} dy1 = p.y - pivot.y;
return d; dy2 = q.y - pivot.y;
} return atan2(dy1, dx1) < atan2(dy2, dx2); // atan2 function is used
for double
double closest_pair(vector<point> &p) { }
vector<point> x(p.begin(), p.end());
sort(x.begin(), x.end(), [](const point &a, const point &b) { void buildhull() // works if n>=3
return a.x < b.x; {
}); int i,j,id;
vector<point> y(p.begin(), p.end()); double x,y;
sort(y.begin(), y.end(), [](const point &a, const point &b) { g_point p,mx;
return a.y < b.y; hull.clear();
}); mx = vp[0];
return cp(p, x, y); id = 0;
} for (i=0;i<vp.size();i++)
{
p = vp[i];
if (p.y < mx.y || (p.y == mx.y && p.x > mx.x))
3.2 convex hull {
mx = p;
id = i;
/**
}
}
CONVEX HULL
swap(vp[0],vp[id]);
pivot = mx;
Definition:
sort(vp.begin()+1,vp.end(),cmp);
** Uses vp as an inner temporary vector to store hull
vector <g_point> stk;
** Uses hull vector to store the hull
int sz = vp.size() - 1;
stk.PB(vp[sz]);
**/
stk.PB(vp[0]);
stk.PB(vp[1]);
i = 2;
vector <g_point> vp;
while (i < n)
vector <g_point> hull;
{
p = vp[i];
g_point pivot;
sz = stk.size() - 1;
if (g_ccw(stk[sz],stk[sz-1],p))
bool cmp(g_point p, g_point q)
{
{
stk.PB(p);
if (g_collinear(p,pivot,q))
i++;
{
}
return g_dist(pivot,p) < g_dist(pivot,q);
else
}
{
double dx1,dx2,dy1,dy2;
stk.pop_back();
dx1 = p.x - pivot.x;

}
} bool point_in_box(square s1, point p) {
swap(hull,stk); if (cmp(s1.x1, p.x) != 1 && cmp(s1.x2, p.x) != -1 &&
} cmp(s1.y1, p.y) != 1 && cmp(s1.y2, p.y) != -1)
return true;
return false;
}
3.3 squares
bool inside(square &s1, square &s2) {
for (int i = 0; i < 4; ++i)
if (point_in_box(s2, s1.edges[i]))
typedef long double ld;
return true;
const ld eps = 1e-12;
return false;
int cmp(ld x, ld y = 0, ld tol = eps) {
}
return ( x <= y + tol) ? (x + tol < y) ? -1 : 0 : 1;
}
bool inside_vert(square &s1, square &s2) {
if ((cmp(s1.y1, s2.y1) != -1 && cmp(s1.y1, s2.y2) != 1) ||
struct point{
(cmp(s1.y2, s2.y1) != -1 && cmp(s1.y2, s2.y2) != 1))
ld x, y;
return true;
point(ld a, ld b) : x(a), y(b) {}
return false;
point() {}
}
};
bool inside_hori(square &s1, square &s2) {
struct square{
if ((cmp(s1.x1, s2.x1) != -1 && cmp(s1.x1, s2.x2) != 1) ||
ld x1, x2, y1, y2,
(cmp(s1.x2, s2.x1) != -1 && cmp(s1.x2, s2.x2) != 1))
a, b, c;
return true;
point edges[4];
return false;
square(ld _a, ld _b, ld _c) {
}
a = _a, b = _b, c = _c;
x1 = a - c * 0.5;
ld min_dist(square &s1, square &s2) {
x2 = a + c * 0.5;
if (inside(s1, s2) || inside(s2, s1))
y1 = b - c * 0.5;
return 0;
y2 = b + c * 0.5;
edges[0] = point(x1, y1);
ld ans = 1e100;
edges[1] = point(x2, y1);
for (int i = 0; i < 4; ++i)
edges[2] = point(x2, y2);
for (int j = 0; j < 4; ++j)
edges[3] = point(x1, y2);
ans = min(ans, min_dist(s1.edges[i], s2.edges[j]));
}
};
if (inside_hori(s1, s2) || inside_hori(s2, s1)) {
ld min_dist(point &a, point &b) {
if (cmp(s1.y1, s2.y2) != -1)
ld x = a.x - b.x,
ans = min(ans, s1.y1 - s2.y2);
y = a.y - b.y;
else
return sqrt(x * x + y * y);
if (cmp(s2.y1, s1.y2) != -1)
}

ans = min(ans, s2.y1 - s1.y2); }
} };

if (inside_vert(s1, s2) || inside_vert(s2, s1)) { struct g_line


if (cmp(s1.x1, s2.x2) != -1) {
ans = min(ans, s1.x1 - s2.x2); double a,b,c;
else g_line ()
if (cmp(s2.x1, s1.x2) != -1) {
ans = min(ans, s2.x1 - s1.x2); a = b = c = 0;
} }
g_line(double _a,double _b, double _c)
return ans; {
} a = _a; b = _b; c = _c;
}
};

3.4 template struct g_vector


{
double x,y;
const double EPS = 1e-9;
g_vector (double _x, double _y)
const double PI = acos(-1.0);
{
x = _x;
/** GeometryStructures **/
y = _y;
}
struct g_point
};
{
double x,y;
g_point()
/** Function List **/
{
x = y = 0;
//Geometry
}
double DEG_to_RAD (double theta); ///converts degree to radian
g_point(double _x,double _y)
double RAD_to_DEG (double theta); ///converst radian to degree
{
double g_dist(g_point p1, g_point p2); ///finds euclidian distance
x = _x; y = _y;
between two points
}
g_point g_rotate(g_point p, double theta); ///rotates point ’p’ by
bool operator < (g_point other) const
’theta’ degrees
{
void g_pointsToLine (g_point p1, g_point p2, g_line &l); ///initiates
if (fabs(x - other.x) > EPS)
line ’l’
{
bool g_areParallel(g_line l1, g_line l2); ///returns true if two lines
return x < other.x;
are parallel
}
bool g_areSame(g_line l1, g_line l2); ///returns true if two lines are
return y < other.y;
same or two segments (l1 and l2) are in same line
}
bool g_areIntersect(g_line l1, g_line l2, g_point &p); ///returns true if
bool operator == (g_point other) const
two lines intersect, sets the point of intersect as ’p’
{
return ((fabs(x - other.x) < EPS) && (fabs(y - other.y)));

g_vector g_toVec(g_point a, g_point b); ///returns a vector from point // cout<<c.x<<" "<<c.y<<endl;
’a’ -> ’b’ return 0;
g_vector g_scale(g_vector v, double s); ///returns a vector scaled or }
multiplied by ’s’
g_point g_translate(g_point p, g_vector v); ///returns a point which is a
translation of ’p’
double g_dot (g_vector a, g_vector b); ///returns dot product of two
vectors /** GeometryFunctions **/
double g_cross(g_vector a, g_vector b); ///returns cross product of two
vectors double DEG_to_RAD (double theta)
double g_norm_sq(g_vector v); ///returns v.x * v.x + v.y * v.y, essential {
for finding distance of point to line segment and angle between return ((theta * PI)/180.0);
points. }
double g_distToLine(g_point p, g_point a, g_point b, g_point &c);
///returns shortest distance from point ’p’ to line a->b, stores double RAD_to_DEG (double theta)
closest point to as ’c’ {
double g_distToLineSegment(g_point p, g_point a, g_point b, g_point &c); return ((theta * 180.0) /PI);
/// returns shortest distance from point ’p’ to lineSegment a->b, }
stores closest point to as ’c’
double g_angle(g_point a, g_point o, g_point b); ///return angle <aob double g_dist(g_point p1, g_point p2)
bool g_ccw(g_point q, g_point p, g_point r); ///returns true if ’r’ in {
left side of line p->q (counter clock-wise test) return hypot(p1.x - p2.x, p1.y - p2.y);
bool g_cw(g_point q, g_point p, g_point r); ///returns true if ’r’ in }
right side of line p->q (clock-wise test)
bool g_collinear (g_point q, g_point p, g_point r); ///returns true if g_point g_rotate(g_point p, double theta)
three points are collinear; {
double rad = DEG_to_RAD(theta);
int GCD (int x, int y){if (x%y==0) return y; else return (GCD(y,x%y));} //rad = theta;
//rad = (theta *(180.0/PI));
int main() return g_point (p.x * cos(rad) - p.y * sin(rad), p.x * sin(rad) + p.y
{ * cos (rad));
#ifdef O_Amay_Valo_Basheni }
freopen("get.txt","r",stdin);
#endif // O_Amay_Valo_Basheni void g_pointsToLine (g_point p1, g_point p2, g_line &l)
g_point a(2,2),b(4,3),c(3,2); {
g_vector ab = g_toVec(a,b); if (fabs(p1.x - p2.x) < EPS)
//ab.x/=2; ab.y/=2; {
c = g_translate(c,ab); l.a = 1.0; l.b = 0.0; l.c = -p1.x;
cout<<c.x<<" "<<c.y<<endl; }
c = g_rotate(c,360); else
cout<<c.x<<" "<<c.y<<endl; {
c = g_rotate(c,180); l.b = 1.0;
cout<<c.x<<" "<<c.y<<endl; l.a = -(double) (p1.y - p2.y)/ (p1.x - p2.x);
// double d = DEG_to_RAD(360); l.c = -(double) (l.a * p1.x) - p1.y;
// c = g_rotate(c,d); }

} return (a.x * b.y - a.y * b.x);
}
bool g_areParallel(g_line l1, g_line l2)
{ double g_norm_sq(g_vector v)
return ((fabs (l1.a - l2.a) < EPS) && (fabs(l1.b - l2.b) < EPS)); {
} return v.x * v.x + v.y * v.y;
}
bool g_areSame(g_line l1, g_line l2)
{ double g_distToLine(g_point p, g_point a, g_point b, g_point &c)
return ((g_areParallel(l1,l2)) && (fabs(l1.c - l2.c) < EPS)); {
} g_vector ap = g_toVec(a,p);
g_vector ab = g_toVec(a,b);
bool g_areIntersect(g_line l1, g_line l2, g_point &p) double u = g_dot(ap,ab) / g_norm_sq(ab);
{ c = g_translate(a,g_scale(ab,u));
if (g_areParallel(l1,l2)) return false; return g_dist (p,c);
p.x = (l2.b * l1.c - l1.b * l2.c) / (l2.a * l1.b - l1.a * l2.b); }
if (fabs(l1.b) > EPS)
p.y = -(l1.a * p.x + l1.c); double g_distToLineSegment(g_point p, g_point a, g_point b, g_point &c)
else {
p.y = -(l2.a * p.x + l2.c); g_vector ap = g_toVec(a,p);
return true; g_vector ab = g_toVec(a,b);
} double u = g_dot(ap,ab) / g_norm_sq(ab);
if (u < 0.0)
g_vector g_toVec(g_point a, g_point b) {
{ return g_dist(p,a);
return g_vector (b.x - a.x, b.y - a.y); }
} if (u > 1.0)
{
g_vector g_scale(g_vector v, double s) return g_dist(p,b);
{ }
return g_vector(v.x * s, v.y * s); c = g_translate(a,g_scale(ab,u));
} return g_dist(p,c);
}
g_point g_translate(g_point p, g_vector v)
{ double g_angle(g_point a, g_point o, g_point b)
return g_point(p.x + v.x, p.y + v.y); {
} g_vector oa = g_toVec(o,a);
g_vector ob = g_toVec(o,b);
double g_dot(g_vector a, g_vector b) return acos (g_dot(oa,ob) / sqrt(g_norm_sq(oa) * g_norm_sq(ob)));
{ }
return (a.x * b.x + a.y * b.y);
} bool g_ccw(g_point q, g_point p, g_point r)
{
double g_cross(g_vector a, g_vector b) return g_cross (g_toVec(p,q),g_toVec(p,r)) > 0;
{ }

**/
bool g_cw(g_point q, g_point p, g_point r)
{ const int mx = 1e4 + 10;
return g_cross (g_toVec(p,q),g_toVec(p,r)) < 0; vector < int > G[mx];
} int tim, root, n, m, a, b;
int ap[mx], vis[mx], low[mx], d[mx], par[mx];
bool g_collinear (g_point q, g_point p, g_point r)
{ void ap_dfs(int u) {
return fabs(g_cross(g_toVec(p,q),g_toVec(p,r))) < EPS; tim++;
} int cnt = 0;
low[u] = tim;
d[u] = tim;
vis[u] = 1;
3.5 triangles int v;
Let a, b, c be the lengths of the three sides of a triangle, and let

    p = (a + b + c) * 0.5

The inradius is defined by:

    iR = sqrt( (p - a)(p - b)(p - c) / p )

The radius of its circumcircle is given by the formula:

    cR = abc / sqrt( (a + b + c)(a + b - c)(a + c - b)(b + c - a) )

    for(int i=0; i<G[u].size(); i++) {
        v = G[u][i];
        if( v == par[u] ) continue;
        if( !vis[v] ) {
            par[v] = u;
            ap_dfs( v );
            low[u] = min( low[u], low[v] );
            /// d[u] < low[v] if bridge is needed
            if( d[u] <= low[v] && u != root ) {
                ap[u] = 1;
            }
            cnt++;
        } else {
low[u] = min( low[u], d[v] );
}
4 Graph if( u == root && cnt > 1 ) ap[u] = 1;
}
4.1 Articulation Point }

/**
An O(V+E) approach:
-------------------
4.2 Bellman Ford
- perform a DFS on the graph.
- compute d(i) and low(i) for each vertex 1 ... i struct Edge {
d(i): dfs number of i, represents the discovery time. int u, v, w;
low(i): the least dfn reachable from i throguh a path consisting };
of zero or vector < Edge > vs;
more edges follwoing by zero or one back edges. int n, dis[205];
- vertex u is an AP if and only if:
- u is the root of the dfs tree and has at least two children. void bell() {
- u is not the root and has a child v for which low(v) >= d(u). dis[1] = 0;

for(int i = 1; i < n; i++) { 4.4 Dinic
for(int j = 0; j < vs.size(); j++) {
if( dis[ vs[j].v ] > dis[ vs[j].u ] + vs[j].w ) {
// Adjacency list implementation of Dinic’s blocking flow algorithm.
dis[vs[j].v] = dis[vs[j].u] + vs[j].w;
// This is very fast in practice, and only loses to push-relabel flow.
}
//
}
// Running time:
}
// O(|V|^2 |E|)
}
//
// INPUT:
// - graph, constructed using AddEdge()
// - source and sink
4.3 DAG Check //
// OUTPUT:
// - maximum flow value
vector < int > G[mx], tops; // - To obtain actual flow values, look at edges with capacity > 0
bool vis[mx], bg[mx]; // (zero capacity edges are residual edges).
bool dfs( int u ) { #include<cstdio>
bool ret = true; #include<vector>
if( !vis[u] ) { #include<queue>
vis[u] = 1; using namespace std;
bg[u] = 1; typedef long long LL;
int v;
for( int i=0; i<G[u].size(); i++ ) { struct Edge {
v = G[u][i]; int u, v;
if( !vis[v] ) { LL cap, flow;
ret &= dfs( v ); Edge() {}
} Edge(int u, int v, LL cap): u(u), v(v), cap(cap), flow(0) {}
if( bg[v] ) return false; };
}
} struct Dinic {
bg[u] = false; int N;
return true; vector<Edge> E;
} vector<vector<int>> g;
vector<int> d, pt;
bool dag( int n) {
memset( vis, 0, sizeof vis ); Dinic(int N): N(N), E(0), g(N), d(N), pt(N) {}
memset( bg, 0, sizeof bg );
bool ret = true; void AddEdge(int u, int v, LL cap) {
for( int i=1; i<=n; i++ ) { if (u != v) {
if( !vis[i] ) { E.emplace_back(Edge(u, v, cap));
ret &= dfs( i ); g[u].emplace_back(E.size() - 1);
} E.emplace_back(Edge(v, u, 0));
} g[v].emplace_back(E.size() - 1);
}

} return total;
}
bool BFS(int S, int T) { };
queue<int> q({S});
fill(d.begin(), d.end(), N + 1); // BEGIN CUT
d[S] = 0; // The following code solves SPOJ problem #4110: Fast Maximum Flow
while(!q.empty()) { (FASTFLOW)
int u = q.front(); q.pop();
if (u == T) break; int main()
for (int k: g[u]) { {
Edge &e = E[k]; int N, E;
if (e.flow < e.cap && d[e.v] > d[e.u] + 1) { scanf("%d%d", &N, &E);
d[e.v] = d[e.u] + 1; Dinic dinic(N);
q.emplace(e.v); for(int i = 0; i < E; i++)
} {
} int u, v;
} LL cap;
return d[T] != N + 1; scanf("%d%d%lld", &u, &v, &cap);
} dinic.AddEdge(u - 1, v - 1, cap);
dinic.AddEdge(v - 1, u - 1, cap);
LL DFS(int u, int T, LL flow = -1) { }
if (u == T || flow == 0) return flow; printf("%lld\n", dinic.MaxFlow(0, N - 1));
for (int &i = pt[u]; i < g[u].size(); ++i) { return 0;
Edge &e = E[g[u][i]]; }
Edge &oe = E[g[u][i]^1];
if (d[e.v] == d[e.u] + 1) { // END CUT
LL amt = e.cap - e.flow;
if (flow != -1 && amt > flow) amt = flow;
if (LL pushed = DFS(e.v, T, amt)) {
e.flow += pushed; 4.5 Edmonds Karp
oe.flow -= pushed;
return pushed;
/**
}
Implementation of Edmonds-Karp max flow algorithm
}
Running time:
}
O(|V|*|E|^2)
return 0;
Usage:
}
- add edges by add_edge()
- call max_flow() to get maximum flow in the graph
LL MaxFlow(int S, int T) {
Input:
LL total = 0;
- n, number of nodes
while (BFS(S, T)) {
- directed, true if the graph is directed
fill(pt.begin(), pt.end(), 0);
- graph, constructed using add_edge()
while (LL flow = DFS(S, T))
- source, sink
total += flow;
Output:
}
- Maximum flow

Tested Problems: par[ i ] = u;
CF: }
653D - Delivery Bears }
UVA: }
820 - Internet Bandwidth return par[ sink ] != -1;
10330 - Power Transmission }
**/
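A minimal usage sketch for the interface described above; the node count, edges and capacities are made-up values.

edmonds_karp ek( 4 );
ek.add_edge( 0, 1, 5, true );        // directed edge 0 -> 1 with capacity 5
ek.add_edge( 1, 3, 3, true );
ek.add_edge( 0, 2, 4, true );
ek.add_edge( 2, 3, 4, true );
int flow = ek.max_flow( 0, 3 );      // maximum flow from source 0 to sink 3 (7 here)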
int min_val( int i ) {
#include <bits/stdc++.h> int ret = INF;
using namespace std; for( ; par[ i ] != -1; i = par[ i ] ) {
ret = min( ret, graph[ par[i] ][ i ] );
const int INF = 1e9; }
return ret;
struct edmonds_karp { }
int n;
vector < int > par; void augment_path( int val, int i ) {
vector < bool > vis; for( ; par[ i ] != -1; i = par[ i ] ) {
vector < vector < int > > graph; graph[ par[i] ][ i ] -= val;
graph[ i ][ par[i] ] += val;
edmonds_karp () {} }
edmonds_karp( int _n ) : n( _n ), par( _n ), vis( _n ), graph( _n, }
vector< int > ( _n, 0 ) ) {}
~edmonds_karp() {} int max_flow( int src, int sink ) {
int min_cap, ret = 0;
void add_edge( int from, int to, int cap, bool directed ) { while( bfs( src, sink ) ) {
this->graph[ from ][ to ] += cap; augment_path( min_cap = min_val( sink ), sink );
this->graph[ to ][ from ] = directed ? graph[ to ][ from ] + cap : ret += min_cap;
graph[ to ][ from ] ; }
} return ret;
}
bool bfs( int src, int sink ) { };
int u;
fill( vis.begin(), vis.end(), false );
fill( par.begin(), par.end(), -1 );
vis[ src ] = true; 4.6 EulerianPath
queue < int > q;
q.push( src );
struct Edge;
while( !q.empty() ) {
typedef list<Edge>::iterator iter;
u = q.front();
q.pop();
struct Edge
if( u == sink ) return true;
{
for(int i=0; i<n; i++) {
int next_vertex;
if( graph[u][i] > 0 and not vis[i] ) {
iter reverse_edge;
q.push( i );
vis[ i ] = true;
Edge(int next_vertex)

:next_vertex(next_vertex) 544 - Heavy Cargo - MaxiMin path
{ } 567 - Risk - APSP
}; **/

const int max_vertices = ; using vi = vector < int >;


int num_vertices; using vvi = vector < vi >;
list<Edge> adj[max_vertices]; // adjacency list
/// mat[i][i] = 0, mat[i][j] = distance from i to j, path[i][j] = i
vector<int> path; void APSP( vvi &mat, vvi &path ) {
int V = mat.size();
void find_path(int v) for( int via=0; via; via<V; via++ ) {
{ for( int from=0; from<V; from++ ) {
while(adj[v].size() > 0) for( int to=0; to<V; to++ ) {
{ if( mat[ from ][ via ] + mat[ via ][ to ] < mat[ from ][
int vn = adj[v].front().next_vertex; to ] ) {
adj[vn].erase(adj[v].front().reverse_edge); mat[ from ][ to ] = mat[ from ][ via ] + mat[ via ][ to
adj[v].pop_front(); ];
find_path(vn); path[ from ][ to ] = path[ via ][ to ];
} }
path.push_back(v); }
} }
}
void add_edge(int a, int b) }
{
adj[a].push_front(Edge(b)); /// prints the path from i to j
iter ita = adj[a].begin(); void print( int i, int j ) {
adj[b].push_front(Edge(a)); if( i != j ) {
iter itb = adj[b].begin(); print( i, path[i][j] );
ita->reverse_edge = itb; }
itb->reverse_edge = ita; cout << j << "\n";
} }

/// check if negative cycle exists


bool negative_cycle( vvi &mat ) {
4.7 Floyed Warshall APSP( mat );
return mat[0][0] < 0;
}
/**
Implementation of Floyd Warshall Alogrithm
void transtitive_closure( vvi &mat ) {
Running time:
int V = mat.size();
O( |v| ^ 3 )
for( int via=0; via; via<V; via++ ) {
Input:
for( int from=0; from<V; from++ ) {
- n, number vertex
for( int to=0; to<V; to++ ) {
- graph, inputed as an adjacency matrix
mat[ from ][ to ] |= ( mat[ from ][ via ] & mat[ via ][ to
Tested Problems:
] );
UVA:

} Input:
} - n, number of nodes, provided when init() is called
} - graph, constructed using add_edge()
} Output:
- weight of minimum spanning tree
/// finding a path between two nodes that maximizes the minimum cost - prints the mst
void mini_max( vvi &mat ) { Tested Problems:
int V = mat.size(); UVA:
for( int via=0; via; via<V; via++ ) { 1208 - Oreon
for( int from=0; from<V; from++ ) { */
for( int to=0; to<V; to++ ) {
mat[ from ][ to ] = min( mat[ from ][ to ], max( mat[ from vector< edge > edges;
][ via ], mat[ via ][ to ] ) ); vector < int > par, cnt, rank;
} int N;
}
}
} int kruskal() {
int ret = 0;
/// finding a path between two nodes that minimizes the maximum cost make_set();
/// eg: max load a truck can carry from one node to another node where sort( edges.begin(), edges.end() );
/// the paths have weight limit cout << "Case " << ++cs << ":\n";
void maxi_min( vvi &mat ) { for( edge e : edges ) {
int V = mat.size(); int u = e.u;
for( int via=0; via; via<V; via++ ) { int v = e.v;
for( int from=0; from<V; from++ ) { if( ( u = find_rep( u ) ) != ( v = find_rep( v ) ) ) {
for( int to=0; to<V; to++ ) { if( rank[ u ] < rank[ v ] ) {
mat[ from ][ to ] = max( mat[ from ][ to ], min( mat[ from cnt[ v ] += cnt[ u ];
][ via ], mat[ via ][ to ] ) ); par[ u ] = par[ v ];
} } else {
} rank[ u ] = max( rank[ u ], rank[ v ] + 1 );
} cnt[ u ] += cnt[ v ];
} par[ v ] = par[ u ];
}
cout << city[ e.u ] << "-" << city[ e.v ] << " " << e.cost <<
"\n";
4.8 Kruskal ret += e.cost;
}
}
/**
return ret;
Implementation of Kruskal’s minimum spanning tree algorithm
}
Running time:
O(|E|log|V|)
Usage:
- initialize by calling init() 4.9 Max BPM
- add edges by add_edge()
- call kruskal() to generate minimum spanning tree

typedef long long int ll; // cost[i][j] = cost for pairing left node i with right node j
typedef vector<int> VI; // Lmate[i] = index of right node that left node i pairs with
typedef vector<VI> VVI; // Rmate[j] = index of left node that right node j pairs with
const int inf = 1e9; //
int clr[1001]; // The values in cost[i][j] may be positive or negative. To perform
VVI g; // maximization, simply negate the cost[][] matrix.
//////////////////////////////////////////////////////////////////////
bool FindMatch(int i, const VVI &w, VI &mr, VI &mc, VI &seen) {
for (int j = 0; j < w[i].size(); j++) { #include <algorithm>
if (w[i][j] && !seen[j]) { #include <cstdio>
seen[j] = true; #include <cmath>
if (mc[j] < 0 || FindMatch(mc[j], w, mr, mc, seen)) { #include <vector>
mr[i] = j;
mc[j] = i; using namespace std;
return true;
} typedef vector<double> VD;
} typedef vector<VD> VVD;
} typedef vector<int> VI;
return false;
} double MinCostMatching(const VVD &cost, VI &Lmate, VI &Rmate) {
int n = int(cost.size());
int BipartiteMatching(const VVI &w ) {
VI mr = VI(w.size(), -1); // construct dual feasible solution
VI mc = VI(w[0].size(), -1); VD u(n);
VD v(n);
int ct = 0; for (int i = 0; i < n; i++) {
for (int i = 0; i < w.size(); i++) { u[i] = cost[i][0];
VI seen(w[0].size()); for (int j = 1; j < n; j++) u[i] = min(u[i], cost[i][j]);
if (FindMatch(i, w, mr, mc, seen)) ct++; }
} for (int j = 0; j < n; j++) {
return ct; v[j] = cost[0][j] - u[0];
} for (int i = 1; i < n; i++) v[j] = min(v[j], cost[i][j] - u[i]);
}

// construct primal solution satisfying complementary slackness


4.10 MinCostMatching Lmate = VI(n, -1);
Rmate = VI(n, -1);
int mated = 0;
//////////////////////////////////////////////////////////////////////
for (int i = 0; i < n; i++) {
// Min cost bipartite matching via shortest augmenting paths
for (int j = 0; j < n; j++) {
//
if (Rmate[j] != -1) continue;
// This is an O(n^3) implementation of a shortest augmenting path
if (fabs(cost[i][j] - u[i] - v[j]) < 1e-10) {
// algorithm for finding min cost perfect matchings in dense
Lmate[i] = j;
// graphs. In practice, it solves 1000x1000 problems in around 1
Rmate[j] = i;
// second.
mated++;
//

break; }
} }
}
} // update dual variables
for (int k = 0; k < n; k++) {
VD dist(n); if (k == j || !seen[k]) continue;
VI dad(n); const int i = Rmate[k];
VI seen(n); v[k] += dist[k] - dist[j];
u[i] -= dist[k] - dist[j];
// repeat until primal solution is feasible }
while (mated < n) { u[s] += dist[j];

// find an unmatched left node // augment along path


int s = 0; while (dad[j] >= 0) {
while (Lmate[s] != -1) s++; const int d = dad[j];
Rmate[j] = Rmate[d];
// initialize Dijkstra Lmate[Rmate[j]] = j;
fill(dad.begin(), dad.end(), -1); j = d;
fill(seen.begin(), seen.end(), 0); }
for (int k = 0; k < n; k++) Rmate[j] = s;
dist[k] = cost[s][k] - u[s] - v[k]; Lmate[s] = j;

int j = 0; mated++;
while (true) { }

// find closest double value = 0;


j = -1; for (int i = 0; i < n; i++)
for (int k = 0; k < n; k++) { value += cost[i][Lmate[i]];
if (seen[k]) continue;
if (j == -1 || dist[k] < dist[j]) j = k; return value;
} }
seen[j] = 1;

// termination condition
if (Rmate[j] == -1) break; 4.11 MinCostMaxFlow
// relax neighbors
// Implementation of min cost max flow algorithm using adjacency
const int i = Rmate[j];
// matrix (Edmonds and Karp 1972). This implementation keeps track of
for (int k = 0; k < n; k++) {
// forward and reverse edges separately (so you can set cap[i][j] !=
if (seen[k]) continue;
// cap[j][i]). For a regular max flow, set all edge costs to 0.
const double new_dist = dist[j] + cost[i][k] - u[i] - v[k];
//
if (dist[k] > new_dist) {
// Running time, O(|V|^2) cost per augmentation
dist[k] = new_dist;
// max flow: O(|V|^3) augmentations
dad[k] = j;
// min cost max flow: O(|V|^4 * MAX_EDGE_COST) augmentations
}
//

27
// INPUT: dad[k] = make_pair(s, dir);
// - graph, constructed using AddEdge() width[k] = min(cap, width[s]);
// - source }
// - sink }
//
// OUTPUT: L Dijkstra(int s, int t) {
// - (maximum flow value, minimum cost value) fill(found.begin(), found.end(), false);
// - To obtain the actual flow, look at positive values only. fill(dist.begin(), dist.end(), INF);
fill(width.begin(), width.end(), 0);
#include <cmath> dist[s] = 0;
#include <vector> width[s] = INF;
#include <iostream>
while (s != -1) {
using namespace std; int best = -1;
found[s] = true;
typedef vector<int> VI; for (int k = 0; k < N; k++) {
typedef vector<VI> VVI; if (found[k]) continue;
typedef long long L; Relax(s, k, cap[s][k] - flow[s][k], cost[s][k], 1);
typedef vector<L> VL; Relax(s, k, flow[k][s], -cost[k][s], -1);
typedef vector<VL> VVL; if (best == -1 || dist[k] < dist[best]) best = k;
typedef pair<int, int> PII; }
typedef vector<PII> VPII; s = best;
}
const L INF = numeric_limits<L>::max() / 4;
for (int k = 0; k < N; k++)
struct MinCostMaxFlow { pi[k] = min(pi[k] + dist[k], INF);
int N; return width[t];
VVL cap, flow, cost; }
VI found;
VL dist, pi, width; pair<L, L> GetMaxFlow(int s, int t) {
VPII dad; L totflow = 0, totcost = 0;
while (L amt = Dijkstra(s, t)) {
MinCostMaxFlow(int N) : totflow += amt;
N(N), cap(N, VL(N)), flow(N, VL(N)), cost(N, VL(N)), for (int x = t; x != s; x = dad[x].first) {
found(N), dist(N), pi(N), width(N), dad(N) {} if (dad[x].second == 1) {
flow[dad[x].first][x] += amt;
void AddEdge(int from, int to, L cap, L cost) { totcost += amt * cost[dad[x].first][x];
this->cap[from][to] = cap; } else {
this->cost[from][to] = cost; flow[x][dad[x].first] -= amt;
} totcost -= amt * cost[x][dad[x].first];
}
void Relax(int s, int k, L cap, L cost, int dir) { }
L val = dist[s] + pi[s] - pi[k] + cost; }
if (cap && val < dist[k]) { return make_pair(totflow, totcost);
dist[k] = val; }

}; map<string,int> mp;
stack < int > top_sorted;
// BEGIN CUT
// The following code solves UVA problem #10594: Data Flow void dfs_top_sort(int u) {
vis[u] = true;
int main() { for(int v: G[u]) {
int N, M; if(!vis[v]) {
dfs_top_sort( v );
while (scanf("%d%d", &N, &M) == 2) { }
VVL v(M, VL(3)); }
for (int i = 0; i < M; i++) top_sorted.push( u );
scanf("%Ld%Ld%Ld", &v[i][0], &v[i][1], &v[i][2]); }
L D, K;
scanf("%Ld%Ld", &D, &K); void top_sort() {
for(int i=1; i<=p; i++) {
MinCostMaxFlow mcmf(N+1); if(!vis[i]) {
for (int i = 0; i < M; i++) { dfs_top_sort(i);
mcmf.AddEdge(int(v[i][0]), int(v[i][1]), K, v[i][2]); }
mcmf.AddEdge(int(v[i][1]), int(v[i][0]), K, v[i][2]); }
} }
mcmf.AddEdge(0, 1, D, 0);
void dfs_kosaraju(int u) {
pair<L, L> res = mcmf.GetMaxFlow(0, N); vis[u] = true;
for(int v: gT[u]) {
if (res.first == D) { if(!vis[v]) {
printf("%Ld\n", res.second); dfs_kosaraju( v );
} else { }
printf("Impossible.\n"); }
} }
}
int kosaraju() {
return 0; memset( vis, false, sizeof(vis) );
} top_sort();
int u, ret = 0;
// END CUT memset( vis, false, sizeof(vis) );
while(!top_sorted.empty()) {
u = top_sorted.top();
top_sorted.pop();
4.12 SCC Kosaraju if(!vis[u])
dfs_kosaraju( u ), ret++;
}
#include <bits/stdc++.h>
return ret;
using namespace std;
}
int p, t;
bool vis[1001];
vector<int> G[1001], gT[1001];

4.13 directed mst if (u != root and id[u] < 0) { // Cycle
for (int v = pi[u]; v != u; v = pi[v])
id[v] = cur_id;
const int inf = 1000000 + 10;
id[u] = cur_id++;
}
struct edge {
}
int u, v, w;
edge() {}
if (cur_id == 0)
edge(int a,int b,int c) : u(a), v(b), w(c) {}
break;
};
for (int i = 0; i < cur_nodes; ++i)
/**
if (id[i] < 0) id[i] = cur_id++;
* Computes the minimum spanning tree for a directed graph
* - edges : Graph description in the form of list of edges.
for (int i = 0; i < edges.size(); ++i) {
* each edge is: From node u to node v with cost w
int u = edges[i].u, v = edges[i].v, w = edges[i].w;
* - root : Id of the node to start the DMST.
edges[i].u = id[u];
* - n : Number of nodes in the graph.
edges[i].v = id[v];
* */
if (id[u] != id[v])
edges[i].w -= lo[v];
int dmst(vector<edge> &edges, int root, int n) {
}
int ans = 0;
cur_nodes = cur_id;
int cur_nodes = n;
root = id[root];
while (true) {
}
vector<int> lo(cur_nodes, inf), pi(cur_nodes, inf);
for (int i = 0; i < edges.size(); ++i) {
return ans;
int u = edges[i].u, v = edges[i].v, w = edges[i].w;
}
if (w < lo[v] and u != v) {
lo[v] = w;
pi[v] = u;
} 4.14 konig’s theorem
}
In any bipartite graph, the number of edges in a maximum matching equals the
lo[root] = 0; number of vertices in a minimum vertex cover
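For example, with the adjacency-matrix matching code from section 4.9 (Max BPM), where w is the left-by-right 0/1 matrix used there (an assumed input here), the sizes follow directly:

int max_matching     = BipartiteMatching( w );                           // section 4.9
int min_vertex_cover = max_matching;                                     // Konig's theorem
int max_independent  = (int)w.size() + (int)w[0].size() - max_matching;  // complement of a minimum cover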
for (int i = 0; i < lo.size(); ++i) {
if (i == root) continue;
4.15 minimum path cover in DAG
if (lo[i] == inf) return -1;
} Given a directed acyclic graph G = (V, E), we are to find the minimum number
int cur_id = 0; of vertex-disjoint paths to cover each vertex in V.
vector<int> id(cur_nodes, -1), mark(cur_nodes, -1); We can construct a bipartite graph G0 = (V out ∪ V in, E 0 ) from G, where :
for (int i = 0; i < cur_nodes; ++i) {
ans += lo[i];
int u = i;
while (u != root and id[u] < 0 and mark[u] != i) { V out = {v ∈ V : v has positive out − degree}
mark[u] = i; V in = {v ∈ V : v has positive in − degree}
u = pi[u];
} E 0 = {(u, v) ∈ V out × V in : (u, v) ∈ E}

Then it can be shown, via König’s theorem, that G’ has a matching of size m int m, w, m1;
if and only if there exists n − m vertex-disjoint paths that cover each vertex in while( cnt ) {
G, where n is the number of vertices in G and m is the maximum cardinality for( m=0; m<n; m++ ) if( !fl[m] ) break;
bipartite mathching in G’. for( int i=0; i<n && !fl[m]; i++ ) {
w = ar[m][i];
if( partners[w-n] == - 1 ) {
Therefore, the problem can be solved by finding the maximum cardinality
partners[w-n] = m;
matching in G’ instead. fl[m] = 1;
NOTE: If the paths are note necesarily disjoints, find the transitive closure cnt--;
and solve the problem for disjoint paths. } else {
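A sketch of the reduction described above, reusing BipartiteMatching and the VI/VVI typedefs from section 4.9; adj (the DAG as an adjacency list) and n are assumed inputs.

int min_path_cover( const vector< vector<int> > &adj, int n ) {
    VVI w( n, VI( n, 0 ) );                 // left copy acts as V out, right copy as V in
    for( int u = 0; u < n; u++ )
        for( int v : adj[u] ) w[u][v] = 1;  // every DAG edge (u, v) becomes an edge of G'
    return n - BipartiteMatching( w );      // minimum number of vertex-disjoint covering paths
}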
m1 = partners[w-n];
4.16 planar graph (euler) if( !check( w, m, m1 ) ) {
partners[w-n] = m;
Euler’s formula states that if a finite, connected, planar graph is drawn in the fl[m] = 1;
plane without any edge intersections, and v is the number of vertices, e is the fl[m1] = 0;
number of edges and f is the number of faces (regions bounded by edges, including }
the outer, infinitely large region), then: }
}
}
f +v =e+2
for( int i=0; i<n; i++ ) {
It can be extended to non connected planar graphs with c connected compo- printf( " (%d %d)", partners[i]+1, i+n+1 );
nents: }
printf( "\n" );
}
f +v =e+c+1

4.17 stable marriage


4.18 two sat (with kosaraju)
int t, n;
int ar[201][101]; /**
int partners[101]; * Given a set of clauses (a1 v a2)^(a2 v a3)....
int fl[101]; * this algorithm find a solution to it set of clauses.
* test: http://lightoj.com/volume_showproblem.php?problem=1251
bool check( int w, int m, int m1 ) { **/
for( int i=0; i<n; i++ ) {
if( ar[w][i] == m1 ) return true; #include<bits/stdc++.h>
if( ar[w][i] == m ) return false; using namespace std;
} #define MAX 100000
return true; #define endl ’\n’
}
vector<int> G[MAX];
void stable_match() { vector<int> GT[MAX];
memset( partners, -1, sizeof partners ); vector<int> Ftime;
memset( fl, 0, sizeof fl ); vector<vector<int> > SCC;
int cnt = n; bool visited[MAX];

int n; * After having the SCC, we must traverse each scc, if in one SCC are -b
y b, there is not a solution.
* Otherwise we build a solution, making the first "node" that we find
void dfs1(int n){ truth and its complement false.
visited[n] = 1; **/

for (int i = 0; i < G[n].size(); ++i) {


int curr = G[n][i]; bool two_sat(vector<int> &val) {
if (visited[curr]) continue; kosaraju();
dfs1(curr); for (int i = 0; i < SCC.size(); ++i) {
} vector<bool> tmpvisited(2 * n, false);
for (int j = 0; j < SCC[i].size(); ++j) {
Ftime.push_back(n); if (tmpvisited[SCC[i][j] ^ 1]) return 0;
} if (val[SCC[i][j]] != -1) continue;
else {
void dfs2(int n, vector<int> &scc) { val[SCC[i][j]] = 0;
visited[n] = 1; val[SCC[i][j] ^ 1] = 1;
scc.push_back(n); }
tmpvisited[SCC[i][j]] = 1;
for (int i = 0;i < GT[n].size(); ++i) { }
int curr = GT[n][i]; }
if (visited[curr]) continue; return 1;
dfs2(curr, scc); }
}
} // Example of use

int main() {
void kosaraju() {
memset(visited, 0, sizeof visited); int m, u, v, nc = 0, t; cin >> t;
// n = "nodes" number, m = clauses number
for (int i = 0; i < 2 * n ; ++i) {
if (!visited[i]) dfs1(i); while (t--) {
} cin >> m >> n;
Ftime.clear();
memset(visited, 0, sizeof visited); SCC.clear();
for (int i = Ftime.size() - 1; i >= 0; i--) { for (int i = 0; i < 2 * n; ++i) {
if (visited[Ftime[i]]) continue; G[i].clear();
vector<int> _scc; GT[i].clear();
dfs2(Ftime[i],_scc); }
SCC.push_back(_scc);
} // (a1 v a2) = (a1 -> a2) = ( a2 -> a1)
} for (int i = 0; i < m ; ++i) {
cin >> u >> v;
/** int t1 = abs(u) - 1;
int t2 = abs(v) - 1;

int p = t1 * 2 + ((u < 0)? 1 : 0); arethe base p expansions of m and n respectively. This uses the convention that
m
n = 0 if m ≤ n.
5.2 big mod

ll big_mod( ll b, ll p ) {
    ll ret = 1;
    for( ; p; p >>= 1 ) {
        if( p&1 ) ret = ( ret * b ) % mod;
        b = ( b * b ) % mod;
    }
    return ret % mod;
}

ll mod_inv( ll b ) {
    return big_mod( b, mod - 2 );
}

5.3 catalan

unsigned long int catalan(unsigned int n){
    // Base case
    if (n <= 1) return 1;

    // catalan(n) is sum of catalan(i)*catalan(n-i-1)
    unsigned long int res = 0;
    for (int i=0; i<n; i++)
        res += catalan(i)*catalan(n-i-1);

    return res;
}

5.4 convolution

typedef long long int LL;
typedef pair<LL, LL> PLL;

inline bool is_pow2(LL x) {
    return (x & (x-1)) == 0;
}
inline int ceil_log2(LL x) {
    int ans = 0;
    --x;
    while (x != 0) {
        x >>= 1;
        ans++;
    }
    return ans;
}

/* Returns the convolution of the two given vectors in time proportional to n*log(n).
 * The number of roots of unity to use, nroots_unity, must be set so that the product
 * of the first nroots_unity primes of the vector nth_roots_unity is greater than the
 * maximum value of the convolution. Never use sizes of vectors bigger than 2^24;
 * if you need to, change the nth roots of unity to appropriate primes for those sizes.
 */
vector<LL> convolve(const vector<LL> &a, const vector<LL> &b, int nroots_unity = 2) {
    int N = 1 << ceil_log2(a.size() + b.size());
    vector<LL> ans(N,0), fA(N), fB(N), fC(N);
    LL modulo = 1;
    for (int times = 0; times < nroots_unity; times++) {
        fill(fA.begin(), fA.end(), 0);
        fill(fB.begin(), fB.end(), 0);
        for (int i = 0; i < a.size(); i++) fA[i] = a[i];
        for (int i = 0; i < b.size(); i++) fB[i] = b[i];
        LL prime = nth_roots_unity[times].first;
        LL inv_modulo = mod_inv(modulo % prime, prime);
        LL normalize = mod_inv(N, prime);
        ntfft(fA, 1, nth_roots_unity[times]);
        ntfft(fB, 1, nth_roots_unity[times]);
        for (int i = 0; i < N; i++) fC[i] = (fA[i] * fB[i]) % prime;
        ntfft(fC, -1, nth_roots_unity[times]);
        for (int i = 0; i < N; i++) {
            LL curr = (fC[i] * normalize) % prime;
            LL k = (curr - (ans[i] % prime) + prime) % prime;
            k = (k * inv_modulo) % prime;
            ans[i] += modulo * k;
        }
        modulo *= prime;
    }
    return ans;
}
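The following usage sketch is not in the original notebook; it assumes the ntfft, mod_inv and nth_roots_unity definitions from section 5.15 are compiled in the same file.

// Multiplies 1 + 2x + 3x^2 by 4 + 5x; expected coefficients: 4 13 22 15
int main() {
    vector<LL> a = {1, 2, 3}, b = {4, 5};
    vector<LL> c = convolve(a, b);
    for (int i = 0; i + 1 < (int)(a.size() + b.size()); i++)
        printf("%lld ", c[i]);
    printf("\n");
    return 0;
}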
5.5 crt

/**
 * Chinese remainder theorem.
 * Find z such that z % x[i] = a[i] for all i.
 * */
long long crt(vector<long long> &a, vector<long long> &x) {
    long long z = 0;
    long long n = 1;
    for (int i = 0; i < x.size(); ++i)
        n *= x[i];

    for (int i = 0; i < a.size(); ++i) {
        long long tmp = (a[i] * (n / x[i])) % n;
        tmp = (tmp * mod_inv(n / x[i], x[i])) % n;
        z = (z + tmp) % n;
    }

    return (z + n) % n;
}
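Usage sketch (not from the original listings): the moduli must be pairwise coprime, and mod_inv is the two-argument version from section 5.12.

// z % 3 == 2 and z % 5 == 3  ->  z = 8 (unique modulo 15)
vector<long long> rem = {2, 3}, mods = {3, 5};
long long z = crt(rem, mods);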
5.6 cumulative sum of divisors

/**
The function SOD(n) (sum of divisors) is defined as the summation of all the
actual divisors of an integer number n (1 and n itself excluded). For example,

    SOD(24) = 2+3+4+6+8+12 = 35.

The function CSOD(n) (cumulative SOD) of an integer n is defined as below:

    csod(n) = \sum_{i = 1}^{n} sod(i)

It can be computed in O(sqrt(n)):
*/
long long csod(long long n) {
    long long ans = 0;
    for (long long i = 2; i * i <= n; ++i) {
        long long j = n / i;
        ans += (i + j) * (j - i + 1) / 2;
        ans += i * (j - i);
    }
    return ans;
}

5.7 discrete logarithm

Let A, B and n be integer numbers. The function below finds x such that A^x = B (mod n)
with the baby-step giant-step method; n is assumed prime, because the inverse of a^m
is taken via Fermat's little theorem.

// Computes x such that a ^ x = b (mod n).
long long d_log(long long a, long long b, long long n) {
    long long m = ceil(sqrt(n));
    long long aj = 1;
    map<long long, long long> M;
    for (int i = 0; i < m; ++i) {
        if (!M.count(aj))
            M[aj] = i;
        aj = (aj * a) % n;
    }

    long long coef = mod_pow(a, n - 2, n);
    coef = mod_pow(coef, m, n);
    // coef = a ^ (-m)
    long long gamma = b;
    for (int i = 0; i < m; ++i) {
        if (M.count(gamma)) {
            return i * m + M[gamma];
        } else {
            gamma = (gamma * coef) % n;
        }
    }
    return -1;
}

5.8 ext euclidean

void ext_euclid(long long a, long long b, long long &x, long long &y, long long &g) {
    x = 0, y = 1, g = b;
    long long m, n, q, r;
    for (long long u = 1, v = 0; a != 0; g = a, a = r) {
        q = g / a, r = g % a;
        m = x - u * q, n = y - v * q;
        x = u, y = v, u = m, v = n;
    }
}

5.9 fibonacci properties

    k = A - B                                                  (1)

    F_A F_B = F_{k+1} F_B^2 + F_k F_B F_{B-1}                  (2)

    \sum_{i=0}^{n} F_i^2 = F_{n+1} F_n                         (3)

ev(n) = 1 if n is even, 0 otherwise.

    \sum_{i=0}^{n} F_i F_{i+1} = F_{n+1}^2 - ev(n)             (4)

    \sum_{i=0}^{n} F_i F_{i-1} = \sum_{i=0}^{n-1} F_i F_{i+1}  (5)
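A small brute-force check of identities (3) and (4) (a sketch, not part of the original notebook; assumes <cassert> is included):

long long F[40];
void check_fib_identities() {
    F[0] = 0; F[1] = 1;
    for (int i = 2; i < 40; i++) F[i] = F[i-1] + F[i-2];
    for (int n = 1; n <= 37; n++) {
        long long s2 = 0, s3 = 0;
        for (int i = 0; i <= n; i++) { s2 += F[i] * F[i]; s3 += F[i] * F[i+1]; }
        assert(s2 == F[n+1] * F[n]);                           // identity (3)
        assert(s3 == F[n+1] * F[n+1] - (n % 2 == 0 ? 1 : 0));  // identity (4)
    }
}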
5.10 highest exponent factorial

int highest_exponent(int p, const int &n){
    int ans = 0;
    int t = p;
    while(t <= n){
        ans += n/t;
        t *= p;
    }
    return ans;
}
5.11 miller rabin

const int rounds = 20;

// checks whether a is a witness that n is not prime, 1 < a < n
bool witness(long long a, long long n) {
    // check as in Miller Rabin Primality Test described
    long long u = n - 1;
    int t = 0;
    while (u % 2 == 0) {
        t++;
        u >>= 1;
    }
    long long next = mod_pow(a, u, n);
    if (next == 1) return false;
    long long last;
    for (int i = 0; i < t; ++i) {
        last = next;
        next = mod_mul(last, last, n);
        if (next == 1) {
            return last != n - 1;
        }
    }
    return next != 1;
}

// Checks if a number is prime with prob 1 - 1 / (2 ^ it)
// D(miller_rabin(99999999999999997LL) == 1);
// D(miller_rabin(9999999999971LL) == 1);
// D(miller_rabin(7907) == 1);
bool miller_rabin(long long n, int it = rounds) {
    if (n <= 1) return false;
    if (n == 2) return true;
    if (n % 2 == 0) return false;
    for (int i = 0; i < it; ++i) {
        long long a = rand() % (n - 1) + 1;
        if (witness(a, n)) {
            return false;
        }
    }
    return true;
}

5.12 mod inv

long long mod_inv(long long n, long long m) {
    long long x, y, gcd;
    ext_euclid(n, m, x, y, gcd);
    if (gcd != 1)
        return 0;
    return (x + m) % m;
}

5.13 mod mul

// Computes (a * b) % mod
long long mod_mul(long long a, long long b, long long mod) {
    long long x = 0, y = a % mod;
    while (b > 0) {
        if (b & 1)
            x = (x + y) % mod;
        y = (y * 2) % mod;
        b /= 2;
    }
    return x % mod;
}

5.14 mod pow

// Computes ( a ^ exp ) % mod.
long long mod_pow(long long a, long long exp, long long mod) {
    long long ans = 1;
    while (exp > 0) {
        if (exp & 1)
            ans = mod_mul(ans, a, mod);
        a = mod_mul(a, a, mod);
        exp >>= 1;
    }
    return ans;
}
5.15 number theoretic transform

typedef long long int LL;
typedef pair<LL, LL> PLL;

/* The following vector of pairs contains pairs (prime, generator)
 * where the prime has an Nth root of unity for N being a power of two.
 * The generator is a number g s.t. g^(p-1)=1 (mod p)
 * but is different from 1 for all smaller powers */
vector<PLL> nth_roots_unity {
    {1224736769,330732430},{1711276033,927759239},{167772161,167489322},
    {469762049,343261969},{754974721,643797295},{1107296257,883865065}};

PLL ext_euclid(LL a, LL b) {
    if (b == 0)
        return make_pair(1,0);
    pair<LL,LL> rc = ext_euclid(b, a % b);
    return make_pair(rc.second, rc.first - (a / b) * rc.second);
}

//returns -1 if there is no unique modular inverse
LL mod_inv(LL x, LL modulo) {
    PLL p = ext_euclid(x, modulo);
    if ( (p.first * x + p.second * modulo) != 1 )
        return -1;
    return (p.first+modulo) % modulo;
}

//Number theory fft. The size of a must be a power of 2
void ntfft(vector<LL> &a, int dir, const PLL &root_unity) {
    int n = a.size();
    LL prime = root_unity.first;
    LL basew = mod_pow(root_unity.second, (prime-1) / n, prime);
    if (dir < 0) basew = mod_inv(basew, prime);
    for (int m = n; m >= 2; m >>= 1) {
        int mh = m >> 1;
        LL w = 1;
        for (int i = 0; i < mh; i++) {
            for (int j = i; j < n; j += m) {
                int k = j + mh;
                LL x = (a[j] - a[k] + prime) % prime;
                a[j] = (a[j] + a[k]) % prime;
                a[k] = (w * x) % prime;
            }
            w = (w * basew) % prime;
        }
        basew = (basew * basew) % prime;
    }
    int i = 0;
    for (int j = 1; j < n - 1; j++) {
        for (int k = n >> 1; k > (i ^= k); k >>= 1);
        if (j < i) swap(a[i], a[j]);
    }
}

5.16 pollard rho factorize

long long pollard_rho(long long n) {
    long long x, y, i = 1, k = 2, d;
    x = y = rand() % n;
    while (1) {
        ++i;
        x = mod_mul(x, x, n);
        x += 2;
        if (x >= n) x -= n;
        if (x == y) return 1;
        d = __gcd(abs(x - y), n);
        if (d != 1) return d;
        if (i == k) {
            y = x;
            k *= 2;
        }
    }
    return 1;
}

// Returns a list with the prime divisors of n
vector<long long> factorize(long long n) {
    vector<long long> ans;
    if (n == 1)
        return ans;
    if (miller_rabin(n)) {
        ans.push_back(n);
    } else {
        long long d = 1;
        while (d == 1)
            d = pollard_rho(n);
        vector<long long> dd = factorize(d);
        ans = factorize(n / d);
        for (int i = 0; i < dd.size(); ++i)
            ans.push_back(dd[i]);
    }
    return ans;
}

5.17 sievephi

void sievephi()
{
    mark[1] = 1;
    for(ll i = 1; i < Mx; i++)
    {
        phi[i] = i;
        if(!(i & 1)) mark[i] = 1, phi[i] /= 2;
    }
    mark[2] = 0;

    for(ll i = 3; i < Mx; i+=2)
    {
        if(!mark[i])
        {
            phi[i] = phi[i] - 1;
            for(ll j = 2 * i; j < Mx; j += i)
            {
                mark[j] = 1;
                phi[j] /= i;
                phi[j] *= i - 1;
            }
        }
    }
}

5.18 sigma function

The sigma function is defined as

    \sigma_x(n) = \sum_{d|n} d^x

When x = 0 it is called the divisor function, which counts the number of positive
divisors of n. Now we are interested in finding

    \sum_{d|n} \sigma_0(d)

If n is written in its prime factorization

    n = \prod_{i=1}^{k} p_i^{e_i}

we can demonstrate that

    \sum_{d|n} \sigma_0(d) = \prod_{i=1}^{k} g(e_i + 1)

where g(x) is the sum of the first x positive numbers: g(x) = (x * (x + 1)) / 2.
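A direct translation of the statement above (a sketch, not one of the original listings), taking the exponents e_i of the prime factorization as input:

// sum over d | n of sigma_0(d), given the exponents e_i of n's factorization
long long sum_sigma0_over_divisors(const vector<long long> &e) {
    long long res = 1;
    for (int i = 0; i < (int)e.size(); ++i) {
        long long x = e[i] + 1;
        res *= x * (x + 1) / 2;     // g(e_i + 1)
    }
    return res;
}
// example: n = 12 = 2^2 * 3  ->  g(3) * g(2) = 6 * 3 = 18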
5.19 sigma

/// for i to n
/// ( pi^(xi+1) - 1 ) / ( pi - 1 ) )

ll ans() {
    ll ret = 1, cnt, h;
    for( int i=0; i < prime.size() && prime[i]*prime[i] <= n; i++ ) {
        if( n % prime[i] == 0 ) {
            cnt = 0;
            while( n % prime[i] == 0 ) {
                n /= prime[i];
                cnt++;
            }
            cnt *= m;
            cnt++;
            h = ( ( ( big_mod( prime[i], cnt ) - 1LL + mod ) % mod ) *
                big_mod( prime[i] - 1LL, mod - 2LL ) ) % mod;
            ret = ( ret * h ) % mod;
            // cerr << ( big_mod( prime[i], cnt + 1 ) - 1 ) << " - " << (
            //     powl( prime[i], cnt + 1 ) - 1 ) << "\n";
        }
    }
    if( n > 1 ) {
        h = ( ( ( big_mod( n, m + 1LL ) - 1LL + mod ) % mod ) * big_mod( n
            - 1LL, mod - 2LL ) ) % mod;
        ret = ( ret * h ) % mod;
    }
    return ret % mod;
}

5.20 totient sieve

for (int i = 1; i < MN; i++)
    phi[i] = i;
for (int i = 1; i < MN; i++)
    if (!sieve[i]) // is prime
        for (int j = i; j < MN; j += i)
            phi[j] -= phi[j] / i;

5.21 totient

long long totient(long long n) {
    if (n == 1) return 0;
    long long ans = n;
    for (int i = 0; primes[i] * primes[i] <= n; ++i) {
        if ((n % primes[i]) == 0) {
            while ((n % primes[i]) == 0) n /= primes[i];
            ans -= ans / primes[i];
        }
    }
    if (n > 1) {
        ans -= ans / n;
    }
    return ans;
}

6 Matrix

6.1 Gaussian Elimination

/**
GAUSSIAN ELIMINATION

Definition:
** for equation -> a1X + b1Y + c1Z = d1
** for equation -> a2X + b2Y + c2Z = d2
** for equation -> a3X + b3Y + c3Z = d3
** for equation -> a4X + b4Y + c4Z = d4
we store the d in the B array
we store the a, b, c in the A array
**/

double A[101][101], B[101];

void gausian(int n) {
    int i,j,k,l,e,mxIdx;
    double x,y,cns,cnt,ans,mxV;
    for (i=0,e=0; i<n; i++) {
        mxV = A[e][e];
        mxIdx = e;
        for (j=e; j<n; j++) {
            if (fabs(A[j][i]) > fabs(mxV)) {
                mxV = A[j][i];
                mxIdx = j;
            }
        }
        if (fabs(mxV) < EPS) {
            e++;
            continue;
        }
        for (j=0; j<n; j++) {
            swap(A[mxIdx][j],A[e][j]);
        }
        swap(B[mxIdx],B[e]);
        for (k=0; k<n; k++) A[e][k] /= mxV;
        B[e] /= mxV;
        for (j=0; j<n; j++) {
            if (j != e) {
                x = A[j][e];
                if (fabs(x) >= EPS) {
                    for (k=0; k<n; k++) {
                        A[j][k] -= (A[e][k] * x);
                    }
                    B[j] -= (B[e] * x);
                }
            }
        }
        e++;
    }
    for (i=99; i>=0; i--) {
        for (j=99; j>=0; j--) {
            if (fabs(A[i][j]) >= EPS && i != j) {
                B[i] -= (A[i][j] * B[j]);
            }
            if (i == j && fabs(A[i][j]) >= EPS) {
                B[i] /= (A[i][j]);
            }
        }
    }
}
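A minimal usage sketch (not part of the original notebook); it assumes EPS is defined, e.g. const double EPS = 1e-9, and the usual headers.

int main() {
    A[0][0] = 1; A[0][1] = 1;  B[0] = 3;   // x + y = 3
    A[1][0] = 1; A[1][1] = -1; B[1] = 1;   // x - y = 1
    gausian(2);
    printf("%.6f %.6f\n", B[0], B[1]);     // solution ends up in B: 2.000000 1.000000
    return 0;
}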
6.2 Matrix Expo

struct Matrix {
    static const int mat_sz = 2;
    int a[mat_sz][mat_sz];
    void clear() {
        memset(a, 0, sizeof(a));
    }
    void one() {
        for( int i=0; i<mat_sz; i++ ) {
            for( int j=0; j<mat_sz; j++ ) {
                a[i][j] = i == j;
            }
        }
    }
    Matrix operator + (const Matrix &b) const {
        Matrix tmp;
        tmp.clear();
        for (int i = 0; i < mat_sz; i++) {
            for (int j = 0; j < mat_sz; j++) {
                tmp.a[i][j] = a[i][j] + b.a[i][j];
                if (tmp.a[i][j] >= mod) {
                    tmp.a[i][j] -= mod;
                }
            }
        }
        return tmp;
    }
    Matrix operator * (const Matrix &b) const {
        Matrix tmp;
        tmp.clear();
        for (int i = 0; i < mat_sz; i++) {
            for (int j = 0; j < mat_sz; j++) {
                for (int k = 0; k < mat_sz; k++) {
                    tmp.a[i][k] += (long long)a[i][j] * b.a[j][k] % mod;
                    if (tmp.a[i][k] >= mod) {
                        tmp.a[i][k] -= mod;
                    }
                }
            }
        }
        return tmp;
    }
    Matrix pw(int x) {
        Matrix ans, num = *this;
        ans.one();
        while (x > 0) {
            if (x & 1) {
                ans = ans * num;
            }
            num = num * num;
            x >>= 1;
        }
        return ans;
    }
};
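Usage sketch (assumed, not from the original): the n-th Fibonacci number modulo mod from the transfer matrix [[1,1],[1,0]].

long long fib_mod(int n) {
    Matrix base;
    base.clear();
    base.a[0][0] = base.a[0][1] = base.a[1][0] = 1;   // [[1,1],[1,0]]
    Matrix r = base.pw(n);
    return r.a[0][1];                                 // pw(n) = [[F(n+1),F(n)],[F(n),F(n-1)]]
}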
7 Misc

7.1 IO
inline int RI() {
    int ret = 0, flag = 1, ip = getchar();
    for(; ip < 48 || ip > 57; ip = getchar()) {
        if(ip == 45) {
            flag = -1;
            ip = getchar();
            break;
        }
    }
    for(; ip > 47 && ip < 58; ip = getchar())
        ret = ret * 10 + ip - 48;
    return flag * ret;
}

8 String

8.1 KMP

/// complexity : O( n + m )
/// solution reference: LOJ 1255 Substring Frequency
#include <bits/stdc++.h>
using namespace std;

int t;
const int mx = 1e6 + 10;
char a[mx], b[mx];
int table[mx], lenA, lenB;

void hash_table( char *s ) {
    table[ 0 ] = 0;
    int i = 1, j = 0;
    while( i < lenB ) {
        if( s[i] == s[j] ) {
            j++;
            table[ i ] = j;
            i++;
        } else {
            if( j ) {
                j = table[ j - 1 ];
            } else {
                table[ i ] = 0;
                i++;
            }
        }
    }
}

int kmp( char *s, char *m ) {
    hash_table( m );
    int i = 0, j = 0;
    int ans = 0;
    while( i < lenA ) {
        while( i < lenA && j < lenB && s[i] == m[j] ) {
            i++;
            j++;
        }
        if( j == lenB ) {
            j = table[ j - 1 ];
            ans++;
        } else if( i < lenA && s[i] != m[j] ) {
            if( j ) {
                j = table[ j - 1 ];
            } else {
                i++;
            }
        }
    }
    return ans;
}

int main() {
#ifdef LU_SERIOUS
    freopen("in.txt", "r", stdin);
#endif // LU_SERIOUS
    scanf( "%d", &t );
    for(int cs=1; cs<=t; cs++) {
        lenA = 0; lenB = 0;
        scanf("%s", a);
        scanf("%s", b);
        lenA = strlen( a );
        lenB = strlen( b );
        printf( "Case %d: %d\n", cs, kmp( a, b ) );
    }
    return 0;
}
8.2 Manacher

int call(char *inp, char *str, int *F, vector< pair<int,int> > &vec){
    // inp is the actual string
    // str is the modified string with double size of inp
    // F[i] contains the length of the palindrome centered at index i
    // Every element of vec contains starting and ending positions of palindromes
    int len=0;
    str[len++]='*';
    for(int i=0; inp[i]; i++){
        str[len++]=inp[i];
        str[len++]='*';
    }
    str[len]='\0';
    int c=0, r=0, ans=0;
    for(int i=1; i < len-1; i++){
        int _i=c-(i-c);
        if(r > i) F[i]=min(F[_i],r-i);
        else F[i]=0;
        while(i-1-F[i]>=0 && str[i-1-F[i]]==str[i+1+F[i]]) {
            F[i]++;
        }
        if(i+F[i] > r) r=i+F[i], c=i;
        ans=max(ans,F[i]);
        vec.push_back(make_pair(i-F[i],i+F[i]));
    }
    return ans;
}

8.3 Z algo

int L = 0, R = 0;
for( int i = 1; i < n; i++ ) {
    if ( i > R ) {
        L = R = i;
        while ( R < n && s[R-L] == s[R] ) R++;
        z[i] = R-L; R--;
    } else {
        int k = i-L;
        if ( z[k] < R-i+1 ) z[i] = z[k];
        else {
            L = i;
            while ( R < n && s[R-L] == s[R] ) R++;
            z[i] = R-L; R--;
        }
    }
}
int maxz = 0, res = 0;
for ( int i = 1; i < n; i++ ) {
    if ( z[i] == n-i && maxz >= n-i ) { res = n-i; break; }
    maxz = max( maxz, z[i] );
}