Math Modelization HMU HANDOUT
Georges KOEPFLER
Georges.Koepfler@parisdescartes.fr
1 Introduction
2 Curvature and crowd density
3 Modelling camera movement
4 Occlusion in video
5 Video segmentation
[Figure: a gray-level image is an array of values on a discrete grid; zooming into a small window shows the individual pixel values (here between 0 and about 250).]
\[ u(x, y) = \alpha \left( 1 - e^{-\frac{x^2 + y^2}{2\beta^2}} \right). \]
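As a quick illustration (not part of the handout), the following Python sketch evaluates this radial function on a discrete grid, in the spirit of the surface plots that accompanied it; the values chosen for α and β are arbitrary.

```python
import numpy as np

def u(x, y, alpha=1.0, beta=0.8):
    """Radial bump u(x, y) = alpha * (1 - exp(-(x^2 + y^2) / (2 * beta^2)))."""
    return alpha * (1.0 - np.exp(-(x**2 + y**2) / (2.0 * beta**2)))

# Sample the function on a discrete grid, as in the surface plots.
xs = np.linspace(-2.0, 2.0, 20)
ys = np.linspace(-2.0, 2.0, 20)
X, Y = np.meshgrid(xs, ys)
Z = u(X, Y)
print(Z.shape, Z.max())
```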
[Figure: surface plots z = f(x, y) of this function over [−2, 2]² and the corresponding gray-level image.]
Modeling crowd density
Crowd density
Curvature
We propose to use the curvature of isophotes in order to characterize images containing a lot of people.
Definitions
Let $X(s) \in \mathbb{R}^2$ be a plane curve parameterized by arc length.
Thus the tangent vector $T(s) = X'(s)$ is unitary: $\|T(s)\| = 1$.
With $N(s)$ the unit normal vector we get the Frenet frame, an orthonormal basis of $\mathbb{R}^2$.
One has
\[ \frac{dT}{ds}(s) = \kappa(s)\, N(s), \]
where $\kappa(s)$ is the signed curvature of the curve $X$.
If $X(s) = \begin{pmatrix} x(s) \\ y(s) \end{pmatrix}$, then $\kappa(s) = x'(s)\, y''(s) - x''(s)\, y'(s)$.
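As a sanity check of these definitions (an illustrative sketch, not from the handout), the code below approximates the signed curvature of an arc-length parameterized circle by finite differences and recovers κ = 1/r.

```python
import numpy as np

# Arc-length parameterization of a circle of radius r: X(s) = r*(cos(s/r), sin(s/r)),
# whose signed curvature should be kappa = 1/r.
r = 2.0
s = np.linspace(0.0, 2 * np.pi * r, 2000)
x, y = r * np.cos(s / r), r * np.sin(s / r)

# Finite-difference derivatives with respect to the arc length s.
ds = s[1] - s[0]
xp, yp = np.gradient(x, ds), np.gradient(y, ds)
xpp, ypp = np.gradient(xp, ds), np.gradient(yp, ds)

kappa = xp * ypp - xpp * yp      # signed curvature for an arc-length parameterization
print(kappa[100:-100].mean())    # ~ 0.5 = 1/r (interior points, away from boundary effects)
```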
Curvature of isophotes
But how do we obtain the contour lines and compute $\kappa(s)$ from an image?
Let $u : \Omega \subset \mathbb{R}^2 \longrightarrow \mathbb{R}$ be the image function.
Then the isophotes are defined as
\[ I_c = \{ (x, y) \ /\ u(x, y) = c \}. \]
It is easy to show that the scalar product $\langle \nabla u(x, y), X'(s) \rangle = 0$ along an isophote, i.e. the gradient of $u$ is orthogonal to its isophotes.
Curvature of isophotes
Now $T(s) = X'(s) = \begin{pmatrix} x'(s) \\ y'(s) \end{pmatrix}$ with $\|T\|^2 = (x')^2 + (y')^2 = 1$,
and $N(s) = \begin{pmatrix} y'(s) \\ -x'(s) \end{pmatrix}$ with $\|N\| = 1$ and $T \perp N$.
Thus, as $\nabla u \perp T$, we have $\nabla u = \alpha N$:
\[ \begin{pmatrix} u_x \\ u_y \end{pmatrix} = \begin{pmatrix} \alpha y' \\ -\alpha x' \end{pmatrix},
   \quad \text{with } \alpha = \|\nabla u\| = \sqrt{u_x^2 + u_y^2}. \]
Curvature of isophotes
Combining these relations, the curvature of the isophote through $(x, y)$ can be expressed (up to the choice of orientation) in terms of derivatives of $u$ only:
\[ \mathrm{curv}(u) = \operatorname{div}\!\left( \frac{\nabla u}{\|\nabla u\|} \right)
 = \frac{u_{xx} u_y^2 - 2\, u_x u_y u_{xy} + u_{yy} u_x^2}{(u_x^2 + u_y^2)^{3/2}}. \]
Curvature in numerical images
Using finite difference schemes, it is thus possible to compute the curvature of the isophotes of an image.
A regularization/smoothing is needed in order to overcome the non-smoothness of the numerical grid.
A possibility is to smooth using, for example, the mean curvature PDE
\[ \frac{\partial u}{\partial t} = \|\nabla u\|\, \mathrm{curv}(u). \]
This gives regularization at high curvature points, e.g. corners.
The finite difference scheme may be written
\[ u^n = u^{n-1} + \delta t\, \|\nabla u^{n-1}\|\, \mathrm{curv}(u^{n-1}). \]
We compute the absolute value of the curvature of the smoothed image:
\[ \mathrm{Curv}(u^n) = \sum_{x,y} |\mathrm{curv}(u^n)(x, y)|. \]
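The following Python sketch puts these three steps together: the isophote curvature formula recalled above via centered finite differences, the explicit mean curvature smoothing scheme, and the global measure Curv(uⁿ). It is an illustration under my own discretization choices (step sizes, synthetic disk image), not the author's code.

```python
import numpy as np

def curv(u, eps=1e-8):
    """Isophote curvature curv(u) = div(grad u / |grad u|) via centered finite differences."""
    uy, ux = np.gradient(u)                 # np.gradient returns d/drow, d/dcol
    uyy, uyx = np.gradient(uy)
    uxy, uxx = np.gradient(ux)
    num = uxx * uy**2 - 2.0 * ux * uy * uxy + uyy * ux**2
    den = (ux**2 + uy**2) ** 1.5 + eps      # eps avoids division by zero in flat areas
    return num / den

def mean_curvature_smoothing(u, n_iter=20, dt=0.1):
    """Explicit scheme u^n = u^{n-1} + dt * |grad u^{n-1}| * curv(u^{n-1})."""
    u = u.astype(float).copy()
    for _ in range(n_iter):
        uy, ux = np.gradient(u)
        u += dt * np.sqrt(ux**2 + uy**2) * curv(u)
    return u

def Curv(u):
    """Global curvature measure: sum over pixels of |curv(u)|."""
    return np.abs(curv(u)).sum()

# Example on a synthetic image: a bright disk on a dark background.
yy, xx = np.mgrid[0:128, 0:128]
disk = (((xx - 64)**2 + (yy - 64)**2) < 30**2).astype(float) * 255.0
print(Curv(mean_curvature_smoothing(disk)))
```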
Modelization of camera movement
For illustration we will apply the model to some video data. For this, approximations of the theoretical model have to be made.
[Figure: pinhole camera model with optical center $C$, orthonormal frame $(i, j, k)$, retinal plane $R$ at focal distance $f_c$ and principal point $c$.]
- $M(X, Y, Z)$ in $(C, i, j, k)$;
- $(c, i, j)$ basis of the retinal plane $R$;
- $(i, j, k)$ is an orthonormal basis of $\mathbb{R}^3$;
- $m(x, y)$, projection of $M$ onto $R$:
\[ x = f_c \frac{X}{Z}, \qquad y = f_c \frac{Y}{Z}. \]
\[ \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
 = \underbrace{\begin{pmatrix} f_c & 0 & 0 & 0 \\ 0 & f_c & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}}_{P}
   \begin{pmatrix} X/Z \\ Y/Z \\ 1 \\ 1/Z \end{pmatrix}. \]
In pixel coordinates,
\[ \begin{pmatrix} u \\ v \end{pmatrix}
 = \begin{pmatrix} m_u & 0 \\ 0 & m_v \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}
 + \begin{pmatrix} u_0 \\ v_0 \end{pmatrix}, \]
so that the full projection matrix is
\[ P = \begin{pmatrix} m_u & 0 & u_0 \\ 0 & m_v & v_0 \\ 0 & 0 & 1 \end{pmatrix}
       \begin{pmatrix} f_c & 0 & 0 & 0 \\ 0 & f_c & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}
       \begin{pmatrix} R & t \\ 0 & 1 \end{pmatrix}. \]
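For concreteness, here is a minimal Python sketch of these projection formulas, following the slide's convention of applying the canonical matrix to $(X/Z, Y/Z, 1, 1/Z)$; the intrinsic parameters and the 3D point are made-up values.

```python
import numpy as np

# Illustrative intrinsic parameters (not taken from the handout).
fc, mu, mv, u0, v0 = 1.2, 800.0, 800.0, 320.0, 240.0

proj = np.array([[fc, 0, 0, 0],
                 [0, fc, 0, 0],
                 [0,  0, 1, 0]], dtype=float)   # the canonical projection matrix P of the slide

X, Y, Z = 0.5, 0.2, 4.0
x, y, _ = proj @ np.array([X / Z, Y / Z, 1.0, 1.0 / Z])   # retinal coords x = fc X/Z, y = fc Y/Z

# Pixel coordinates (u, v) = diag(mu, mv) (x, y) + (u0, v0).
u = mu * x + u0
v = mv * y + v0
print((x, y), (u, v))
```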
[Figure: camera displacement: the frame $(C, i, j, k)$ is moved to $(C', R(i), R(j), R(k))$; a scene point $M$ projects to $m$ in the first retinal plane and to $m'$ in the second.]
- displacement $D = (R, t)$;
- translation $t = \overrightarrow{CC'}$;
- rotation $R$, axis containing $C$;
- $M' = RM + t$;
- $(C, i, j, k)$ transformed into $(C', R(i), R(j), R(k))$.
Writing $a$, $b$, $c$ for the columns of $R$, the coordinates $(X', Y', Z')$ of $M$ in the new frame satisfy
\[ X' \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix}
 + Y' \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}
 + Z' \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix}
 = - \begin{pmatrix} t_1 \\ t_2 \\ t_3 \end{pmatrix} + \begin{pmatrix} X \\ Y \\ Z \end{pmatrix}, \]
i.e.
\[ R \begin{pmatrix} X' \\ Y' \\ Z' \end{pmatrix}
 = - \begin{pmatrix} t_1 \\ t_2 \\ t_3 \end{pmatrix} + \begin{pmatrix} X \\ Y \\ Z \end{pmatrix}
 \iff
 \begin{pmatrix} X' \\ Y' \\ Z' \end{pmatrix}
 = R^t \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} - R^t \begin{pmatrix} t_1 \\ t_2 \\ t_3 \end{pmatrix}. \]
Camera Motion: image transformation
Thus
\[ x' = \frac{X'}{Z'} = \frac{a_1 X + a_2 Y + a_3 Z - (a_1 t_1 + a_2 t_2 + a_3 t_3)}{c_1 X + c_2 Y + c_3 Z - (c_1 t_1 + c_2 t_2 + c_3 t_3)}, \qquad
   y' = \frac{Y'}{Z'} = \frac{b_1 X + b_2 Y + b_3 Z - (b_1 t_1 + b_2 t_2 + b_3 t_3)}{c_1 X + c_2 Y + c_3 Z - (c_1 t_1 + c_2 t_2 + c_3 t_3)}, \]
and dividing by $Z$:
\[ x' = \frac{a_1 x + a_2 y + a_3 - \dfrac{a_1 t_1 + a_2 t_2 + a_3 t_3}{Z(x,y)}}{c_1 x + c_2 y + c_3 - \dfrac{c_1 t_1 + c_2 t_2 + c_3 t_3}{Z(x,y)}}, \qquad
   y' = \frac{b_1 x + b_2 y + b_3 - \dfrac{b_1 t_1 + b_2 t_2 + b_3 t_3}{Z(x,y)}}{c_1 x + c_2 y + c_3 - \dfrac{c_1 t_1 + c_2 t_2 + c_3 t_3}{Z(x,y)}}. \]
In the other direction,
\[ x = \frac{X}{Z} = \frac{a_1 x' + b_1 y' + c_1 + \dfrac{t_1}{Z'(x',y')}}{a_3 x' + b_3 y' + c_3 + \dfrac{t_3}{Z'(x',y')}}, \qquad
   y = \frac{Y}{Z} = \frac{a_2 x' + b_2 y' + c_2 + \dfrac{t_2}{Z'(x',y')}}{a_3 x' + b_3 y' + c_3 + \dfrac{t_3}{Z'(x',y')}}, \]
so that
\[ g(x', y') = f\!\left( \frac{a_1 x' + b_1 y' + c_1 + t_1/Z'(x',y')}{a_3 x' + b_3 y' + c_3 + t_3/Z'(x',y')},\;
                         \frac{a_2 x' + b_2 y' + c_2 + t_2/Z'(x',y')}{a_3 x' + b_3 y' + c_3 + t_3/Z'(x',y')} \right), \]
i.e. $f(x, y) = g(x', y') = g \circ \psi(x, y)$.
Note: $C(L)$ and $C'(L)$ are constants depending on the image plane size.
Camera Motion: depth approximation
In practice, if the filmed objects are rather far from the camera, the hypotheses of the theorem are valid.
Thus $\varphi$ is a projective transform with matrix
\[ M_\varphi = \begin{pmatrix} a_1 & b_1 & c_1 + \tilde t_1 \\ a_2 & b_2 & c_2 + \tilde t_2 \\ a_3 & b_3 & c_3 + \tilde t_3 \end{pmatrix}, \qquad \text{with } \tilde t = t/Z_0, \]
and $\psi$ is a projective transform with matrix
\[ M_\psi = \begin{pmatrix} a_1 & a_2 & a_3 - \langle \tilde t, R(i) \rangle \\ b_1 & b_2 & b_3 - \langle \tilde t, R(j) \rangle \\ c_1 & c_2 & c_3 - \langle \tilde t, R(k) \rangle \end{pmatrix}
 = \begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{pmatrix}
   \begin{pmatrix} 1 & 0 & -\tilde t_1 \\ 0 & 1 & -\tilde t_2 \\ 0 & 0 & 1 - \tilde t_3 \end{pmatrix} = R^{-1} \widetilde H. \]
The factorization $M_\psi = R^{-1} \widetilde H$ is unique.
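A quick numerical sanity check of this factorization (the rotation and the normalized translation t̃ are arbitrary illustrative values, not from the handout):

```python
import numpy as np

# Arbitrary rotation R (columns a, b, c = R(i), R(j), R(k)) and t_tilde = t / Z0.
th = 0.3
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])
a, b, c = R[:, 0], R[:, 1], R[:, 2]
t_tilde = np.array([0.05, -0.02, 0.01])

# M_psi as on the slide: third column shifted by -<t_tilde, R(.)>.
M_psi = np.array([
    [a[0], a[1], a[2] - t_tilde @ a],
    [b[0], b[1], b[2] - t_tilde @ b],
    [c[0], c[1], c[2] - t_tilde @ c],
])

# Factorization M_psi = R^{-1} H_tilde.
H_tilde = np.array([[1.0, 0.0, -t_tilde[0]],
                    [0.0, 1.0, -t_tilde[1]],
                    [0.0, 0.0, 1.0 - t_tilde[2]]])
print(np.allclose(M_psi, np.linalg.inv(R) @ H_tilde))   # True
```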
Projective group: definition and properties
\[ PG_2(\mathbb{R}) = \left\{ \phi : \mathbb{R}^2 \to \mathbb{R}^2 \ \text{such that}\ \forall (x, y) \in \mathbb{R}^2,\
   \phi(x, y) = \left( \frac{\alpha_1 x + \beta_1 y + \gamma_1}{\alpha_3 x + \beta_3 y + \gamma_3},\
                        \frac{\alpha_2 x + \beta_2 y + \gamma_2}{\alpha_3 x + \beta_3 y + \gamma_3} \right),\
   \text{with } \begin{vmatrix} \alpha_1 & \beta_1 & \gamma_1 \\ \alpha_2 & \beta_2 & \gamma_2 \\ \alpha_3 & \beta_3 & \gamma_3 \end{vmatrix} \neq 0 \right\}. \]
$\varphi \in A \longleftrightarrow$ displacement $D = (R, t)$;
$\varphi^{-1} = \psi \in A \longleftrightarrow$ displacement $D^{-1} = (R^t, -R^t t)$.
Composition of displacements $D = D_1 \circ D_2$, for $D_1 = (R_1, t_1)$ and $D_2 = (R_2, t_2)$:
\[ D = D_1 \circ D_2 = (R_1 R_2,\ R_1 t_2 + t_1), \qquad
   \begin{pmatrix} R_1 & t_1 \\ 0 & 1 \end{pmatrix}
   \begin{pmatrix} R_2 & t_2 \\ 0 & 1 \end{pmatrix}
 = \begin{pmatrix} R_1 R_2 & t_1 + R_1 t_2 \\ 0 & 1 \end{pmatrix}. \]
The registration group is $(A, \star)$: the law $\star$ is deduced from the composition law $\circ$ of displacements through the isomorphism defined above.
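The correspondence between displacements and homogeneous matrices can be checked numerically; the sketch below uses arbitrary rotations and translations.

```python
import numpy as np

def homog(R, t):
    """4x4 homogeneous matrix [R t; 0 1] representing the displacement D = (R, t)."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

def rot_z(th):
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Two displacements with arbitrary parameters.
R1, t1 = rot_z(0.4), np.array([1.0, 0.0, 0.5])
R2, t2 = rot_z(-0.1), np.array([0.0, 2.0, 0.0])

# Composition via matrix product equals (R1 R2, R1 t2 + t1).
M = homog(R1, t1) @ homog(R2, t2)
print(np.allclose(M, homog(R1 @ R2, R1 @ t2 + t1)))   # True

# Inverse displacement D^{-1} = (R^t, -R^t t).
print(np.allclose(np.linalg.inv(homog(R1, t1)), homog(R1.T, -R1.T @ t1)))   # True
```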
[Figure: decomposition of the rotation $R = R_2 R_1$ acting on the frame $(i, j, k)$.]
- $R_1$: axis $\Delta \in (C, i, j)$, parameters $\theta$ and $\alpha$,
  \[ R_1 = R_{\theta,\alpha} = R^k_\theta\, R^i_\alpha\, R^k_{-\theta} : \]
  projective deformation of $f$;
- $R_2$: axis $R(k) = R_1(k)$, parameter $\beta$,
  \[ R_2 = R_\beta = R^k_\theta\, R^i_\alpha\, R^k_\beta\, R^i_{-\alpha}\, R^k_{-\theta} : \]
  plane rotation of $R_1(f)$.
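A small numerical check of this decomposition with arbitrary values of θ, α, β: R₂ indeed fixes the axis R₁(k), and the product R = R₂R₁ simplifies to R^k_θ R^i_α R^k_β R^k_{−θ}.

```python
import numpy as np

def Rk(th):   # rotation of angle th about the k (z) axis
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Ri(al):   # rotation of angle al about the i (x) axis
    c, s = np.cos(al), np.sin(al)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

theta, alpha, beta = 0.7, 0.25, 0.4   # arbitrary parameter values

R1 = Rk(theta) @ Ri(alpha) @ Rk(-theta)                          # R_{theta,alpha}
R2 = Rk(theta) @ Ri(alpha) @ Rk(beta) @ Ri(-alpha) @ Rk(-theta)  # R_beta about the axis R1(k)
R  = R2 @ R1

# R2 leaves the axis R1(k) fixed.
k = np.array([0.0, 0.0, 1.0])
print(np.allclose(R2 @ (R1 @ k), R1 @ k))                              # True
# And R = R2 R1 factors as Rk(theta) Ri(alpha) Rk(beta) Rk(-theta).
print(np.allclose(R, Rk(theta) @ Ri(alpha) @ Rk(beta) @ Rk(-theta)))   # True
```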
\[ \varphi(x', y') = \left( \frac{a_1 x' + b_1 y' + c_1 - \tilde t_1}{a_3 x' + b_3 y' + c_3 - \tilde t_3},\
                            \frac{a_2 x' + b_2 y' + c_2 - \tilde t_2}{a_3 x' + b_3 y' + c_3 - \tilde t_3} \right) = (x, y), \]
\[ \psi(x, y) = \left( \frac{a_1 x + a_2 y + a_3 + A}{c_1 x + c_2 y + c_3 + C},\
                       \frac{b_1 x + b_2 y + b_3 + B}{c_1 x + c_2 y + c_3 + C} \right) = (x', y'). \]
New parameters: $\theta, \alpha, \beta, A, B, C$ (collected in $\Theta$) and $\xi$. With
\[ \mathrm{DFD}_{(\Theta,\xi)}(x, y) = g\big((x, y) + u_\Theta(x, y)\big) - f(x, y) + \xi, \]
the parameters are estimated using a robust statistical estimator.
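As an illustration of this robust criterion (a toy sketch, not the handout's estimator): the displacement field u_Θ is replaced by a pure translation, ρ is a Charbonnier-type robust function, and the images, the shift and the interpolation choices are all made up.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def rho(s, eps=1.0):
    """Illustrative robust estimator; the handout only asks for 'a robust estimator'."""
    return np.sqrt(eps**2 + s**2)

def dfd_cost(theta, f, g, xi=0.0):
    """Sum of rho(DFD) for a purely translational u_Theta = (dx, dy) (toy instance only)."""
    dx, dy = theta
    rows, cols = np.mgrid[0:f.shape[0], 0:f.shape[1]].astype(float)
    # Sample g at (x, y) + u_Theta(x, y); bilinear interpolation, edge values outside.
    g_warp = map_coordinates(g, [rows + dy, cols + dx], order=1, mode='nearest')
    return rho(g_warp - f + xi).sum()

# Toy example: g is f shifted by 3 columns and 1 row; the cost is near-minimal at theta = (3, 1).
rng = np.random.default_rng(0)
f = rng.random((64, 64))
g = np.roll(f, shift=(1, 3), axis=(0, 1))
print(dfd_cost((3.0, 1.0), f, g), dfd_cost((0.0, 0.0), f, g))
```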
Example 1
[Figure: the two images f and g, and the average ½(f + g).]
           β          A       B       C        θ             α
initial    -0.051     2       1       -0.01    0.8           0.0005
estim.     -0.05113   2.812   0.075   0.98821  π/4 = 0.785   0.0004
Film overlay (incrustation)
Film simulation
References
F. Dibos, C. Jonchery, G. Koepfler, "Camera motion estimation through planar deformation determination", Journal of Mathematical Imaging and Vision, vol. 32, no. 1, pp. 73-87, September 2008.
C. Jonchery, "Estimation d'un mouvement de caméra et problèmes connexes", PhD dissertation, ENS Cachan, November 2006.
F. Dibos, "Du groupe projectif au groupe des recalages : une nouvelle modélisation", Comptes Rendus de l'Académie des Sciences, 2001.
O. Faugeras, Q.-T. Luong, T. Papadopoulo, The Geometry of Multiple Images, MIT Press, Cambridge, 2000.
J.M. Odobez, P. Bouthemy, "Robust Multiresolution Estimation of Parametric Motion Models", Journal of Visual Communication and Image Representation, vol. 6, no. 4, pp. 348-365, 1995.
Layer decomposition
An image of a natural scene is obtained by projection of the 3D scene.
This is modeled by the superposition of several layers.
An occlusion appears if a layer is projected upon another.
image = occluding person + occluded person + background
Problem:
extract and reconstruct layers from a video sequence;
even if total occlusion occurs during several frames;
propose a mathematical model for layers.
Definition:
image I defined on the domain D;
layer of a 3D moving object: $(\Omega, o)$
- $\Omega \subset D$ region where the object is projected if there are no occlusions;
- $o$ gray-level function defined on $\Omega$, giving the object's gray level.
Three layers:
background layer $(D, B)$;
occluded object layer $(\Omega_i, o_{\Omega_i})$;
occluding object layer $(O_i, o_{O_i})$.
\[ I_i(x) \approx \begin{cases} o_{O_i}(x) & \text{if } x \in O_i; \\ o_{\Omega_i}(x) & \text{if } x \in \Omega_i \setminus O_i; \\ B(x) & \text{else.} \end{cases} \]
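A minimal sketch of this image-formation model (a toy example of my own; the masks and gray levels are arbitrary):

```python
import numpy as np

def compose_frame(B, Omega_i, o_Omega_i, O_i, o_O_i):
    """Synthesize I_i(x) from the three layers, following the case formula above.

    B, o_Omega_i, o_O_i : gray-level images (2D arrays); Omega_i, O_i : boolean masks."""
    I = B.copy()
    I[Omega_i & ~O_i] = o_Omega_i[Omega_i & ~O_i]   # occluded object, where it is visible
    I[O_i] = o_O_i[O_i]                             # occluding object is always on top
    return I

# Tiny synthetic example: a bright square (occluding) partially covering a gray square (occluded).
B = np.zeros((60, 60))
o_Omega, Omega = np.full((60, 60), 120.0), np.zeros((60, 60), bool)
o_O, O = np.full((60, 60), 250.0), np.zeros((60, 60), bool)
Omega[10:40, 10:40] = True
O[25:55, 25:55] = True
I = compose_frame(B, Omega, o_Omega, O, o_O)
print(I[30, 30], I[15, 15], I[50, 50])   # 250 (occluding), 120 (occluded visible), 250
```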
Hypotheses:
fixed camera and known background $B$;
no occlusion in the first image;
moving objects are rigid;
movement is a uniform translation in 3D space.
Thus:
consider only short sequences;
the parametric deformation is deduced from the 3D motion and not from a 2D one.
\[ \begin{pmatrix} x' \\ y' \end{pmatrix}
 = \frac{1}{c\, t_i + 1} \left[ \begin{pmatrix} x \\ y \end{pmatrix} + t_i \begin{pmatrix} a \\ b \end{pmatrix} \right]. \]
Energy
\[ E = \sum_{i>0} \int_{O_i} \rho\big(\Delta^2_i(x)\big)\,dx
     + \sum_{i>0} \int_{\Omega_i \setminus O_i} \rho\big(\Delta^1_i(x)\big)\,dx
     + \sum_{i>0} \int_{D \setminus (O_i \cup \Omega_i)} \rho\big(\Delta^0_i(x)\big)\,dx
     + \lambda \int_{\partial \Omega_0} g(|\nabla I_0|)\,ds
     + \lambda' \int_{\partial O_0} g(|\nabla I_0|)\,ds, \]
with a robust estimator, e.g. $\rho(s) = \sqrt{\varepsilon^2 + s^2}$.
Energy (after change of variables)
Variational model: Minimization
Initialisation:
get regions $R_1$ and $R_2$: compare $I_0$ and $B$;
parameters $(a, b, c, a', b', c')$: use parametric optical flow between $I_0$ and $I_1$.
Depth order:
test $(\Omega_0, O_0) = (R_1, R_2)$ versus $(\Omega_0, O_0) = (R_2, R_1)$
and keep the one with the lowest energy $E$.
Iterate (a structural sketch follows below):
minimize over the regions $\Omega_0$ and $O_0$ with I.C.M. (iterated conditional modes);
minimize over the parameters $(a, b, c, a', b', c')$ with the simplex method.
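The sketch below only illustrates the structure of this alternating scheme; the energy E, the label encoding and the helper names are assumptions for illustration, not the handout's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def alternate_minimization(E, labels0, params0, n_outer=10):
    """Schematic alternating minimization with a user-supplied energy E(labels, params).

    `labels` assigns each pixel to one of the regions (Omega_0, O_0, background: 0/1/2);
    `params` collects the motion parameters (a, b, c, a', b', c')."""
    labels, params = labels0.copy(), np.asarray(params0, float)
    for _ in range(n_outer):
        # 1) ICM on the regions: greedily switch each pixel to the label that lowers E
        #    (for clarity the full energy is recomputed; a real ICM only updates the local term).
        for idx in np.ndindex(labels.shape):
            best_label, best_val = labels[idx], E(labels, params)
            for cand in (0, 1, 2):
                labels[idx] = cand
                val = E(labels, params)
                if val < best_val:
                    best_label, best_val = cand, val
            labels[idx] = best_label
        # 2) Simplex (Nelder-Mead) on the motion parameters, regions fixed.
        params = minimize(lambda p: E(labels, p), params, method='Nelder-Mead').x
    return labels, params
```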
[Results: original and restored sequences for the Office and Outdoor sequences.]
Purpose:
Detection of moving objects/people in a video sequence.
We present a simple and fast method with few parameters and low
complexity.
A major hypothesis is that the background is fixed.
This is a common assumption in video surveillance applications.
Again, our purpose is to illustrate the use of mathematical modelling for a specific problem.
The topic of video surveillance has attracted a lot of attention in recent years, and many approaches have been proposed.
Constraints:
real-time;
online processing;
few parameters.
The white points indicate objects in movement, but due to noise they
are distributed almost everywhere in the image.
With different values for φ:
Moving object/person
= grouping of white pixels among the observable pixels
(white or black)
6= uniformly distributed white “noise” pixels.
Definitions
Let $\varepsilon \in \mathbb{R}^*_+$; we say that a region $W_i$ is $\varepsilon$-significant if
\[ B(p, n_i, k_i) < \frac{\varepsilon}{N} \quad \text{and} \quad \frac{k_i}{n_i} > p. \]
As
\[ P(W_i \text{ is } \varepsilon\text{-significant}) = P\Big( B(p, n_i, k_i) < \frac{\varepsilon}{N} \Big) \]
and $k \mapsto B(p, n_i, k)$ is decreasing, there exists $K$ such that
\[ B(p, n_i, K) < \frac{\varepsilon}{N} \quad \text{and} \quad B(p, n_i, K - 1) \geq \frac{\varepsilon}{N}. \]
Thus
\[ P(W_i \text{ is } \varepsilon\text{-significant}) = P(k_i \geq K) = B(p, n_i, K) < \frac{\varepsilon}{N}, \]
and
\[ \mathbb{E}\Big( \sum_i f(W_i) \Big) < \varepsilon. \]
The number of false alarms (detections in noise) is lower than $\varepsilon$.
The higher the value of S(W ), the lower the chance W is uniform
noise.
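Assuming, as the derivation above suggests, that B(p, n, k) denotes the binomial tail P(Sₙ ≥ k) for Sₙ ∼ B(n, p), the ε-significance test can be coded directly; the numbers in the example are arbitrary.

```python
from scipy.stats import binom

def binomial_tail(p, n, k):
    """B(p, n, k) = P(S_n >= k) for S_n ~ Binomial(n, p) (assumed definition of B)."""
    return binom.sf(k - 1, n, p)

def is_significant(p, n_i, k_i, N, eps=1.0):
    """W_i with k_i white pixels among n_i observable ones is eps-significant
    if B(p, n_i, k_i) < eps/N and k_i/n_i > p."""
    return binomial_tail(p, n_i, k_i) < eps / N and k_i / n_i > p

# Example: p = 0.1 (proportion of white 'noise' pixels), N = 1000 candidate windows.
print(is_significant(p=0.1, n_i=200, k_i=45, N=1000))   # far more whites than expected -> True
print(is_significant(p=0.1, n_i=200, k_i=22, N=1000))   # close to the noise level -> False
```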
[Figure: candidate windows W, W′, W″; windows overlapping an already selected one are discarded.]
Algorithm (a sketch in code follows below):
1. In the initial list, find the most significant window $W_{\max}$, add it to the list of maximal $\varepsilon$-significant windows and remove it from the initial list;
2. delete all the windows $W$ such that $W \cap W_{\max} \neq \emptyset$;
3. repeat these two steps until there is no more $\varepsilon$-significant window in the initial list.
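A sketch of this greedy selection; the window representation and the helper functions (significance, overlap test) are illustrative assumptions, and the candidate list is assumed to contain only ε-significant windows.

```python
def maximal_windows(windows, significance, intersects):
    """Greedy selection of maximal significant windows.

    windows      : list of candidate windows, assumed already eps-significant;
    significance : callable giving S(W), larger = more significant;
    intersects   : callable telling whether two windows overlap."""
    remaining = list(windows)
    maximal = []
    while remaining:
        w_max = max(remaining, key=significance)          # most significant window
        maximal.append(w_max)
        # Remove w_max and every window that meets it.
        remaining = [w for w in remaining if w is not w_max and not intersects(w, w_max)]
    return maximal

# Toy usage with axis-aligned rectangles (x0, y0, x1, y1).
def overlap(a, b):
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

wins = [(0, 0, 10, 10), (5, 5, 15, 15), (20, 20, 30, 30)]
scores = {wins[0]: 3.0, wins[1]: 7.0, wins[2]: 5.0}
print(maximal_windows(wins, scores.get, overlap))   # [(5, 5, 15, 15), (20, 20, 30, 30)]
```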
Significance
[Figure: significance $S(W)$ plotted against the label of $W$ (1 to 13), distinguishing well placed windows from boundary and noise windows.]
[Results: test sequences 1 and 2, Metro sequences 1 and 2, CAVIAR sequence 2; comparison of W4 and AWD (⇒ no symmetric criterion).]