Mathematics For Economics (ECON 104)
Example: Consider a firm facing the two demand curves
q1 = 100 − p1,
q2 = 100 − 2p2.
The cost function of the firm is
Rewrite the demand curves as
p1 = 100 − q1  and  p2 = 50 − (1/2)q2.
The profit is then
The first-order condition is
This gives us
The second-order conditions are
π11'' = −2,  π12'' = π21'' = 0,  π22'' = −1.
Thus
H(q1, q2) =
[ −2    0 ]
[  0   −1 ],
and so π11'' = −2 < 0 and |H(q1, q2)| = 2 > 0.
This means that H(q1*, q2*) is negative definite (ND), and thus we have found a local maximum.
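The cost function is not written out above, so the sketch below (Python/SymPy, added for illustration) assumes a hypothetical linear cost c·(q1 + q2); any cost that is linear in output has a zero Hessian of its own, so the profit Hessian is exactly the one shown above.

```python
import sympy as sp

q1, q2, c = sp.symbols('q1 q2 c')

# Inverse demands from the rewritten demand curves
p1 = 100 - q1
p2 = 50 - sp.Rational(1, 2)*q2

# Hypothetical linear cost c*(q1 + q2): an assumption made here for illustration,
# since the original cost function is not reproduced; any linear cost gives the
# same second-order terms.
profit = p1*q1 + p2*q2 - c*(q1 + q2)

focs = [sp.diff(profit, v) for v in (q1, q2)]   # first-order conditions
H = sp.hessian(profit, (q1, q2))                # second-order conditions

print(focs)       # e.g. [-c - 2*q1 + 100, -c - q2 + 50]
print(H)          # Matrix([[-2, 0], [0, -1]])
print(H.det())    # 2, and H[0, 0] = -2 < 0, so H is negative definite
```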
Sufficient Conditions for Global Optimum (Ch. 13.2)
Sufficient Conditions for Global Optimum (continued)
Convex set: When we draw a line segment connecting any two
points in a set, if all the points in the segment belong to the set,
then the set is called “convex.”
Summary: Conditions for Global max and Global min
• Single Variable:
  Condition            | Max                                    | Min
  1st-order Necessary  | f'(x*) = 0                             | f'(x*) = 0
  2nd-order Sufficient | f is concave ⇔ f''(x) ≤ 0 for all x    | f is convex ⇔ f''(x) ≥ 0 for all x
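As an illustration of the single-variable conditions in the table, take a hypothetical concave function f(x) = −x² + 4x (not from the slides); the SymPy sketch below checks the first-order condition and the global second-order condition.

```python
import sympy as sp

x = sp.symbols('x')
f = -x**2 + 4*x                       # hypothetical concave example

crit = sp.solve(sp.diff(f, x), x)     # 1st-order necessary condition f'(x) = 0
print(crit)                           # [2]
print(sp.diff(f, x, 2))               # -2 <= 0 for all x, so f is concave
print(f.subs(x, crit[0]))             # global maximum value: 4
```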
Example: Once again, we revisit f(x, y) = −2x² − 2xy − 2y² + 36x + 42y − 158.
The second-order conditions are
f11'' = −4,  f12'' = f21'' = −2,  f22'' = −4,
which generates
H(x, y) =
[ −4   −2 ]
[ −2   −4 ].
Thus f11'' = −4 < 0 and |H(x, y)| = 12 > 0 for all (x, y).
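The stationary point and the Hessian test can be reproduced symbolically; the sketch below (SymPy, not part of the slides) finds the critical point, checks the leading principal minors, and evaluates f there.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = -2*x**2 - 2*x*y - 2*y**2 + 36*x + 42*y - 158

# Stationary point from the first-order conditions f1 = f2 = 0
crit = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)[0]
print(crit)                           # {x: 5, y: 8}

# Hessian and the leading-principal-minor check
H = sp.hessian(f, (x, y))
print(H, H[0, 0], H.det())            # [[-4, -2], [-2, -4]], -4, 12
print(f.subs(crit))                   # value at the stationary point: 100
```

Since f11'' < 0 and |H| > 0 hold for all (x, y), f is concave, so the stationary point is a global maximum.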
Optimization with Equality Constraint (Ch. 14.1)
The Lagrangian Method (14.1, 14.4)
The constraint set is the set of all pairs (x, y) for which g(x, y) = c:
{ (x, y) ∈ R² | g(x, y) = c }.
Heuristic Derivation of the Lagrangian Method (Ch. 14.4)
Consider
max f(x, y) subject to g(x, y) = c,
where the maximization is over (x, y) ∈ R².
Since ∆y could be positive or negative, we must have
− [f1'(x*, y*) / g1'(x*, y*)] · g2'(x*, y*) + f2'(x*, y*) = 0.   (∗∗)
Define
λ* ≡ f1'(x*, y*) / g1'(x*, y*).
Then, (∗∗) can be translated into:
f1'(x*, y*) = λ* g1'(x*, y*),
f2'(x*, y*) = λ* g2'(x*, y*).
Define the Lagrangian function. For the problem max xy subject to 2x + y = 100, we set
L(x, y, λ) = xy − λ(2x + y − 100).
Compute the FOCs of L:
L1'(x, y, λ) = y − 2λ = 0,
L2'(x, y, λ) = x − λ = 0,
L3'(x, y, λ) = 2x + y − 100 = 0.
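The three conditions can be solved by hand (y = 2λ, x = λ, then 4λ = 100); as a check, the SymPy sketch below (not part of the slides) solves the system and evaluates the objective.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda')
L = x*y - lam*(2*x + y - 100)

# First-order conditions L1 = L2 = L3 = 0
focs = [sp.diff(L, v) for v in (x, y, lam)]
sol = sp.solve(focs, [x, y, lam], dict=True)
print(sol)                      # [{x: 25, y: 50, lambda: 25}]
print((x*y).subs(sol[0]))       # objective value at the stationary point: 1250
```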
What does this mean? The first-order necessary conditions for a max or min problem with an equality constraint are exactly the first-order conditions of the Lagrangian. Thus, we can use the Lagrangian to find the stationary points of such a problem.
Example: Consider maximizing f(x, y) = −y subject to y³ − x² = 0. Since
y³ = x² ≥ 0 ⇒ y ≥ 0 ⇒ −y ≤ 0,
f is maximized at y* = 0 and, from the constraint, x* = 0.
Note that (g1', g2') = (−2x, 3y²) = (0, 0) when (x*, y*) = (0, 0).
L = −y − λ(y³ − x²).
First-order conditions are
L1' = 2λx = 0,
L2' = −1 − 3λy² = 0,
y³ − x² = 0.
From the first equation, (i) λ = 0 or (ii) x = 0. (i) If λ = 0, then the second equation does not hold. (ii) If x = 0, then the third equation implies that y = 0, and once again the second equation does not hold. Hence the first-order conditions of the Lagrangian have no solution, even though the constrained maximizer (0, 0) exists: the Lagrangian method fails here because the constraint qualification is violated at (0, 0).
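A symbolic solver reaches the same dead end; the SymPy sketch below (added for illustration) finds no solution to these first-order conditions.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda')
L = -y - lam*(y**3 - x**2)

focs = [sp.diff(L, x), sp.diff(L, y), y**3 - x**2]
print(sp.solve(focs, [x, y, lam], dict=True))   # expected: []  (no stationary point of L)
# The maximizer (0, 0) is missed because (g1', g2') = (-2x, 3y**2) = (0, 0) there,
# i.e. the constraint qualification fails at the solution.
```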
Example: Solve the following problem:
max or min x²y subject to 2x² + y² = 3, with (x, y) ∈ R².
Set up
L(x, y, λ) = x²y − λ(2x² + y² − 3).
First-order conditions are
L1' = 2xy − 4λx = 0,
L2' = x² − 2λy = 0,
2x² + y² − 3 = 0.
From the first equation, we get
x = 0 or y = 2λ.
Case 1: If x = 0, then the constraint gives y = √3 or y = −√3, and in either case the second equation gives λ = 0. Thus we have two solution candidates:
x = 0, y = √3 and λ = 0,
x = 0, y = −√3 and λ = 0.
Case 2: If y = 2λ, then the second equation gives x² = 2λy = y². Plugging this into the third equation (so that 3x² = 3, i.e., x² = 1), we get
x = 1, y = 1 and λ = 1/2,
x = −1, y = −1 and λ = −1/2,
x = 1, y = −1 and λ = −1/2,
x = −1, y = 1 and λ = 1/2.
Thus, we have six solution candidates that satisfy the first-order conditions:
(x, y, λ) = (0, √3, 0) with f(0, √3) = 0,
(x, y, λ) = (0, −√3, 0) with f(0, −√3) = 0,
(x, y, λ) = (1, 1, 1/2) with f(1, 1) = 1,
(x, y, λ) = (1, −1, −1/2) with f(1, −1) = −1,
(x, y, λ) = (−1, 1, 1/2) with f(−1, 1) = 1,
(x, y, λ) = (−1, −1, −1/2) with f(−1, −1) = −1.
Constraint qualification: g1' = g2' = 0 occurs only at (0, 0). But (0, 0) is not a feasible point, so the constraint qualification holds at every point of the constraint set.
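The full candidate list can be reproduced symbolically; the SymPy sketch below (not part of the slides) solves the three first-order conditions and evaluates x²y at each candidate.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda')
L = x**2*y - lam*(2*x**2 + y**2 - 3)

focs = [sp.diff(L, x), sp.diff(L, y), 2*x**2 + y**2 - 3]
candidates = sp.solve(focs, [x, y, lam], dict=True)

# Six candidates: (0, +/-sqrt(3), 0) with value 0, (+/-1, 1, 1/2) with value 1,
# and (+/-1, -1, -1/2) with value -1.
for s in candidates:
    print(s, (x**2*y).subs(s))
```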
Interpretation of Lagrange multiplier (λ) (Ch. 14.2)
Differentiate f*(c) = f(x*(c), y*(c)) with respect to c:
df*(c)/dc = f1' · dx*(c)/dc + f2' · dy*(c)/dc = λ [ g1' · dx*(c)/dc + g2' · dy*(c)/dc ],
using the first-order conditions f1' = λg1' and f2' = λg2'.
Now look at the constraint. Since
g(x*(c), y*(c)) = c for all c,
differentiating both sides with respect to c gives
g1' · dx*(c)/dc + g2' · dy*(c)/dc = 1.
Hence,
df*(c)/dc = λ.
The value of the Lagrange multiplier at the solution is the rate
of change in the optimal value of the objective function as
the constraint is relaxed by one unit.
Example: Consider a simple problem: optimize f(x) = x² subject to x = c. The constraint forces x = c, so the value function is f*(c) = c² and df*(c)/dc = 2c directly.
To show this using the Lagrangian method, set up
L = x² − λ(x − c).
First-order conditions are
2x − λ = 0 and x = c.
Thus we obtain λ = 2c, which equals df*(c)/dc.
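A quick symbolic check (SymPy, added for illustration) confirms that the multiplier from the first-order conditions coincides with the derivative of the value function f*(c) = c².

```python
import sympy as sp

x, c, lam = sp.symbols('x c lambda')

# Lagrangian from the slide: L = x**2 - lam*(x - c)
sol = sp.solve([2*x - lam, x - c], [x, lam], dict=True)[0]
print(sol[lam])                 # lambda* = 2*c

# Value function f*(c) = c**2 and its derivative
f_star = (x**2).subs(x, sol[x])
print(sp.diff(f_star, c))       # 2*c, equal to lambda*
```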
Sufficient Conditions for Local Optimum (Ch. 14.5)
We thus have
dy/dx = − g1'/g2'.
The objective function can then be written as a function of x alone, F(x) = f(x, y(x)); the sufficient condition for a local maximum at x* is
F''(x*) < 0.
In other words, given F(x) = f(x, y(x)), it follows that, at the point (x*, y(x*)) = (x*, y*),
F' = f1' + f2'(dy/dx) = f1' − f2'(g1'/g2'),
and
F'' = [f11'' + f12''(dy/dx)] − [f21'' + f22''(dy/dx)](g1'/g2')
      − f2' · [g2'(g11'' + g12''(dy/dx)) − g1'(g21'' + g22''(dy/dx))] / (g2')².
Substituting dy/dx = −g1'/g2' and simplifying,
F'' = (1/(g2')²) [ (g2')² f11'' − 2 g1' g2' f12'' + (g1')² f22'' ]
      − (λ/(g2')²) [ (g2')² g11'' − 2 g1' g2' g12'' + (g1')² g22'' ],
where the FOCs require that f1' = λg1' and f2' = λg2' (so f2'/g2' = λ), and, by Young's theorem, f12'' = f21'' and g12'' = g21''.
We then obtain
F''(x*) = (1/(g2')²) [ (f11'' − λg11'')(g2')² − 2 (f12'' − λg12'') g1' g2' + (f22'' − λg22'')(g1')² ]
        = (1/(g2')²) [ L11''(g2')² − 2 L12'' g1' g2' + L22''(g1')² ]
        = − H̄(x*, y*) / (g2')².
The term H̄(x*, y*) is the determinant of the bordered Hessian:
H̄(x*, y*) =
| 0     g1'    g2'   |
| g1'   L11''  L12'' |
| g2'   L21''  L22'' |
= −L11''(g2')² + 2 L12'' g1' g2' − L22''(g1')².
Note that
F''(x*) < 0 ⇔ − H̄(x*, y*)/(g2')² < 0 ⇔ H̄(x*, y*) > 0.
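The expansion of the bordered determinant can be verified symbolically; the sketch below (SymPy, not part of the slides) treats g1', g2' and the second derivatives of L as free symbols.

```python
import sympy as sp

g1, g2, L11, L12, L22 = sp.symbols('g1 g2 L11 L12 L22')

# Bordered Hessian with symmetric cross-partials (L21 = L12)
Hbar = sp.Matrix([[0,  g1,  g2],
                  [g1, L11, L12],
                  [g2, L12, L22]])

print(sp.expand(Hbar.det()))
# expected: -L11*g2**2 + 2*L12*g1*g2 - L22*g1**2
```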
Theorem [Sufficiency for Local Optimum]: Consider the problems
max f(x, y) subject to g(x, y) = c   and   min f(x, y) subject to g(x, y) = c,
and suppose (x*, y*, λ*) satisfies the first-order conditions of the Lagrangian. If H̄(x*, y*) > 0, then (x*, y*) is a local max point; if H̄(x*, y*) < 0, then (x*, y*) is a local min point.
Example: Consider the problem:
max xy subject to x + y = 6.
The Lagrangian is
L = xy − λ(x + y − 6).
The first-order conditions are
L1' = y − λ = 0,
L2' = x − λ = 0,
x + y = 6.
Thus (x*, y*, λ*) = (3, 3, 3).
Now we can check whether (3, 3) is at least a local max point. Here g1' = g2' = 1, L11'' = L22'' = 0, and L12'' = L21'' = 1, so
H̄(3, 3) = −L11''(g2')² + 2 L12'' g1' g2' − L22''(g1')² = 2 > 0.
Thus, the point (3, 3) is a local max point and the local maximum value is 3 × 3 = 9.
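The check can also be done mechanically; the SymPy sketch below (not part of the slides) solves the first-order conditions and evaluates the bordered Hessian at the solution.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda')
f, g = x*y, x + y
L = f - lam*(g - 6)

sol = sp.solve([sp.diff(L, x), sp.diff(L, y), g - 6], [x, y, lam], dict=True)[0]
print(sol)                                    # {x: 3, y: 3, lambda: 3}

# Bordered Hessian | 0 g1' g2' ; g1' L11'' L12'' ; g2' L21'' L22'' | at the solution
Hbar = sp.Matrix([[0,             sp.diff(g, x),    sp.diff(g, y)],
                  [sp.diff(g, x), sp.diff(L, x, 2), sp.diff(L, x, y)],
                  [sp.diff(g, y), sp.diff(L, y, x), sp.diff(L, y, 2)]])
print(Hbar.det().subs(sol))                   # 2 > 0  ->  (3, 3) is a local max
```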
Example: Revisit the previous example:
max or min x²y subject to 2x² + y² = 3.
Recall the first-order conditions from before. The derivatives entering the bordered Hessian are
g1' = 4x,
g2' = 2y,
L11'' = 2y − 4λ,
L12'' = L21'' = 2x,
L22'' = −2λ.
The determinant of H̄(x, y) is then
H̄(x, y) =
| 0    4x        2y  |
| 4x   2y − 4λ   2x  |
| 2y   2x       −2λ  |
= 4x(−1)^(1+2) · det[ 4x  2x ; 2y  −2λ ] + 2y(−1)^(1+3) · det[ 4x  2y − 4λ ; 2y  2x ].
Thus if (x, y, λ) = (0, √3, 0), then |H̄(0, √3)| = −8 · 3√3 = −24√3 < 0; thus (0, √3) is a local min point.
If (x, y, λ) = (0, −√3, 0), then |H̄(0, −√3)| = 8 · 3√3 = 24√3 > 0; thus (0, −√3) is a local max point.
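The same classification can be carried out for all six candidates; the SymPy sketch below (not part of the slides) evaluates |H̄| at each one and reproduces the two values above.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda')
f, g = x**2*y, 2*x**2 + y**2
L = f - lam*(g - 3)

# Bordered Hessian built from g1' = 4x, g2' = 2y and the second derivatives of L
Hbar = sp.Matrix([[0,             sp.diff(g, x),    sp.diff(g, y)],
                  [sp.diff(g, x), sp.diff(L, x, 2), sp.diff(L, x, y)],
                  [sp.diff(g, y), sp.diff(L, y, x), sp.diff(L, y, 2)]])

candidates = sp.solve([sp.diff(L, x), sp.diff(L, y), g - 3], [x, y, lam], dict=True)
for s in candidates:
    # |Hbar| > 0 indicates a local max, |Hbar| < 0 a local min
    print(s, sp.simplify(Hbar.det().subs(s)))
# e.g. |Hbar| = -24*sqrt(3) at (0, sqrt(3)) and 24*sqrt(3) at (0, -sqrt(3)),
# matching the evaluations above.
```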