Chapter IV: Constrained Optimization

1. One-variable constrained optimization with a non-negativity constraint

2. Two-variable problems with equality constraints

3. Inequality constraints and the theorem of Kuhn-Tucker

4. The General Case: Mixed constraints


Introduction
• Under unconstrained optimization problems, no restrictions are placed on the values of the choice variables.
 But in reality, optimization of an economic function must be in line with certain resource requirements or availability,
 which emanate from the problem of scarcity of resources.
 Maximization of production should be subject to the availability of inputs.
 Minimization of cost should also satisfy a certain level of output.
 The other common constraint in economics is the non-negativity restriction.
 Constrained optimization deals with optimization of the objective function (the function to be optimized) subject to constraints (restrictions).
One Variable Constrained Optimization
1. With Equality Constraint:
• Optimization of a one-variable function subject to an equality constraint takes the form:
• Max z = f(x), s.t. x = c
2. Non-negativity Constraint:
• Max z = f(x), s.t. x ≥ 0.
Example 1:
• z = −3x² − 2x + 3
• s.t. x ≥ 0.
• In unconstrained optimization:
• F.O.C.: f′(x) = −6x − 2 = 0
• x = −2/6 = −1/3
• But imposing the non-negativity constraint we have:
• x* = 0, with f′(0) = −2 < 0 and f(0) = 3
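The corner solution can be verified numerically. A minimal sketch in Python, assuming the quadratic z = −3x² − 2x + 3 as read from the example (any concave quadratic with f′(0) < 0 behaves the same way):

```python
def f(x):
    return -3 * x**2 - 2 * x + 3       # objective (coefficients as read from the example)

def fprime(x):
    return -6 * x - 2                  # derivative f'(x)

x_unconstrained = -2 / 6               # F.O.C.: -6x - 2 = 0  ->  x = -1/3
# f is concave and f'(0) = -2 < 0, so on x >= 0 the maximum sits at the corner:
x_star = max(x_unconstrained, 0.0)
print(x_star, fprime(x_star), f(x_star))   # -> 0.0  -2.0  3.0
```

Because the unconstrained stationary point is negative, the non-negativity constraint binds and the optimum moves to the boundary x* = 0.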
4.2 Two-Variable Problems With Equality Constraints
• In the case of two choice variables, an optimization problem with an equality constraint takes the form:
• Max z = f(x₁, x₂)
• Subject to g(x₁, x₂) = c
• Example:
• Max U = U(x₁, x₂)
• Subject to m = p₁x₁ + p₂x₂
• Two ways to solve this:
• Direct substitution method
• Lagrangian method
Direct Substitution Method
• The direct substitution method is used for a two-variable optimization problem with only one constraint.
• One variable is eliminated by substitution before applying the first-order condition.
• Consider the consumer problem in the above example:
• Max U = U(x₁, x₂)
• Subject to m = p₁x₁ + p₂x₂
• From the budget constraint: x₁ = m/p₁ − (p₂/p₁)x₂
• Now x₁ is expressed as a function of x₂.
• Substituting this value, we can eliminate x₁ from the objective function.
Direct Substitution Method
• Max U = U(x₁(x₂), x₂)
 F.O.C.: dU/dx₂ = U₁·(dx₁/dx₂) + U₂ = 0
 Since dx₁/dx₂ = −p₂/p₁, this becomes U₂ − U₁(p₂/p₁) = 0
 Hence U₂/U₁ = p₂/p₁
• Example:
• U = x₁x₂
• S.t. x₁ + 4x₂ = 120
• ANS
• x₁ = 60 and x₂ = 15
• U = 900
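The substitution step can be checked in a few lines; a minimal sketch assuming the example as read here (U = x₁x₂ with budget x₁ + 4x₂ = 120):

```python
def U(x2):
    x1 = 120 - 4 * x2          # budget x1 + 4*x2 = 120, solved for x1
    return x1 * x2             # utility after eliminating x1

# U(x2) = 120*x2 - 4*x2**2, so the F.O.C. is 120 - 8*x2 = 0
x2_star = 120 / 8
x1_star = 120 - 4 * x2_star
print(x1_star, x2_star, U(x2_star))    # -> 60.0  15.0  900.0
```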
Direct Substitution Method: Example
• A firm faces the production function Q = 12K^0.4 L^0.4 and can buy the inputs K and L at prices per unit of Birr 40 and Birr 5 respectively. If it has a budget of Birr 800, what combination of K and L should it use in order to produce the maximum possible output?
• Solution
• The problem:
• Max Q = 12K^0.4 L^0.4, s.t. 800 = 40K + 5L
• By substituting in the budget constraint we have
• L = 160 − 8K
• Q = 12K^0.4 L^0.4 becomes
• Q = 12K^0.4 (160 − 8K)^0.4, a function of K alone
• The F.O.C. becomes dQ/dK = 0
• Let U = 12K^0.4 and V = (160 − 8K)^0.4
Direct Substitution Method: Example
• Let U = 12K^0.4 and V = (160 − 8K)^0.4
• dQ/dK = d(UV)/dK = 0, by the product rule:
• d(UV)/dK = U′V + V′U = 0
• U′ = 12(0.4)K^−0.6 = 4.8K^−0.6, and
• V′ = 0.4(−8)(160 − 8K)^−0.6 = −3.2(160 − 8K)^−0.6
• Hence,
• dQ/dK = 4.8K^−0.6 (160 − 8K)^0.4 + [−3.2(160 − 8K)^−0.6]·12K^0.4
• dQ/dK = 4.8(160 − 8K)^0.4 / K^0.6 − 38.4K^0.4 / (160 − 8K)^0.6; use a common denominator:
• dQ/dK = [4.8(160 − 8K) − 38.4K] / [K^0.6 (160 − 8K)^0.6]
• dQ/dK = 0 ⇒ 4.8(160 − 8K) − 38.4K = 0 ⇒ 768 − 76.8K = 0
• ⇒ K = 10 and L = 160 − 8K = 160 − 80 = 80
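The substitution result can be double-checked by brute force; a sketch assuming the Cobb–Douglas form used in the example:

```python
def Q_of_K(K):
    L = 160 - 8 * K                      # labour implied by the budget 800 = 40K + 5L
    return 12 * K**0.4 * L**0.4          # output along the budget line

# grid-search the feasible range 0 < K < 20 in steps of 0.01
best_K = max((k / 100 for k in range(1, 2000)), key=Q_of_K)
best_L = 160 - 8 * best_K
print(best_K, best_L)                    # -> 10.0  80.0
```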
Lagrange Multiplier History
• Developed by Joseph-Louis Lagrange (1736–1813)
• Italian-French mathematician and astronomer.
• One of the greatest mathematicians in history.
• Born in Turin, Italy; made his home in France and died in Paris aged 77.
Lagrange Multiplier Method
• When the constraint is a complicated function, or when there are several constraints, we resort to the method of Lagrange.
• Max U = U(x₁, x₂)
• Subject to m = p₁x₁ + p₂x₂
• L = U(x₁, x₂) + λ(m − p₁x₁ − p₂x₂), where L denotes the Lagrangian.
• Steps
1. Rewrite the constraint (with zero on one side) and multiply it by λ
2. Form the composite function (objective + constraint)
3. Apply the F.O.C.
4. Simplify the F.O.C. equations and solve for the choice variables
5. Apply the S.O.C. to check for a maximum or minimum
F.O.C. for the Lagrange Multiplier
• L = U(x₁, x₂) + λ(m − p₁x₁ − p₂x₂)
• L₁ = 0 ⇒ U₁(x₁, x₂) − λp₁ = 0
• L₂ = 0 ⇒ U₂(x₁, x₂) − λp₂ = 0
• Lλ = 0 ⇒ m = p₁x₁ + p₂x₂
• That is:
• U₁ = λp₁
• U₂ = λp₂
• m = p₁x₁ + p₂x₂
• SOLUTION
• x₁* = x₁*(p₁, p₂, m)
• x₂* = x₂*(p₁, p₂, m)
• λ* = λ*(p₁, p₂, m)
Lagrange Approach Example
• A firm faces the production function Q = 12K^0.4 L^0.4 and can buy the inputs K and L at prices per unit of Birr 40 and 5 respectively.
• If it has a budget of Birr 800, what combination of K and L should it use in order to produce the maximum possible output?
• Solution
• 800 = 40K + 5L ⇒ the constraint term is λ(800 − 40K − 5L)
• ℒ = 12K^0.4 L^0.4 + λ(800 − 40K − 5L)
• F.O.C.
• ℒ_L = ∂ℒ/∂L = 4.8K^0.4 L^−0.6 − 5λ = 0
• ℒ_K = ∂ℒ/∂K = 4.8K^−0.6 L^0.4 − 40λ = 0
• ℒ_λ = ∂ℒ/∂λ = 800 − 40K − 5L = 0
• Dividing the first condition by the second gives K/L = 5/40, so L = 8K; the budget then gives 80K = 800
• ⇒ K* = 10, L* = 80
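The reduced FOC system can be checked in a few lines; a sketch using the same production function as the example:

```python
# dividing the two marginal conditions: (4.8*K**0.4*L**-0.6)/(4.8*K**-0.6*L**0.4) = 5/40
# => K/L = 1/8, i.e. L = 8K; the budget then gives 40K + 5*(8K) = 80K = 800
K = 800 / 80
L = 8 * K
lam = 4.8 * K**0.4 * L**-0.6 / 5       # multiplier recovered from the labour condition
print(K, L, round(lam, 3))             # -> 10.0  80.0  0.174
```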
S.O.C. Conditions for a Constrained Optimization Problem
• Given
• z = f(x, y) subject to g(x, y) = c
• L = f(x, y) + λ[c − g(x, y)]
• From the F.O.C.:
• L_x = 0 ⇒ f_x − λg_x = 0
• L_y = 0 ⇒ f_y − λg_y = 0
• L_λ = 0 ⇒ g(x, y) = c
• The S.O.C. helps us to know, as usual, whether the stationary point is a maximum or a minimum.
• The sign of the second-order total differential (d²z), evaluated at the stationary point, is essential.
The SOCs: Necessary vs Sufficient Conditions
• GIVEN: z = f(x, y) is the objective function and g(x, y) = c is the constraint
• dg = g_x dx + g_y dy = 0
• SOC Necessary conditions
• For maximization of z:
• The second-order total differential (d²z) is negative semidefinite, subject to dg = 0
• For minimization of z:
• The second-order total differential (d²z) is positive semidefinite, subject to dg = 0
• SOC Sufficient conditions:
• For maximization of z:
• The second-order total differential (d²z) is negative definite, subject to dg = 0
• For minimization of z:
• The second-order total differential (d²z) is positive definite, subject to dg = 0
SOC
• The second-order partials of L = f(x, y) + λ[c − g(x, y)]:
• L_λλ = 0, L_λx = g_x, and L_λy = g_y (the borders; their common sign does not affect the determinant)
• L_xx = f_xx − λg_xx
• L_xy = f_xy − λg_xy
• L_yy = f_yy − λg_yy
• In matrix form:
  | L_λλ  L_λx  L_λy |   | 0    g_x   g_y  |
  | L_xλ  L_xx  L_xy | = | g_x  L_xx  L_xy |
  | L_yλ  L_yx  L_yy |   | g_y  L_xy  L_yy |
S.O.C the Bordered Hessian
• This matrix is called Bordered Hessian and its determinant is denoted
by H
• The bordered Hessian H is simply the plain Hessian bordered by the first
order derivatives of the constraint with zero on the principal diagonal.

 ' '  / -
• H = ' 3 3  = / −&. - . / >  ≫ 
H
' 3  3 - . / −. -&4
• For the above example, we have
• 3>> = , '3 = /, + '2 = -
• 333 = −. 442.- 3;&." = −&. -, by substituting optimal L and K.
• 322 = −. 443.- 2;&." = −. -&", , by substituting optimal L and K.
• 332 = 323 = &. 03;." 2;." = . -4, , by substituting optimal L and K.
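The sign check can be reproduced numerically; a sketch in which the second partials follow from Q = 12K^0.4 L^0.4 with the linear budget (so the Lagrangian's second partials equal Q's):

```python
K, L = 10.0, 80.0                      # optimal input bundle from the example
L_LL = -2.88 * K**0.4 * L**-1.6        # second partials of the Lagrangian
L_KK = -2.88 * L**0.4 * K**-1.6
L_LK = 1.92 * K**-0.6 * L**-0.6
g_L, g_K = 5.0, 40.0                   # borders: the input prices

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

H_bar = [[0.0, g_L, g_K],
         [g_L, L_LL, L_LK],
         [g_K, L_LK, L_KK]]
print(det3(H_bar))       # ≈ 34.8 > 0: d2z is negative definite on the constraint, a maximum
```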
The Bordered Hessian: Sufficient Conditions for Extrema
• For the sign definiteness of d²z, subject to dg = g_x dx + g_y dy = 0:
• For the values of dx and dy that satisfy the constraint,
• d²z is positive definite subject to dg = 0, iff
        | 0    g_x   g_y  |
• |H̄| = | g_x  L_xx  L_xy | < 0, i.e. a minimum.
        | g_y  L_xy  L_yy |
• The bordered leading principal minors are all negative.
• d²z is negative definite subject to dg = 0, iff the same determinant satisfies
• |H̄| > 0, i.e. a maximum.
• The bordered leading principal minors alternate in sign, with |H̄₂| > 0.
• Here |H̄ᵢ| denotes the i-th bordered leading principal minor; with two variables, |H̄₂| = |H̄|.
SOC n-Variable Case Summary
Conditions   Maximization                              Minimization
F.O.C.       L_x = 0, L_y = 0, L_λ = 0                 L_x = 0, L_y = 0, L_λ = 0
S.O.C.       |H̄₂| > 0, |H̄₃| < 0, |H̄₄| > 0, …,        |H̄₂| < 0, |H̄₃| < 0, |H̄₄| < 0, …,
             (−1)ⁿ|H̄ₙ| > 0                            |H̄ₙ| < 0
• Bordered minors all negative ….. Minimization
• Bordered minors alternate in sign ….. Maximization
More Examples on Constrained Optimization
• Find the extreme values of the following functions and identify whether they are minima or maxima using the bordered Hessian:
1. z = xy subject to x + y = 6
2. z = x²y subject to x + 4y = 2
Lagrange Multiplier Interpretation
• Since the optimal values x₁*, x₂*, and λ* all depend on m, we may consider L* to be a function of m alone.
• L* = f(x₁*, x₂*) + λ*[m − g(x₁*, x₂*)]
• Differentiating L* with respect to m, we have:
• dL*/dm = f₁(dx₁*/dm) + f₂(dx₂*/dm) + (dλ*/dm)[m − g(x₁*, x₂*)] + λ*[1 − g₁(dx₁*/dm) − g₂(dx₂*/dm)]
• Using the F.O.C. (f₁ = λ*g₁, f₂ = λ*g₂) and the constraint (g = m), every term cancels except λ*:
• dL*/dm = λ*, the marginal value of relaxing the constraint.
• An interpretation of the Lagrange multiplier:
• The Lagrange multiplier, λ, measures the effect on the objective function of a one-unit change in the constant of the constraint function (the stock of a resource).
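For the production example, this interpretation can be confirmed by perturbing the budget; a sketch in which K* = m/80 and L* = m/10 are assumed from the FOC ratio L = 8K:

```python
def Q_star(m):
    K, L = m / 80, m / 10          # optimal inputs as functions of the budget m
    return 12 * K**0.4 * L**0.4    # maximized output

extra_output = Q_star(801) - Q_star(800)   # effect of one more Birr of budget
print(round(extra_output, 3))              # -> 0.174, approximately the multiplier
```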
HOW to Interpret λ
• Note:
• If λ > 0, then for every one-unit increase (decrease) in the constant of the constraining function, the objective function will increase (decrease) by a value approximately equal to λ.
• If λ < 0, a unit increase (decrease) in the constant of the constraint will lead to a decrease (increase) in the value of the objective function by approximately |λ|.
S.O.C the Bordered Hessian
• The Bordered Hessian is the condition under a constraint optimization
• Assume a quadratic case sign definiteness for simplicity
• v =  + [L + L
• Subject to Y + ZL = 
• The constraint implies w = −(Y⁄Z)N, we can re-write q as a function
of one variable only.
• v =  + [[− Y⁄Z N] + [− Y⁄Z N]

  
• v = [Z − [YZ + ZY ] .
Z
• The sign of q depends on the sign of the expression in the bracket.
S.O.C the Bordered Hessian
• Assume a symmetric matrix determinant given as
 Y Z
• Y  [ = [YZ − Z − Y
Z [ 
 The new determinant is an exact opposite of the above one…Z − [YZ + ZY
• Therefore,
 Y Z
• v  * L +    Y + ZL = , UVV Y  [ < 
Z [ 
 Y Z
• v  'L +    Y + ZL = , UVV Y  [ > 
Z [ 
• Where z {|} ~ {€ ‚€ ƒ„€ †ƒ†€|‡ „ ‚€ ƒ„|‡†{|.
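The sign relationship can be spot-checked for arbitrary coefficients; a sketch with sample values a = 3, h = 1, b = 2, α = 1, β = 2 (the identity q = −det·u²/β² on the constraint holds exactly):

```python
def bordered_det(a, h, b, alpha, beta):
    # | 0      alpha  beta |
    # | alpha  a      h    |  =  2*h*alpha*beta - a*beta**2 - b*alpha**2
    # | beta   h      b    |
    return 2*h*alpha*beta - a*beta**2 - b*alpha**2

def q_on_constraint(a, h, b, alpha, beta, u):
    v = -(alpha / beta) * u            # eliminate v using alpha*u + beta*v = 0
    return a*u**2 + 2*h*u*v + b*v**2

a, h, b, alpha, beta = 3.0, 1.0, 2.0, 1.0, 2.0
q = q_on_constraint(a, h, b, alpha, beta, 1.0)
d = bordered_det(a, h, b, alpha, beta)
print(q, d)        # -> 2.5  -10.0  (q > 0 on the constraint <=> determinant < 0)
```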
4.3 Inequality Constraints and the Theorem of Kuhn-Tucker, or Karush-Kuhn-Tucker (KKT)
• Often called the Kuhn-Tucker (KKT) theorem
• Sometimes called the Karush-Kuhn-Tucker (KKT) theorem
• Optimization with inequality constraints.
• Developed by
William Karush (1917–1997), Mathematics, Chicago
Harold W. Kuhn (1925–2014), Mathematics, New York
Albert William Tucker (1905–1995), Mathematics, Canada
KKT … Single Variable Case
1. Single variable case:
• Max z = f(x₁), s.t. x₁ ≥ 0, a non-negativity constraint
• We have three possible situations at the optimum:
• f′(x₁) = 0 and x₁ > 0 (point A, an interior maximum)
• f′(x₁) = 0 and x₁ = 0 (point B, a boundary maximum)
• f′(x₁) < 0 and x₁ = 0 (points C and D, corner solutions)
• Graphically:
• These three conditions can be consolidated into:
• f′(x₁) ≤ 0, x₁ ≥ 0, and x₁·f′(x₁) = 0
KKT … Single Variable Case
• x₁*·f′(x₁*) = 0 is referred to as complementary slackness between x₁ and f′(x₁).

Conditions                                   Conclusion
f′(x₁) ≥ 0, x₁ ≥ 0, and x₁·f′(x₁) = 0        Minimization
f′(x₁) ≤ 0, x₁ ≥ 0, and x₁·f′(x₁) = 0        Maximization

• The above condition can easily be generalized to the n-variable case:
• f_j ≤ 0, x_j ≥ 0, and x_j·f_j = 0, for j = 1, 2, …, n (for maximization)
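The consolidated conditions are easy to check mechanically; a sketch that reuses, as an assumed test case, the quadratic from the earlier one-variable example:

```python
def satisfies_kkt_max(fprime, x, tol=1e-9):
    # max f(x) s.t. x >= 0 requires: f'(x) <= 0, x >= 0, and x * f'(x) = 0
    return fprime(x) <= tol and x >= -tol and abs(x * fprime(x)) <= tol

fp = lambda x: -6 * x - 2              # f'(x) for f(x) = -3x^2 - 2x + 3
print(satisfies_kkt_max(fp, 0.0))      # -> True  (corner solution)
print(satisfies_kkt_max(fp, -1/3))     # -> False (violates x >= 0)
```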
KKT Condition: Two Variables, One Constraint Case
• Max z = f(x₁, x₂),
• s.t.: g(x₁, x₂) ≤ c and x₁, x₂ ≥ 0, the non-negativity constraints
• For maximization, we add a slack variable to make the constraint binding (active).
• For minimization, we subtract a surplus variable to make the constraint binding.
• If at optimality the slack/surplus of a constraint is zero, then the constraint is said to be active or binding. Otherwise, it is inactive.
KKT: Maximization Conditions
KKT … Maximization
• Max z = f(x, y),
• Associate a Lagrange multiplier λ with the constraint g(x, y) ≤ c.
• After forming the composite function L = f(x, y) + λ[c − g(x, y)], the conditions are:
• Stationary point:
• L_x ≤ 0, L_y ≤ 0
• L_λ ≥ 0, λ ≥ 0; the Lagrange multiplier must be non-negative.
• Complementary Slackness Conditions:
• λ·L_λ = 0
• x·L_x = 0 and y·L_y = 0
• λ is called the shadow price.
• Require (x, y) to satisfy the constraint:
• g(x, y) ≤ c
KKT … Minimization
• Min z = f(x, y),
• Associate a Lagrange multiplier λ with the constraint g(x, y) ≥ c.
• After forming the composite function L = f(x, y) + λ[c − g(x, y)], the conditions are:
• Stationary point:
• L_x ≥ 0, L_y ≥ 0
• L_λ ≤ 0, λ ≥ 0; the Lagrange multiplier must be non-negative.
• Complementary Slackness Conditions:
• λ·L_λ = 0
• x·L_x = 0 and y·L_y = 0
• λ is called the shadow price.
• Require (x, y) to satisfy the constraint:
• g(x, y) ≥ c
KKT condition: Example 1
• Given
• Objective function: U = xy
• Constraints:
• x + y ≤ 50
• 2x + y ≤ 60
• Form the Lagrangian function first:
• L = xy + λ₁(50 − x − y) + λ₂(60 − 2x − y)
KKT condition: the steps
• After forming the Lagrangian function:
• Step 1:
• Assume constraint 2 is NOT binding (λ₂ = 0).
• Solve the resulting Lagrange function and test whether the solution satisfies constraint 2.
 If yes, end the work here; if not, go to step 2.
• Step 2:
• Assume constraint 1 is NOT binding (λ₁ = 0).
• Solve the resulting Lagrange function and test whether the solution satisfies constraint 1.
 If yes, end the work here; if not, go to step 3.
• Step 3:
• Assume both constraints are binding.
• Solve the Lagrange function and verify that λ₁ ≥ 0 and λ₂ ≥ 0.
KKT condition: Example 1
• Step 1: assume λ₂ = 0 (constraint 2 is NOT binding):
• L = xy + λ₁(50 − x − y)
L_x = y − λ₁ = 0 ⇒ y = λ₁
L_y = x − λ₁ = 0 ⇒ x = λ₁ ⇒ x = y
L_λ₁ = 50 − x − y = 0
⇒ x + x = 50 ⇒ 2x = 50 ⇒ x = y = λ₁ = 25
• Then check whether the skipped constraint holds:
• 2x + y ≤ 60
• ⇒ 2(25) + 25 ≤ 60 ⇒ 75 ≤ 60, NOT TRUE.
• So this candidate is infeasible; go to step 2.
KKT condition: Example 1
• Step 2: assume λ₁ = 0 (constraint 1 is NOT binding):
• L = xy + λ₂(60 − 2x − y)
L_x = y − 2λ₂ = 0 ⇒ y = 2λ₂
L_y = x − λ₂ = 0 ⇒ x = λ₂ ⇒ y = 2x
L_λ₂ = 60 − 2x − y = 0
⇒ 2x + 2x = 60 ⇒ 4x = 60 ⇒ x = 15 and y = 30
• Then check whether constraint 1 holds, so that λ₁ = 0 is valid:
• x + y ≤ 50
• ⇒ 15 + 30 ≤ 50 ⇒ 45 ≤ 50, TRUE, and λ₁ = 0 is valid.
• Solution:
• x = 15, y = 30, λ₁ = 0, λ₂ = 15, and U = xy = 450.
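The two trial steps can be scripted directly; a sketch following the same case logic, with constraint I read as x + y ≤ 50 and constraint II as 2x + y ≤ 60:

```python
def solve_with_binding(c, ax, ay):
    # L = x*y + lam*(c - ax*x - ay*y); FOC: y = lam*ax and x = lam*ay,
    # and the binding constraint gives ax*(lam*ay) + ay*(lam*ax) = c
    lam = c / (2 * ax * ay)
    return lam * ay, lam * ax, lam     # (x, y, lam)

x, y, lam1 = solve_with_binding(50, 1, 1)      # step 1: constraint I binding
print(2 * x + y <= 60)                         # -> False: violates constraint II

x, y, lam2 = solve_with_binding(60, 2, 1)      # step 2: constraint II binding
print(x + y <= 50, x, y, lam2, x * y)          # -> True  15.0  30.0  15.0  450.0
```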
Finding a Binding Constraint
• A shortcut to find the binding constraint:
• Look at the weights (coefficients) of the variables in the objective function.
• The constraint whose variables have similar relative weights will be binding.
• Example:
• z = x + y
• Constraint I: x + y ≤ 50 and constraint II: 0.5x + y ≤ 60
 x + y ≤ 50 is binding.
• z = xy
• Constraint I: x + y ≤ 50 and constraint II: x + y ≤ 60
 x + y ≤ 50 is binding (the tighter of the two).
KKT condition: Example 2
• Max f(x, y) = x^0.5 y^0.5
• Subject to:
• x + y ≤ 80 and
• 2x + 2y ≤ 56
• Form the Lagrangian first:
• L = x^0.5 y^0.5 + λ₁(80 − x − y) + λ₂(56 − 2x − 2y)
• ANS
• λ₁ = 0, λ₂ = 0.25, x = 14, y = 14, and f = 14
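A brute-force check of Example 2; a sketch assuming the constraints as read here (x + y ≤ 80 and 2x + 2y ≤ 56):

```python
import math

def f(x, y):
    return math.sqrt(x * y)            # objective x**0.5 * y**0.5

candidates = ((x / 10, y / 10) for x in range(801) for y in range(801))
feasible = ((x, y) for x, y in candidates
            if x + y <= 80 and 2 * x + 2 * y <= 56)
best = max(feasible, key=lambda p: f(*p))
print(best)                            # -> (14.0, 14.0)
```

Only the second constraint binds at the optimum, which is why λ₁ = 0 in the answer.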
Additional Example
KKT: The generalized Case
