Linear Programming


August 30, 2023

1. The Linear Programming Problem
Definition 1.1: Linear function
A function z of variables x1 , x2 , . . . , xn of the form z = c1 x1 + c2 x2 + · · · + cn xn where c1 , c2 , . . . , cn
are constants is called a linear function of the variables x1 , x2 , . . . , xn .
Definition 1.2: Linear Inequalities
Inequalities of the form c1 x1 + c2 x2 + · · · + cn xn ≤ b or c1 x1 + c2 x2 + · · · + cn xn ≥ b (and also the
corresponding strict inequalities) where c1 , c2 , . . . , cn , b are constants and x1 , x2 , . . . , xn are
variables are called linear inequalities.
Definition 1.3: Linear Equation
An equation of the form c1 x1 + c2 x2 + · · · + cn xn = b, where c1 , c2 , . . . , cn , b are constants and
x1 , x2 , . . . , xn are variables is called a linear equation.
Definition 1.4: Linear Programming Problem
A linear programming problem or LPP is the problem of maximizing (or minimizing) a linear
function. This linear function is called the objective function of the LPP. The variables
appearing in the objective function are called the decision variables. The values of the decision
variables in an LPP must satisfy a set of constraints, where each constraint is a linear equation or a
linear inequality. Also, a sign restriction is associated with each decision variable: each decision
variable in an LPP must be either non-negative or unrestricted in sign (urs), i.e. able to assume
positive, negative or zero values.
A general linear programming problem (or LPP ) (in which all decision variables are
non-negative) can be stated as follows:
Find values of the decision variables x1 , x2 , x3 , . . . , xn which optimize (i.e. either maximize or
minimize) the linear objective function
z = c1 x1 + c2 x2 + c3 x3 + · · · + cn xn (1)
subject to the constraints:
a11 x1 + a12 x2 + a13 x3 + · · · + a1n xn (≤, =, ≥) b1
a21 x1 + a22 x2 + a23 x3 + · · · + a2n xn (≤, =, ≥) b2
a31 x1 + a32 x2 + a33 x3 + · · · + a3n xn (≤, =, ≥) b3 (2)
⋮
am1 x1 + am2 x2 + am3 x3 + · · · + amn xn (≤, =, ≥) bm
and the non-negativity restrictions
x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, . . . , xn ≥ 0 (3)

where a11 , a12 , . . . , amn , b1 , b2 , . . . , bm , c1 , c2 , . . . , cn are real numbers and in each constraint in (2)
exactly one of the relations ≤, =, ≥ holds.
An LPP can involve any finite number of variables and any finite number of constraints.
Definition 1.5: Solution of a Linear Programming Problem
A solution of the LPP given in Definition 1.4 is a sequence of n numbers k1 , k2 , k3 , . . . , kn such
that x1 = k1 , x2 = k2 , x3 = k3 , . . . , xn = kn satisfy each of the constraints in (2).
Any solution to the LPP which also satisfies the sign restrictions in (3) is called a feasible solution
to the LPP.

For a maximization problem, a feasible solution which maximizes the objective function of an LPP
is called an optimal feasible solution to the LPP. Similarly, for a minimization problem, a feasible
solution which minimizes the objective function of an LPP is called an optimal feasible solution to
the LPP.
Note: Some LPPs have no optimal feasible solution. When an optimal feasible solution to an LPP
exists, it may or may not be unique.
2. Formulation of an LPP
Guidelines on formulating a problem as an LPP:
Step 1: Identify the decision variables.
Step 2: Identify the objective function and express it as a linear function of the decision variables.
Step 3: Determine whether the objective function is to be maximized or minimized.
Step 4: Formulate the constraints as linear equations or inequalities.
Step 5: Determine the sign restrictions associated with each decision variable.
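As a sketch of these five steps, a hypothetical product-mix problem (invented data: two products with unit profits 30 and 50, and two resource constraints) can be recorded as plain data:

```python
# Steps 1-5 applied to a hypothetical problem: a workshop makes chairs (x1)
# and tables (x2); profit per unit is 30 and 50; labour and wood are limited.
lpp = {
    "sense": "max",                  # Step 3: maximize
    "objective": [30, 50],           # Step 2: z = 30x1 + 50x2
    "constraints": [
        ([2, 4], "<=", 40),          # Step 4: 2x1 + 4x2 <= 40 (labour hours)
        ([3, 2], "<=", 30),          # Step 4: 3x1 + 2x2 <= 30 (wood units)
    ],
    "sign": ["x1 >= 0", "x2 >= 0"],  # Step 5: non-negativity
}
print(lpp["sense"], lpp["objective"])
```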
3. Graphical method of solving an LPP (involving two variables)
An LPP with only two decision variables can be solved graphically. We label the two variables x1
and x2 and define two rectangular Cartesian coordinate axes viz. the x1 axis and x2 axis. By a
point in the x1 − x2 plane we mean an ordered pair (x1 , x2 ) of values of the decision variables x1
and x2 . By the feasible region of an LPP we mean the set of all points in the x1 − x2 plane that
satisfy all the constraints of the LPP and the sign restrictions associated with the decision
variables. Next we graph the feasible region.
To graph the feasible region, graph each constraint and the non-negativity restrictions (assuming
all decision variables are non-negative). The non-negativity restrictions are satisfied only by
points in the first quadrant. To graph a linear inequality constraint of the form ax1 + bx2 ≤ c or
ax1 + bx2 ≥ c, where a, b, c are constants, note that the set of points in the x1 − x2 plane that
satisfies a linear inequality, say ax1 + bx2 ≤ c includes (i) all points on the straight line L having
equation ax1 + bx2 = c and (ii) all points on exactly one side of this line L. In order to determine
the side of L for which the inequality ax1 + bx2 ≤ c is satisfied, choose any point P:(m,n) not lying
on L. If P:(m,n) satisfies the inequality ax1 + bx2 < c (i.e. if am + bn < c) then all points on the
same side of L as P will also satisfy this inequality and all points on the side of L that does not
contain P will satisfy the inequality ax1 + bx2 > c.
The feasible region consists of all points that satisfy all constraints and the non-negativity
restrictions.
After the feasible region has been graphed we can find the optimal solution in one of two possible
ways viz. the extreme point method and the isoprofit/isocost line method.
An optimal solution of a LPP which is of maximization (minimization) type, if it exists, is a point
in the feasible region for which the value of the objective function is the largest (smallest).
Definition 3.1: Convex Set
A set S of points in the x1 − x2 plane is called a convex set if the line segment joining any pair of
points in S is entirely contained in S. For example, the set of points lying on and within a circle or
a rectangle or a triangle is a convex set, whereas the set of points lying on and within an annulus
is a non-convex set.
Theorem 3.1:
The feasible region of an LPP (if nonempty) is a convex set.

Definition 3.2: Extreme Point of a Convex Set
A point P in a convex set S is called an extreme point of S if it does not lie on the line segment
joining two other distinct points of S that are different from P. In other words, every line segment
that lies entirely in S and contains P must have P as an end-point. For example, each vertex of a
rectangle is an extreme point of the rectangle, and each point on the circumference of a circle is an
extreme point of the corresponding closed disc.
Theorem 3.2:
The feasible region of an LPP has only a finite number of extreme points.
Theorem 3.3:
If an LPP has an optimal solution, then there is an extreme point of the feasible region at
which the objective function assumes the optimal value.
Note: This result is very important because it reduces the set of points that can yield an optimal
solution from the entire feasible region (which generally contains an infinite number of points) to
the set of extreme points (a finite set).
Finding the optimal solution by the Extreme Point Method:
Suppose an LPP has an optimal solution. To find the optimal solution by the extreme point
method perform the following steps:

Step 1: Graph the feasible region.


Step 2: Find the coordinates of each extreme point of the feasible region and calculate the value of
the objective function at that point.
Step 3: Determine the extreme point at which the objective function attains its optimal value. The
coordinates of this point are an optimal solution of the LPP.
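The three steps above can be sketched in code for the same hypothetical LPP used earlier (maximize z = 3x1 + 2x2 subject to x1 + x2 ≤ 4, x1 + 3x2 ≤ 6, x1, x2 ≥ 0): extreme points arise as feasible intersections of pairs of constraint boundary lines.

```python
from itertools import combinations

# Every constraint, including the sign restrictions, is stored as (a, b, c),
# meaning a*x1 + b*x2 <= c.
constraints = [(1, 1, 4), (1, 3, 6), (-1, 0, 0), (0, -1, 0)]

def intersection(c1, c2):
    """Intersection of the boundary lines a*x1 + b*x2 = c; None if parallel."""
    (a1, b1, d1), (a2, b2, d2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

def feasible(p, tol=1e-9):
    """True iff the point satisfies every constraint (with a small tolerance)."""
    return all(a * p[0] + b * p[1] <= c + tol for a, b, c in constraints)

# Step 2: candidate extreme points are feasible intersections of boundary lines.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersection(c1, c2)) is not None and feasible(p)]

# Step 3: evaluate the objective at each extreme point and keep the best.
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(round(3 * best[0] + 2 * best[1], 6))  # optimal value z = 12.0, at (4, 0)
```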

Definition 3.3: Isoprofit/Isocost line


Consider an LPP involving two decision variables x1 , x2 whose objective function is given by
z = c1 x1 + c2 x2 . In a maximization (minimization) type LPP an isoprofit (isocost) line is any
straight line that intersects the feasible region and has equation of the form c1 x1 + c2 x2 = k, where
k is a constant.
Note that for all points (x1 , x2 ) on any isoprofit (isocost) line having equation of the form
c1 x1 + c2 x2 = k, (where k is a constant), the value z of the objective function is same, namely k.
To draw an isoprofit/isocost line choose a suitable point P:(m,n) in the feasible region and find
the value of the objective function, say k, corresponding to this point, (where k = c1 m + c2 n),
then the straight line having equation c1 x1 + c2 x2 = k is an isoprofit/isocost line.
Finding the optimal solution by Isoprofit/Isocost line method:
To find the optimal solution of a maximization (minimization) problem by the isoprofit (isocost)
line method, draw an isoprofit (isocost) line and translate it parallel to itself in the direction
in which z increases (decreases). If the translated line eventually leaves the feasible region, then
the last point of the feasible region in contact with an isoprofit (isocost) line is
an optimal solution of the LPP. On the other hand, if an isoprofit (isocost) line never leaves the
feasible region, no matter how far it is translated, then we say that the LPP is unbounded. In this
case an optimal solution does not exist.
4. Nature of Solutions of an LPP. The four possible cases:
If any LPP with two decision variables is solved, one of the following cases will occur.
(a) The LPP has a unique optimal solution.
(b) The LPP has multiple optimal solutions, i.e. there exist multiple feasible solutions with the
same optimal value. In this case we say the LPP has multiple or alternative optimal
solutions. Any one of the optimal feasible solutions is to be determined in this situation, and
there is no preference between the optimal solutions unless stipulated in the problem.
Theorem 4.1:
If an LPP has two distinct optimal solutions it has an infinite number of optimal solutions.
Note: Graphically, this case would occur if an isoprofit/ isocost line, when it is on the point
of leaving the feasible region, coincides with an entire line segment of the feasible region.
(c) The LPP is infeasible i.e. it has no feasible solutions and thus no optimal feasible solution.
Note: Graphically, this case would occur if the feasible region is empty.
(d) The LPP is unbounded. This means that for a maximization (minimization) problem, the
z-value (i.e. value of the objective function corresponding to feasible solutions of the LPP)
can be made arbitrarily large (small). In this case the LPP has no optimal feasible solution.
Note: Graphically, this case would occur if an isoprofit (isocost) line never leaves the LPP’s
feasible region no matter how far it is translated in the direction of z increasing (decreasing).
Note:
(i) If an LPP involving any number of decision variables is solved, then likewise one of the above
four cases will occur.
(ii) Extreme Point Method vs Isoprofit/Isocost Line Method:
The feasible region of an LPP may be bounded or unbounded. If the feasible region is bounded
then it is a convex polygon. In this case an optimal feasible solution to the LPP exists and is
attained at an extreme point of this convex polygon, and either graphical method may be used to
find the optimal solution. However, in the extreme point method we need to find the coordinates
of each of the extreme points and the value of the objective function at each such point, whereas
in the isoprofit/isocost line method we need to find the coordinates of only the extreme point at
which the optimal solution is attained and the value of the objective function only at that point.
The isoprofit/isocost line method therefore requires fewer computations. On the other hand, if
the feasible region is unbounded, i.e. it is a convex region which is unbounded in some direction,
then an optimal feasible solution to the LPP may not exist, as the LPP may be unbounded. In
this case the isoprofit/isocost line method should be used, because if the LPP is unbounded then
an optimal feasible solution is not attained at any extreme point of the region of feasible
solutions.
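Theorem 4.1 can be illustrated numerically: by linearity of the objective function, every convex combination of two optimal solutions has the same objective value (objective and points below are invented for illustration):

```python
# Hypothetical objective z = 2x1 + 3x2, with two distinct optimal solutions
# P and Q (both give z = 6). By linearity, (1-t)P + tQ also gives z = 6
# for every t in [0, 1], so there are infinitely many optimal solutions.
def z(p):
    return 2 * p[0] + 3 * p[1]

P, Q = (3.0, 0.0), (0.0, 2.0)   # both have z = 6
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    point = ((1 - t) * P[0] + t * Q[0], (1 - t) * P[1] + t * Q[1])
    assert abs(z(point) - 6.0) < 1e-9
print("all convex combinations optimal")
```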

5. The Simplex Method for Solving an LPP


The graphical method of solving an LPP can be applied if the LPP involves only two decision
variables. The Simplex method on the other hand can be used to solve LPP’s with any number of
decision variables.
Definition 5.1: Slack and surplus variables
Consider a linear constraint of the form a11 x1 + a12 x2 + a13 x3 ≤ b1 . This constraint may be
converted into an equation by adding a new nonnegative variable x4 to the left hand side of the
inequality, where x4 is defined by x4 = b1 − (a11 x1 + a12 x2 + a13 x3 ) ≥ 0. Thus
a11 x1 + a12 x2 + a13 x3 + x4 = b1 . This new nonnegative variable x4 is called a slack variable.
Consider a linear constraint of the form a11 x1 + a12 x2 + a13 x3 ≥ b1 . This constraint may be
converted into an equation by subtracting a new nonnegative variable x4 from the left hand side
of the inequality, where x4 is defined by x4 = (a11 x1 + a12 x2 + a13 x3 ) − b1 ≥ 0. Thus
a11 x1 + a12 x2 + a13 x3 − x4 = b1 . This new nonnegative variable x4 is called a surplus variable.
The first step towards solving an LPP using the simplex method is to convert the LPP into
standard form.

Definition 5.2: Standard form of an LPP
An LPP is said to be in standard form if it has the following characteristics:
1. All constraints are in the form of equations.
The constraints that are in the form of ≤ type inequalities are converted into equations by introducing
slack variables whereas the constraints that are in the form of ≥ type inequalities are converted into
equations by introducing surplus variables.
2. The right hand side constant of each constraint is non-negative.
If in a constraint in the form of an inequality, the right hand side constant is negative, the constraint is
first converted into an equation by introducing a slack or surplus variable, then both sides of the equation
are multiplied by -1, to make the right hand side constant positive.
3. All variables are non-negative.
If a variable xi is unrestricted in sign then we replace xi by the difference of two non-negative variables,
i.e. xi = x′i − x′′i where x′i ≥ 0 and x′′i ≥ 0.
Here x′i = xi if xi ≥ 0 and x′i = 0 if xi < 0, while x′′i = 0 if xi ≥ 0 and x′′i = −xi if xi < 0.
4. The objective function is of maximization or minimization type.
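A minimal sketch of the conversion, for a hypothetical LPP with one ≤ constraint, one ≥ constraint and one unrestricted variable (all data invented):

```python
# Original problem: maximize z = 2x1 + x2 with
#   x1 + x2 <= 10   (add slack s1)
#   x1 - x2 >= 2    (subtract surplus s2)
#   x2 unrestricted (replace x2 = x2p - x2m, with x2p, x2m >= 0).
# Standard-form variable order: [x1, x2p, x2m, s1, s2]; each row is an equation.
rows = [
    # x1  x2p  x2m  s1  s2   rhs
    [1,   1,  -1,   1,  0,   10],   # x1 + x2p - x2m + s1 = 10
    [1,  -1,   1,   0, -1,    2],   # x1 - x2p + x2m - s2 = 2
]

def check(assignment):
    """Verify [x1, x2p, x2m, s1, s2] satisfies both standard-form equations."""
    return all(abs(sum(a * v for a, v in zip(row[:-1], assignment)) - row[-1]) < 1e-9
               for row in rows)

# x1 = 5, x2 = 1 (so x2p = 1, x2m = 0) forces s1 = 4 and s2 = 2:
print(check([5, 1, 0, 4, 2]))   # True
```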
Suppose that we have converted an LPP with m constraints into standard form and the standard form
involves n variables x1 , x2 , . . . , xn . Suppose the standard form is as follows:
Maximize (or minimize)
z = c1 x1 + c2 x2 + c3 x3 + · · · + cn xn (4)
subject to the constraints:
a11 x1 + a12 x2 + a13 x3 + · · · + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + · · · + a2n xn = b2
a31 x1 + a32 x2 + a33 x3 + · · · + a3n xn = b3 (5)
⋮
am1 x1 + am2 x2 + am3 x3 + · · · + amn xn = bm
and the non-negativity restrictions

xi ≥ 0 (i = 1, 2, . . . , n), bi ≥ 0 (i = 1, 2, . . . , m). (6)

If we define A to be the m × n matrix whose (i, j)-th entry is aij , X = (x1 , x2 , . . . , xn )T (the
column vector of variables) and B = (b1 , b2 , . . . , bm )T (the column vector of right hand side
constants), then the constraints in (5) can be represented by the matrix equation AX = B.
Definition 5.3: Basic solution to a system of linear equations
Consider a system AX = B of m linear equations in n variables where n ≥ m. If we set any (n − m)
variables equal to 0, then the original system reduces to a system of m equations in m unknowns. This
reduced system of equations may or may not have unique solution. If the reduced system has a unique
solution, then this unique solution, together with the variables set equal to 0, form a solution of the
original system AX=B, called a basic solution to AX=B. On the other hand if the reduced system does
not have a unique solution then no corresponding basic solution is obtained. In a basic solution, the
(n − m) variables set equal to zero are called the non-basic variables and the remaining m variables are
called the basic variables. A basic feasible solution to AX=B is a basic solution in which the values of all
the basic variables are non-negative.
Theorem 5.1:
The maximum number of basic solutions to a linear system of m equations in n variables (n ≥ m) is nCm = n!/(m!(n − m)!).
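Definition 5.3 and Theorem 5.1 can be sketched directly for a small system with m = 2 equations in n = 4 variables (invented data), by trying every choice of 2 basic columns and solving the resulting 2 × 2 system:

```python
from itertools import combinations

# System AX = B:  x1 + x2 + x3 = 4  and  x1 + 3x2 + x4 = 6.
A = [[1, 1, 1, 0],
     [1, 3, 0, 1]]
B = [4, 6]

def solve2(cols):
    """Solve the 2x2 system in the chosen basic columns; None if singular."""
    i, j = cols
    a, b = A[0][i], A[0][j]
    c, d = A[1][i], A[1][j]
    det = a * d - b * c
    if det == 0:
        return None          # reduced system has no unique solution
    return ((B[0] * d - b * B[1]) / det, (a * B[1] - c * B[0]) / det)

basic_solutions = []
for cols in combinations(range(4), 2):    # at most C(4, 2) = 6 basic solutions
    vals = solve2(cols)
    if vals is None:
        continue
    x = [0.0, 0.0, 0.0, 0.0]              # non-basic variables are set to 0
    x[cols[0]], x[cols[1]] = vals
    basic_solutions.append(x)

# A basic feasible solution has all basic variables non-negative.
feasible_count = sum(all(v >= 0 for v in x) for x in basic_solutions)
print(len(basic_solutions), feasible_count)   # 6 basic solutions, 4 feasible
```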

Theorem 5.2:
Every basic feasible solution of an LPP corresponds to the coordinates of an extreme point of the
feasible region, and the coordinates of every extreme point of the feasible region correspond to a
basic feasible solution of the LPP.
Theorem 5.3:
If an LPP has an optimal solution then there is a basic feasible solution that is optimal.

6. The Simplex Algorithm:


The simplex algorithm starts by finding an initial basic feasible solution to the given LPP and then
successively locates other basic feasible solutions yielding better values of the objective function until the
optimal solution is obtained or an indication is obtained that the LPP is unbounded. The simplex method
involves the following steps:
Step 1: Convert the LPP into standard form.
Step 2: Find a bfs (basic feasible solution) from the standard form. This is called the initial basic feasible
solution. If all the constraints in the original LPP are ≤ type inequalities, then the initial bfs is
readily obtained. However, if the original LPP involves constraints that are equations or ≥ type
inequalities, then the initial bfs may not be apparent. The technique to apply in these cases,
called the Big-M method, is discussed later.
Step 3: Determine whether the current bfs is optimal (using the optimality criterion) or if the LPP is
unbounded (using the unboundedness criterion).
Step 4: If the optimality criterion is not satisfied, i.e. the current bfs is not optimal, and the unboundedness
criterion also is not satisfied, then find an improved bfs, i.e. a bfs with an objective function value
which is better than or at least as good as the current value.
Step 5: Repeat Step 3 and Step 4 until either the optimality criterion is satisfied or the unboundedness
criterion is satisfied and then stop.

The Simplex Computational Procedure:


Consider an LPP of maximization type involving three decision variables and two constraints which are
inequalities of the ≤ type, with non-negative right hand side constants, of the following form:

Maximize z = c1 x1 + c2 x2 + c3 x3 (7)
subject to the constraints:

a11 x1 + a12 x2 + a13 x3 ≤ b1 (8a)


a21 x1 + a22 x2 + a23 x3 ≤ b2 (8b)

and subject to the non-negativity restrictions:

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0 (9)

Here b1 ≥ 0, b2 ≥ 0.
To solve the LPP by simplex method perform the following steps:

Step 1: Express the LPP in Standard Form:


For this we introduce slack variables x4 ≥ 0 and x5 ≥ 0 in constraints 8a and 8b respectively. We
also introduce the slack variables x4 , x5 in the objective function with zero coefficients. The resulting
LPP called the reformulated LPP is as follows:

Maximize z = c1 x1 + c2 x2 + c3 x3 + 0x4 + 0x5 (10)
subject to the constraints:

a11 x1 + a12 x2 + a13 x3 + x4 = b1 (11a)


a21 x1 + a22 x2 + a23 x3 + x5 = b2 (11b)

and the non-negativity restrictions:

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ≥ 0, x5 ≥ 0. (12)

Step 2: Construct the Initial Simplex Table:

Table 1: Initial simplex table


cj −→ c1 c2 c3 0 0
Basic Minimum
variables CB XB X1 X2 X3 X4 X5 Ratio
x4 0 b1 a11 a12 a13 1 0
x5 0 b2 a21 a22 a23 0 1
z=0 ∆1 ∆2 ∆3 0 0

The initial simplex table shown above is completed as follows:


i. Write the coefficients of the variables x1 , x2 , . . . , x5 in the objective function appearing in
Equation 10, namely c1 , c2 , c3 , 0, 0, in the first row headed by cj .
ii. Write down the coefficients of each variable xi (where i = 1, 2, . . . , 5) appearing in the constraint
equations 11a and 11b, in the column headed by Xi . Thus we write a11 , a21 in the column
headed by X1 , etc.
iii. We now determine the basic variables. The variable associated with the column in which the
unit vector e1 = (1, 0)T appears, namely x4 , is chosen as the first basic variable. The variable
associated with the column in which the unit vector e2 = (0, 1)T appears, namely x5 , is chosen
as the second basic variable.
Note: If in an LPP there are three constraint equations, then the variable associated with the
column in which the unit vector e1 = (1, 0, 0)T appears is chosen as the first basic variable, the
variable associated with the column in which the unit vector e2 = (0, 1, 0)T appears is chosen as
the second basic variable, and the variable associated with the column in which the unit vector
e3 = (0, 0, 1)T appears is chosen as the third basic variable.
iv. Next we enter the coefficients of the basic variables x4 and x5 appearing in the objective
function in Equation 10, namely 0 and 0, in the column headed by CB .
v. We next enter the right hand side constants appearing in the constraint equations 11a and 11b,
in the column headed by XB . Thus we write b1 , b2 in the column headed by XB .

Step 3: Completing The Index Row in a Simplex Table:
The last row in a simplex table is called the index row. In Table 1 the fifth row is the index row.
i. Computation of the z-value in the index row:
The value of the objective function (z-value) for the basic feasible solution corresponding to the
current simplex table is displayed in the index row, in the columns jointly headed by CB and
XB. It is the sum of the products of corresponding elements of the CB and XB columns.
ii. Computation of the net-evaluations (∆j ’s):
The element in the index row, in the column headed by Xj (where j=1,2,. . . ,5), will be denoted
by ∆j , and is called the net-evaluation corresponding to Xj and is defined by:
∆j = (Sum of the products of corresponding elements of the Xj and CB columns) − cj
Note that if the variable xr is a basic variable, then the net-evaluation ∆r is always zero.
iii. Finding the Basic Feasible Solution Corresponding to a Simplex Table:
In the basic feasible solution corresponding to a simplex table, the basic variables appear in the
“Basic Variables” column and the remaining variables are nonbasic variables. Thus in Table 1,
x4 and x5 are the basic variables whereas x1 , x2 , x3 are nonbasic variables. Values of the basic
variables, in the basic feasible solution corresponding to the current simplex table are obtained
from the XB column and all non-basic variables have value 0 in the solution. Thus the bfs
corresponding to Table 1 is x1 = 0, x2 = 0, x3 = 0, x4 = b1 , x5 = b2 . (Note that x1 , x2 , x3 are
non-basic variables in Table 1 and hence have value 0 in the corresponding bfs. x4 is a basic
variable. The value of x4 is b1 , which is the value in the XB column in the row headed by x4 .
Similarly the value of the basic variable x5 is b2 ).
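The computations in Step 3 can be sketched numerically for a hypothetical reformulated LPP, maximize z = 3x1 + 2x2 + 0x4 + 0x5 subject to x1 + x2 + x4 = 4 and x1 + 3x2 + x5 = 6, where x4 and x5 are the slack (and initial basic) variables:

```python
c  = [3, 2, 0, 0]                            # cj row for x1, x2, x4, x5
CB = [0, 0]                                  # coefficients of basics x4, x5
XB = [4, 6]                                  # right hand side column
columns = [[1, 1], [1, 3], [1, 0], [0, 1]]   # the Xj columns of the table

# Step 3(i): z-value = sum of products of the CB and XB columns.
z = sum(cb * xb for cb, xb in zip(CB, XB))

# Step 3(ii): net-evaluation delta_j = (CB . Xj column) - cj.
delta = [sum(cb * a for cb, a in zip(CB, col)) - cj
         for col, cj in zip(columns, c)]

print(z, delta)   # 0 [-3, -2, 0, 0]; the basic variables x4, x5 give delta = 0
```

Note that the bfs read off this table is x1 = 0, x2 = 0, x4 = 4, x5 = 6, matching Step 3(iii).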
Step 4: Optimality Test:
If the LPP is of maximization type then the optimality criterion is as follows:
i. If all ∆j ≥ 0 then the current solution is optimal. Terminate the simplex algorithm. The
optimal value of the objective function is the value of z in the current table.
ii. If at least one ∆j < 0 the current solution is not optimal. Identify the most negative ∆j , say ∆r .
Ties, if present, may be broken arbitrarily. The column corresponding to ∆r is called the pivot
column.
An LPP of minimization type may be solved using simplex method in one of two ways.
Method I: The optimality criterion is slightly modified for minimization type problems as
follows:
If all ∆j ≤ 0 then the current solution is optimal. Terminate the simplex algorithm. The
optimal value of the objective function is the value of z in the current table. Otherwise, if
at least one ∆j > 0 the current solution is not optimal. Identify the most positive ∆j , say ∆r .
Ties, if present, may be broken arbitrarily. The column corresponding to ∆r is called the pivot
column. All other steps of the simplex method remain unaltered.
Method II: The minimization problem is converted into an equivalent maximization type
problem:
For this we apply the rule that Minimum z = −Maximum (−z).
Suppose that in an LPP the objective function is Minimize z = 2x1 − 3x2 . In this method, we
solve the equivalent problem in which the objective function is Maximize z′ = −(2x1 − 3x2 ),
where z′ = −z. This change does not affect the optimal solution of the problem. Thus, if the
optimal solution of this maximization problem is x1 = 3, x2 = 1 and Max z′ = −3, then the
optimal solution of the original minimization problem is x1 = 3, x2 = 1 and
Min z = −Max z′ = −(−3) = 3.

Step 5: Unboundedness Criterion:
Consider a maximization type LPP. If all entries in the column headed by Xr , in which the
most negative net-evaluation ∆r occurs, are either negative or zero, the LPP is unbounded. In
fact, if any ∆j < 0 (not necessarily the most negative) and all entries in the corresponding
column headed by Xj are either negative or zero, this also indicates that the LPP is unbounded.
Similarly, in a minimization type LPP being solved by Method I (described in Step 4), if all
entries in the column headed by Xr , in which the most positive net-evaluation ∆r occurs, are
either negative or zero, the LPP is unbounded.
Step 6: Finding Improved Basic Feasible Solution:
If neither the optimality criterion nor the unboundedness criterion are satisfied in the current
simplex table, we proceed to find an improved basic feasible solution. This is done by altering
the set of basic variables for the current simplex table, by removing exactly one judiciously
selected basic variable (called the outgoing variable) and replacing it by some appropriately
selected currently non-basic variable (called the incoming variable). The incoming and outgoing
variables are determined as follows:
Incoming variable: The variable xr appearing in the pivot column i.e. in the column in which
the net-evaluation ∆r (defined in Step 4) occurs is the incoming variable.
Outgoing variable: For each positive element, say p, in the pivot column determine the ratio of
the corresponding element in the XB column (divided) by p. This ratio is entered in the
minimum ratio column, in the row in which the element p occurs. Note that this ratio is not
computed for negative or zero elements in the pivot column. After this ratio has been computed
for all the positive elements in the pivot column, identify the smallest number in the minimum
ratio column. Ties, if present, may be broken arbitrarily. Suppose the smallest entry in the
minimum ratio column occurs in row k. Then row k is called the pivot row. The basic variable
appearing in the pivot row is the outgoing variable. The element at the intersection of the pivot
row and pivot column, is called the pivot element.
By transforming the current simplex table (say Table n-1) we form the next simplex table which will
yield an improved basic feasible solution (Table n) as follows:
We call the current simplex table (Table n-1).
i. Enter the new set of basic variables in the column of “Basic Variables” in the new simplex table
(Table n). Replace the outgoing basic variable by the incoming non-basic variable and keep all
other basic variables unchanged.
ii. Enter the coefficients of these new basic variables in the objective function, in the column CB of
Table n.
iii. Divide the pivot row in Table n-1 by its pivot element and enter the resulting numbers in
corresponding positions of Table n. This operation is called updating the pivot row in Table n-1.
iv. For each non-pivot row in Table n-1 (including the index row), say row i, subtract an appropriate
multiple of the pivot row in Table n-1 from row i, so that the element in the pivot column of
row i is reduced to zero. Write the resulting modified elements of row i in corresponding
positions of Table n. This operation is called updating the non-pivot row i in Table n-1. The
index row of the new table may also be formed as in Step 3. This step and the previous step
constitute a process called pivoting.
v. Compute the value of z and ∆j as in Step 3(i) and 3(ii) and repeat from step 3 until the
optimality criterion or the unboundedness criterion is satisfied.
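One pivoting step (Steps 4 and 6 above) can be sketched on the small table from the running hypothetical example: rows hold the columns [X1, X2, X4, X5 | XB], with x4, x5 initially basic and ∆ = (−3, −2, 0, 0):

```python
table = [
    [1.0, 1.0, 1.0, 0.0, 4.0],   # x4 row
    [1.0, 3.0, 0.0, 1.0, 6.0],   # x5 row
]
delta = [-3.0, -2.0, 0.0, 0.0]

pivot_col = delta.index(min(delta))      # most negative delta: x1 enters
ratios = [row[-1] / row[pivot_col] if row[pivot_col] > 0 else float("inf")
          for row in table]              # ratio only for positive entries
pivot_row = ratios.index(min(ratios))    # minimum ratio: x4 leaves

p = table[pivot_row][pivot_col]          # pivot element
table[pivot_row] = [v / p for v in table[pivot_row]]        # update pivot row
for i, row in enumerate(table):
    if i != pivot_row:                   # update each non-pivot row
        f = row[pivot_col]
        table[i] = [v - f * w for v, w in zip(row, table[pivot_row])]

print(table[0][-1], table[1][-1])   # new basic values: x1 = 4.0, x5 = 2.0
```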
Step 7: State the optimal solution and the optimal value of the objective function, or give reasons why an
optimal solution does not exist.

Note:
(i) LPPs with alternative optimal solutions:
If in the optimal simplex table there is a non-basic variable whose net-evaluation is zero then, in general,
it indicates the LPP has alternative optimal solutions.
(ii) Degeneracy:
An LPP is said to be degenerate if it has at least one basic feasible solution in which one or more basic
variables have zero value. On the other hand, if in every basic feasible solution of an LPP all basic
variables have positive value, then the LPP is called non-degenerate.
If simplex method is applied to solve a maximization type (minimization type) non-degenerate LPP then
in every iteration the improved basic feasible solution will yield a value of z strictly larger (smaller) than
the previously obtained value and it is impossible to encounter the same basic feasible solution twice in
the solution. However if the LPP is degenerate, then it is possible to encounter the same basic feasible
solution twice in the simplex solution. This occurrence is called cycling. If cycling occurs then the simplex
method will loop or cycle forever among a set of basic feasible solutions, each producing the same value of
the objective function, and never reach the optimum solution. Degeneracy per se does not present any
problem in the solution of an LPP; it is only when cycling occurs that a serious problem arises.
Fortunately cycling rarely occurs in practice, although many real life linear programming problems are
degenerate. The simplex method can be modified to ensure that cycling never occurs, but we will not
discuss this modification.
Summary of Simplex Algorithm:

Step 1: Convert the LPP to standard form (see Definition 5.2 and the worked example above). Proceed to
Step 2.
Step 2: Complete the initial simplex table as follows:
i. Enter the coefficients of the variables x1 , x2 , . . . in the objective function, in the first row headed
by cj .
ii. Enter the coefficients of the variables x1 , x2 , . . . in the constraint equations, in the columns
headed by X1 , X2 , . . . respectively.
iii. Complete the “Basic Variables” column. The variable appearing in the column in which the
unit vector e1 occurs is chosen as the first basic variable, the variable appearing in the column in
which the unit vector e2 occurs is chosen as the second basic variable, etc.
iv. Complete the CB column (see Step 2 part iv above).
v. Complete the XB column (see Step 2 part v above). Proceed to Step 3.
Step 3: Complete the index row in the simplex table as follows:
i. Find the z-value (i.e. the value of the objective function) for the basic feasible solution
corresponding to the current simplex table (see Step 3 part i above).
ii. Find the net-evaluations for the variables x1 , x2 , . . . (see Step 3 part ii above). Proceed to
Step 4.
Step 4: Test if the optimality criterion is satisfied for the current simplex table (see Step 4 above). If
satisfied, write the optimal basic feasible solution (see Step 3 part iii above) and the optimal value
of z. Terminate the simplex algorithm. If the optimality criterion is not satisfied proceed to Step 5.
Step 5: Test if the unboundedness criterion is satisfied for the current simplex table (see Step 5 above).
If satisfied, write that the given LPP is unbounded and hence has no optimal basic feasible solution.
Terminate the simplex algorithm. If the unboundedness criterion is not satisfied proceed to Step 6.
Step 6: Identify the pivot column (mark with an up-arrow) and the incoming variable (see Step 4 above).
Proceed to Step 7.
Step 7: Complete the minimum ratio column, identify the pivot row (mark with a left-arrow), the pivot
element (encircle it) and the outgoing variable (see Step 6 above). Proceed to Step 8.
Step 8: Form the next simplex table as follows:
i. Complete the “Basic Variables” column in the new simplex table by replacing the outgoing basic
variable by the incoming non-basic variable and keeping all other basic variables unchanged.

ii. Complete the CB column. (See Step 2 part iv on page 7.)
iii. Update the pivot row of the last complete simplex table and write the resulting elements in the
corresponding positions in the new table. (See Step 6 part iii on page 9.)
iv. Update the non-pivot rows of the last complete simplex table (including the index row) and
write the resulting elements in the corresponding positions in the new table. (See Step 6 part iv
on page 9.) The index row may also be completed as in Step 3.
v. Repeat from Step 4.
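As an illustration, the loop of Steps 2–8 can be sketched in code for the simplest case: a maximization LPP whose constraints are all ≤ type with non-negative right hand sides, so that the slack variables supply the initial basis. The function name and the two-variable example problem below are hypothetical, introduced only for illustration:

```python
# A minimal sketch of the tableau simplex method, assuming a maximization
# LPP with all constraints of <= type and non-negative right hand sides.

def simplex_max(c, A, b):
    """Maximize c.x subject to A.x <= b, x >= 0, by the tableau method."""
    m, n = len(A), len(c)
    # Steps 1-2: standard form; slack variables give the unit vectors.
    table = [A[i] + [1 if j == i else 0 for j in range(m)] + [b[i]]
             for i in range(m)]
    cj = c + [0] * m                 # objective coefficients; slacks cost 0
    basis = list(range(n, n + m))    # slack variables form the initial basis
    while True:
        # Step 3: net evaluations z_j - c_j for every column.
        net = [sum(cj[basis[i]] * table[i][j] for i in range(m)) - cj[j]
               for j in range(n + m)]
        # Step 4: optimality criterion (maximization): all z_j - c_j >= 0.
        if all(v >= 0 for v in net):
            x = [0.0] * (n + m)
            for i in range(m):
                x[basis[i]] = table[i][-1]
            return x[:n], sum(cj[j] * x[j] for j in range(n + m))
        # Step 6: pivot column = most negative net evaluation.
        p = min(range(n + m), key=lambda j: net[j])
        # Step 5: unboundedness criterion: no positive entry in pivot column.
        if all(table[i][p] <= 0 for i in range(m)):
            raise ValueError("LPP is unbounded")
        # Step 7: minimum ratio test selects the pivot row.
        r = min((i for i in range(m) if table[i][p] > 0),
                key=lambda i: table[i][-1] / table[i][p])
        # Step 8: update the pivot row, then the non-pivot rows.
        piv = table[r][p]
        table[r] = [v / piv for v in table[r]]
        for i in range(m):
            if i != r:
                f = table[i][p]
                table[i] = [table[i][j] - f * table[r][j]
                            for j in range(n + m + 1)]
        basis[r] = p

# Hypothetical example: maximize z = 3x1 + 5x2
# subject to x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18, x1, x2 >= 0.
x, z = simplex_max([3, 5], [[1, 0], [0, 2], [3, 2]], [4, 12, 18])
# optimal: x1 = 2, x2 = 6, z = 36
```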

7. Charnes Big M method


If all the constraints of an LPP are ≤ type inequalities and the right hand side of each constraint is
non-negative, then we readily obtain an initial basic feasible solution using the slack variables as the basic
variables. However, if the LPP has ≥ or = type constraints then an initial basic feasible solution may not
be available, because one or more unit vectors may be missing from the initial simplex table. In such cases
we use the Big M method to solve the problem.
The Big M method involves the following steps:

(a) Convert the LPP into standard form.


(b) If one or more unit vectors are missing from the initial simplex table, introduce the minimum number
of additional non-negative variables in appropriate constraints so that the missing unit vectors are
obtained. These variables are called artificial variables, since they have no meaning in the original
problem and are merely introduced to obtain an initial basic feasible solution.
(c) Let M denote a very large positive number. If the LPP is of maximization type then add each
artificial variable introduced to the objective function with coefficient (-M) and if the LPP is of
minimization type then add each artificial variable introduced to the objective function with
coefficient M.
(d) Apply the simplex method to solve the resulting LPP. If an artificial variable is removed from the set
of basic variables, all calculations in the column corresponding to that artificial variable are omitted.
Suppose the LPP does not have an unbounded solution and the optimality criterion is satisfied; then the
following three cases can arise:
i. None of the basic variables in the final simplex table are artificial variables. In this case the
current solution is the optimal basic feasible solution of the original LPP.
ii. One or more basic variables in the final table are artificial variables and at least one artificial
variable has a positive value in the solution. In this case the LPP does not have any feasible
solution.
iii. One or more basic variables in the final table are artificial variables and all artificial variables
have zero value in the final solution. In this case the current solution is the optimal basic
feasible solution of the original LPP but the optimal basic feasible solution is degenerate.
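Step (b) of the method is purely mechanical and can be sketched at the data level. The function below is an illustrative sketch (the name and example data are not from the text): it scans the standard-form constraint matrix for unit columns and appends an artificial variable, with coefficient +M for a minimization problem, to every row whose unit vector is missing.

```python
# A sketch of the Big M set-up, assuming a minimization LPP already in
# standard form (equality constraints A.x = b with b >= 0).

M = 1e6  # "very large" penalty; in hand computation M is kept symbolic

def big_m_setup(c, A):
    """Return (c, A, artificial-variable indices) after adding artificials."""
    m, n = len(A), len(c)
    # Rows that already own a unit column (e.g. contributed by a slack).
    covered = set()
    for j in range(n):
        col = [A[i][j] for i in range(m)]
        if col.count(0) == m - 1 and 1 in col:
            covered.add(col.index(1))
    c_aug = list(c)
    A_aug = [row[:] for row in A]
    artificial = []
    for i in range(m):
        if i not in covered:
            # Introduce an artificial variable for row i with coefficient +M
            # (for a maximization problem the coefficient would be -M).
            c_aug.append(M)
            for k in range(m):
                A_aug[k].append(1 if k == i else 0)
            artificial.append(len(c_aug) - 1)
    return c_aug, A_aug, artificial

# Hypothetical example (minimization), standard form of:
# 3x1 + x2 = 3, 4x1 + 3x2 - s1 = 6, x1 + 2x2 + s2 = 4.
c_aug, A_aug, art = big_m_setup([4, 1, 0, 0],
                                [[3, 1, 0, 0], [4, 3, -1, 0], [1, 2, 0, 1]])
# Artificials go to rows 1 and 2 (the = row and the >= row); row 3 already
# has the unit vector supplied by the slack s2.
```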

8. Duality
With every linear programming problem we can associate another linear programming problem, called its
dual, such that solving either problem of the pair by the simplex method automatically yields the solution
of the other. Thus whenever a LPP is solved by the simplex method we actually obtain the solutions of two
LPPs, viz. the given LPP and its dual. When considering the dual of a LPP, we refer to the original LPP
as the primal.

Definition 8.1: Normal Maximization Problem and Normal Minimization Problem
A normal maximization problem is a problem of the form:
Maximize:
z = c1 x1 + c2 x2 + c3 x3 + · · · + cn xn
subject to the constraints:

a11 x1 + a12 x2 + a13 x3 + · · · + a1n xn ≤ b1
a21 x1 + a22 x2 + a23 x3 + · · · + a2n xn ≤ b2
a31 x1 + a32 x2 + a33 x3 + · · · + a3n xn ≤ b3 (13)
⋮
am1 x1 + am2 x2 + am3 x3 + · · · + amn xn ≤ bm

and the non-negativity restrictions

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, . . . , xn ≥ 0

where a11 , a12 , . . . , amn , b1 , b2 , . . . , bm , c1 , c2 , c3 , . . . , cn are real numbers. Thus in a normal maximization
problem all variables are non-negative and all constraints are inequalities of the ≤ type. The right hand
side constant in each constraint may be positive, negative or zero.
The dual of the normal maximization type problem (13) is the following LPP:
Minimize:
z = b1 y1 + b2 y2 + b3 y3 + · · · + bm ym
subject to the constraints:

a11 y1 + a21 y2 + a31 y3 + · · · + am1 ym ≥ c1
a12 y1 + a22 y2 + a32 y3 + · · · + am2 ym ≥ c2
a13 y1 + a23 y2 + a33 y3 + · · · + am3 ym ≥ c3 (14)
⋮
a1n y1 + a2n y2 + a3n y3 + · · · + amn ym ≥ cn
and the non-negativity restrictions

y1 ≥ 0, y2 ≥ 0, y3 ≥ 0, . . . , ym ≥ 0

A minimization problem such as problem (14), in which all variables are non-negative and all constraints
are inequalities of the ≥ type, is called a normal minimization type problem. The right hand side constant
in each constraint may be positive, negative or zero. If the primal problem is the normal minimization
type problem (14) then its dual is the normal maximization type problem (13).
Comparison of Primal and Dual Problems:
(a) If the primal problem is of maximization type then the dual problem is of minimization type and
vice versa.
(b) The coefficients of the decision variables in the objective function of the primal problem are the
corresponding right hand side constants of the dual constraints. Similarly, the right hand side
constants of the primal constraints are the corresponding coefficients of the decision variables in the
objective function of the dual problem.
(c) The inequalities in the primal constraints are reversed in the dual constraints. If the ith primal
constraint is an equality constraint, then the dual variable yi is urs. If the ith primal decision
variable is urs then the ith dual constraint is an equality constraint.
(d) Each column of coefficients in the primal constraints is the corresponding row of coefficients in the
dual constraints. Thus the number of primal variables is equal to the number of dual
constraints.

(e) Each row of coefficients in the primal constraints is the corresponding column of coefficients in the
dual constraints. Thus there is one dual decision variable for each primal constraint.
(f) The dual of the dual is the primal. Thus any problem in a primal-dual pair may be considered as the
primal problem and the other as the dual problem.
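Rules (b), (d) and (e) together say that passing to the dual transposes the constraint matrix and swaps the roles of the objective coefficients and the right hand side constants. A minimal sketch for a normal maximization problem (the function name and example data are illustrative, not from the text):

```python
# For a normal maximization problem max c.x s.t. A.x <= b, x >= 0,
# the dual is min b.y s.t. (A transposed).y >= c, y >= 0.

def dual_of_normal_max(c, A, b):
    """Return (objective, constraint rows, right hand sides) of the dual."""
    At = [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]
    return b, At, c

# Hypothetical primal: max z = 3x1 + 5x2
# s.t. x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18, x1, x2 >= 0.
obj, rows, rhs = dual_of_normal_max([3, 5], [[1, 0], [0, 2], [3, 2]], [4, 12, 18])
# Dual: min w = 4y1 + 12y2 + 18y3
# s.t. y1 + 3y3 >= 3, 2y2 + 2y3 >= 5, y1, y2, y3 >= 0.
```

Applying the same data transformation twice restores the original (c, A, b), which is rule (f): the dual of the dual is the primal.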
Finding the Dual of a LPP:
If the primal problem is a maximization problem then express the problem as a normal
maximization type problem of the type (13) as follows:
i. Multiply any ≥ constraint throughout by −1 to convert it into a ≤ type constraint.
ii. Replace any equality constraint by two inequality constraints, one ≥ type constraint and one ≤
type constraint. Then convert the ≥ type constraint into a ≤ type constraint by multiplying
both sides by −1.
iii. Replace any urs variable xi by xi = x′i − x′′i , where x′i ≥ 0 and x′′i ≥ 0.
Then its dual is the normal minimization problem (14).
If the primal problem is a minimization problem then express the problem as a normal minimization
type problem of the type (14) as follows:
i. Multiply any ≤ constraint throughout by −1 to convert it into a ≥ type constraint.
ii. Replace any equality constraint by two inequality constraints, one ≥ type constraint and one ≤
type constraint. Then convert the ≤ type constraint into a ≥ type constraint by multiplying
both sides by −1.
iii. Replace any urs variable xi by xi = x′i − x′′i , where x′i ≥ 0 and x′′i ≥ 0.
Then its dual is the normal maximization problem (13).
Theorem 8.1: Duality Theorem
The following are the only possible relations between the primal and the dual problems:
(a) If one of the problems in a primal-dual pair has feasible solutions and a bounded objective function,
and therefore has an optimal feasible solution, then so does the other problem. In this case the primal
and dual problems have equal optimal objective function values.
(b) If one of the problems in a primal-dual pair has feasible solutions and an unbounded objective
function and therefore has no optimal feasible solution, then the other problem has no feasible
solutions.
(c) If one of the problems in a primal-dual pair has no feasible solutions then the other problem has
either no feasible solutions or an unbounded objective function.
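Case (a) can be checked numerically on a small primal-dual pair. In the sketch below (all data hypothetical), the primal max 3x1 + 5x2 subject to x1 ≤ 4, 2x2 ≤ 12, 3x1 + 2x2 ≤ 18 is paired with its dual min 4y1 + 12y2 + 18y3 subject to y1 + 3y3 ≥ 3, 2y2 + 2y3 ≥ 5. Verifying that the quoted solutions are feasible and have equal objective values certifies, by the theorem, that both are optimal:

```python
# Numeric check of case (a) of the duality theorem on a hypothetical pair.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A, b, c = [[1, 0], [0, 2], [3, 2]], [4, 12, 18], [3, 5]
x = [2, 6]          # claimed primal optimum (assumed, then verified below)
y = [0, 1.5, 1]     # claimed dual optimum (assumed, then verified below)

def col(j):
    return [A[i][j] for i in range(3)]

primal_feasible = all(dot(row, x) <= bi for row, bi in zip(A, b)) and min(x) >= 0
dual_feasible = all(dot(col(j), y) >= c[j] for j in range(2)) and min(y) >= 0
# Both feasible with equal objective values (z = w = 36), so by the
# duality theorem each solution is optimal for its own problem.
assert primal_feasible and dual_feasible and dot(c, x) == dot(b, y) == 36
```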
How to Find the Optimal Dual Solution From the Index Row of the Optimal Simplex Table:
Case I: The primal problem is a maximization problem:
(a) If the ith constraint in the primal problem is a ≤ type inequality then the optimal value of the dual
variable yi is equal to the net evaluation of the slack variable appearing in the ith primal constraint
in the optimal simplex table.
(b) If the ith constraint in the primal problem is a ≥ type inequality then the optimal value of the dual
variable yi is equal to the negative of the net evaluation of the surplus variable appearing in the ith
primal constraint in the optimal simplex table.
(c) If the ith constraint in the primal problem is an equation then the optimal value of the dual variable
yi is equal to the net evaluation of the artificial variable appearing in the ith primal constraint in the
optimal simplex table minus M.

Case II: The primal problem is a minimization problem:


(a) Same as Case I part (a).
(b) Same as Case I part (b).
(c) If the ith constraint in the primal problem is an equation then the optimal value of the dual variable
yi is equal to the net evaluation of the artificial variable appearing in the ith primal constraint in the
optimal simplex table plus M.
