Linear Programming - Graphical Method

The Graphical Method: An Example

Consider the following linear program:


Maximize 4x1 + 3x2
Subject to:
    2x1 + 3x2 ≤ 6    (1)
   -3x1 + 2x2 ≤ 3    (2)
          2x2 ≤ 5    (3)
    2x1 +  x2 ≤ 4    (4)
      x1, x2 ≥ 0 ,
where, for ease of reference, the four functional constraints have been labelled as (1), (2),
(3), and (4). Our goal is to produce a pair of values for x1 and x2 that (i) satisfies all
constraints and (ii) has the greatest objective-function value.
A pair of specific values for (x1, x2) is said to be a feasible solution if it satisfies all
constraints. For example, (x1, x2) = (0, 0) and (x1, x2) = (1, 1) are feasible solutions, while
(x1, x2) = (-1, 1) and (x1, x2) = (1, 2) are not. Note that the objective-function value
(OFV) associated with solution (0, 0) is equal to 0 (= 4·0 + 3·0) and that the OFV for
solution (1, 1) is equal to 7 (= 4·1 + 3·1). Hence, (1, 1) is the better of these two feasible
solutions. Since it is impossible to generate and compare all feasible solutions one by one,
we must develop a systematic method to identify the best, or optimal, solution.
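As an aside, this feasibility check and objective-function evaluation are easy to mechanize.
The short Python sketch below (ours, not part of the original development; the function
names is_feasible and ofv are our own) tests the four sample points against the program
above.

    # Minimal sketch: feasibility and OFV for the linear program above.
    def is_feasible(x1, x2):
        return (2*x1 + 3*x2 <= 6 and      # constraint (1)
                -3*x1 + 2*x2 <= 3 and     # constraint (2)
                2*x2 <= 5 and             # constraint (3)
                2*x1 + x2 <= 4 and        # constraint (4)
                x1 >= 0 and x2 >= 0)      # nonnegativity

    def ofv(x1, x2):
        return 4*x1 + 3*x2

    for point in [(0, 0), (1, 1), (-1, 1), (1, 2)]:
        print(point, is_feasible(*point), ofv(*point))
    # (0, 0) and (1, 1) are feasible, with OFVs 0 and 7; the other two points are not feasible.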
The basic idea behind the graphical method is that each pair of values (x1, x2) can be
represented as a point in the two-dimensional coordinate system. With such a representation,
we will be able to visualize the set of all feasible solutions as a graphical region, called the
feasible region or the feasible set, and then to identify the optimal solution (assuming it
exists).
To construct the feasible region, we examine the constraints one at a time, starting with
constraint (1).
Suppose first that the inequality in constraint (1) is replaced by an equality. Then, it is
easily seen that the set of points satisfying 2x1 + 3x2 = 6 corresponds to a straight line
in the two-dimensional plane. To plot this line, we need to identify the coordinates of two
distinct points on it. The standard approach for doing this is to set, in turn, one of the
two variables to the value 0 and then solve the resulting single equation in one unknown to
obtain the corresponding value of the other variable. Thus, if we let x1 = 0, then x2 must
equal 2, since x2 = (6 - 2x1)/3. This yields the point (0, 2). Similarly, by setting x2 = 0,
we obtain the point (3, 0) (since x1 = (6 - 3x2)/2). The resulting line that passes through
these two points is shown in Figure LP-1. We shall refer to this line as line (1).
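The same intercept calculation can, of course, be done in code; a tiny sketch (ours):

    # Sketch: the two axis intercepts of line (1), 2*x1 + 3*x2 = 6.
    a1, a2, b = 2.0, 3.0, 6.0
    print("x1 = 0 gives x2 =", b / a2)   # the point (0, 2)
    print("x2 = 0 gives x1 =", b / a1)   # the point (3, 0)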
Observe that line (1) cuts the plane into two half-planes. All points on one side of this line
(including the line itself) satisfy 2x1 + 3x2 ≤ 6; whereas on the other side, 2x1 + 3x2 ≥ 6.
Note that (x1, x2) = (0, 0) satisfies 2x1 + 3x2 ≤ 6, and that (0, 0) lies in the lower-left
direction of line (1). Therefore, we are only interested in the lower-left portion of the plane.
(What if (x1, x2) = (0, 0) happens to satisfy a given constraint as an equality?) This is also
shown in Figure LP-1. In this figure, next to line (1), we have also placed two arrows that
point toward the lower-left direction. This is intended to indicate what will be called the
feasible direction. In summary, after discarding points that violate constraint (1), we are
left with the half-plane that sits at the lower-left side of the line defined by 2x1 + 3x2 = 6.
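This side test is also easy to express in code; a small sketch (ours), using (0, 0) as the test
point for constraint (1):

    # Sketch: decide which half-plane of line (1) is the feasible one.
    if 2*0 + 3*0 <= 6:
        print("(0, 0) satisfies constraint (1): keep the side containing the origin")
    else:
        print("keep the side not containing the origin")
    # If the line happens to pass through (0, 0), a different test point must be chosen.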
Thus, each constraint makes a cut on the two-dimensional plane, and repeating this
process (do this yourself) for the remaining five constraints (the lines x1 = 0 and x2 = 0
correspond to the x2-axis and the x1-axis, respectively) yields the feasible set. This is
shown in Figure LP-2, where we have shaded the feasible region. Note that constraint (3)
is redundant, in that its removal has no impact on the shape of the feasible set.
The next question is: Which point within the feasible set is optimal? To answer this, we
now look at the objective function. Observe that for any given constant c, the set of points
satisfying 4x1 + 3x2 = c is again a straight line; and, more importantly, that by varying
c, we can generate a family of lines (one for each c) with the same slope (i.e., members of
this family are parallel). Hence, we can choose an arbitrary c, draw its corresponding line
4x1 + 3x2 = c, and then visualize the entire family by sliding this line in a parallel manner.
In Figure LP-3, we show two such lines: 4x1 + 3x2 = c1 and 4x1 + 3x2 = c2, where c1 = 12
and c2 = 16. Since neither line intersects the feasible region, points on these two lines are
not feasible. Note, however, that the line with the smaller c, namely c1, is closer to the
feasible set. Therefore, if we decrease c further from c1, then at some point the resulting
parallel lines will eventually touch the feasible region at exactly one point. More precisely,
this takes place when the family of objective-function lines hits the intersection of lines (1)
and (4), a corner point (or an extreme point) of the feasible set. It is now clear that this
corner point, marked as D in Figure LP-3, must be optimal, since we would need to lower c
still further to reach the other points within the feasible set. Finally, we note that point D
has the coordinates (3/2, 1) (Why?) and therefore it has an objective-function value of 9
(= 4·(3/2) + 3·1). This completes the solution of the problem.
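As an independent cross-check (not part of the original argument), the same optimum can
be recovered with an off-the-shelf LP solver. The sketch below assumes SciPy is available
and recasts the maximization as the minimization of the negated objective, which is what
scipy.optimize.linprog expects.

    # Sketch: verify the graphical solution with a solver.
    from scipy.optimize import linprog

    c = [-4, -3]                                   # minimize -(4*x1 + 3*x2)
    A_ub = [[2, 3], [-3, 2], [0, 2], [2, 1]]       # constraints (1)-(4)
    b_ub = [6, 3, 5, 4]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)                         # approximately [1.5, 1.0] and 9.0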
More generally, observe that regardless of the slope of the objective function, the above
process will always lead to an optimal corner-point solution. (This is true even when the
objective-function line first touches the feasible set at an entire edge; in such a case, we
actually have two optimal corner-point solutions, at the end points of that edge.) This
implies that we can be assured that an optimal solution (whenever it exists) can always be
found at one of the corner points of the feasible set. This fact is remarkable, as it says that
we have managed to reduce the task of finding an optimal solution within the entire feasible
set, which is a set containing an uncountable number of points, to that of finding an optimal
solution within the set of corner points, which is a set containing a finite (why?) number of
points. In this particular example, there are only five corner-point solutions, namely points
A, B, C, D, and E (see Figure LP-3). Therefore, the reduction in the search effort is truly
significant.
The above observation naturally suggests the following simple two-step procedure for solving
a linear program:
Procedure Search
Step 1: Identify the coordinates of all corner points of the feasible set.
Step 2: Evaluate the objective function at all of these corner points and pick the best
solution.
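In code, Step 2 is a one-line comparison once the corner points are known. A small sketch
(ours), using only the origin and the point D = (3/2, 1) identified above; the remaining
corner points would be appended to the same list:

    # Sketch of Step 2: evaluate the objective at corner points and pick the best.
    corners = [(0.0, 0.0), (1.5, 1.0)]             # extend with the other corner points
    best = max(corners, key=lambda p: 4*p[0] + 3*p[1])
    print(best, 4*best[0] + 3*best[1])             # (1.5, 1.0) with value 9.0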
In applying Step 1 to the particular problem above, the location of the corner points can be
identified via the graph. For problems with three decision variables, one can still attempt
to draw three-dimensional graphs. With four or more variables, it becomes impossible to
implement the procedure. Therefore, in order for this search procedure to work, we must
develop a general method for determining the coordinates of the corner points without
relying on a graphical representation of the feasible set; and this will be our next task.
In Figure LP-3, the coordinates of all five corner points have been explicitly specified. For
example, the coordinates of the optimal corner point, point D, are given as (3/2, 1). How does
one determine these coordinates? Of course, we can attempt to determine the coordinates
of a given point on a graph by visual inspection, but in general, this cannot be expected
to yield accurate answers.
More constructively, recall that point D is sitting at the intersection of lines (1) and (4).
Thus, we can algebraically solve the system of two defining equations,

    2x1 + 3x2 = 6    (1)
    2x1 +  x2 = 4 ,  (4)

to obtain the coordinates of this intersection. (In other words, the coordinates of this
intersection must satisfy both equation (1) and equation (4).) To do this, subtract equation
(4) from equation (1) to yield 2x2 = 2, implying that x2 = 1. Next, substitution of x2 = 1
into equation (1) (or alternatively, into equation (4)) shows that x1 = 3/2. This verifies
that (x1, x2) = (3/2, 1) is indeed the precise location of point D. You should now verify the
coordinates of points A, B, C, and E yourself.
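The same elimination can be delegated to a linear-equation solver; a sketch assuming
NumPy is available:

    # Sketch: solve the defining equations of point D, namely lines (1) and (4).
    import numpy as np
    A = np.array([[2.0, 3.0],     # 2*x1 + 3*x2 = 6   (1)
                  [2.0, 1.0]])    # 2*x1 +   x2 = 4   (4)
    b = np.array([6.0, 4.0])
    print(np.linalg.solve(A, b))  # [1.5, 1.0], i.e. point D = (3/2, 1)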
It is important to realize that this analysis still depends on the availability of the graph.
Specifically, we need to know the identity of the two defining equations for each corner
point (i.e., we need to know that equations (1) and (4) together define point D, that
equations (1) and (2) together define point C, that equation (2) and x1 = 0 together define
point B, and so on). And we need to know that only these five (Out of how many? See
below.) particular combinations of two defining equations give rise to corner points of the
feasible region. Therefore, a fundamental question is: Can we free ourselves from the
reliance on the graph? A little reflection now leads us to the following procedure:
Procedure Corner Points
Step 1: From the given set of six equations (including x1 = 0 and x2 = 0), choose an
arbitrary combination of two equations. Solve this chosen set of two equations to
obtain the coordinates of their intersection. (A unique solution may not exist in
general; see discussion below.)
Step 2: If the solution in Step 1 satisfies all other constraints, then accept it as a corner-
point solution; otherwise, discard the combination.
Step 3: Repeatedly loop through Steps 1 and 2 until all possible combinations are ex-
hausted.
Despite its brute-force flavor, it turns out that this procedure will indeed allow us to generate
the coordinates of all corner-point solutions.
To convince ourselves about the validity of this procedure, consider, for example, the choice
of equations (1) and (3):
    2x1 + 3x2 = 6    (1)
          2x2 = 5 .  (3)

From equation (3), we have x2 = 5/2, which implies that x1 = (6 - 3x2)/2 = -3/4. Since
the resulting solution (x1, x2) = (-3/4, 5/2) has a negative value for x1, it is not feasible.
Therefore, the combination of equations (1) and (3) does not lead to a corner-point solution.
Now, with six equations, it is easily seen that the total number of subsets of two equations
is equal to

    (6 choose 2) = 6!/(2! 4!) = 15 .

After cycling through all 15 of these combinations and discarding those that do not yield a
feasible solution, it can be shown that only five combinations remain; moreover, these five
combinations correspond precisely to the pairs of defining equations for points A, B, C, D,
and E. This establishes (empirically, at least) the validity of the above procedure.
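Procedure Corner Points is also straightforward to script. The sketch below (ours, assuming
NumPy is available, and using the constraint data of the example above) enumerates all 15
pairs of defining equations, skips pairs with no unique intersection, discards infeasible
intersections, and prints the surviving corner points.

    # Sketch of Procedure Corner Points for the example above.
    from itertools import combinations
    import numpy as np

    # Each boundary equation a1*x1 + a2*x2 = b: constraints (1)-(4), then x1 = 0 and x2 = 0.
    equations = [((2, 3), 6), ((-3, 2), 3), ((0, 2), 5), ((2, 1), 4),
                 ((1, 0), 0), ((0, 1), 0)]
    A_ub = np.array([[2, 3], [-3, 2], [0, 2], [2, 1]], dtype=float)   # functional constraints
    b_ub = np.array([6, 3, 5, 4], dtype=float)

    corners = []
    for (ai, bi), (aj, bj) in combinations(equations, 2):
        A = np.array([ai, aj], dtype=float)
        if abs(np.linalg.det(A)) < 1e-9:
            continue                    # parallel or coincident lines: no unique intersection
        x = np.linalg.solve(A, [bi, bj])
        if np.all(A_ub @ x <= b_ub + 1e-9) and np.all(x >= -1e-9):
            corners.append(tuple(np.round(x, 6)))   # feasible: a corner point

    print(len(corners), sorted(set(corners)))        # five corner points, A through E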
Procedure Search and Procedure Corner Points form the backbone of the Simplex method,
which is an algebraic procedure for solving linear programs with any (finite) number of
variables and constraints. We will return to a full development of this method in a later
section.
Discussion
In general, the feasible region of a linear program may be empty. Procedure Search is
meaningful only if the feasible set is not empty. Discovery of an empty feasible set will be
discussed later.
Consider the linear program: Maximize x1 + x2, subject to x1, x2 ≥ 0. Since the values of
x1 and x2 are allowed to be arbitrarily large, this problem does not have an optimal solution.
Note that the feasible region has exactly one corner point, at (0, 0), and that this corner
point is not optimal. This clearly contradicts Procedure Search. A linear program of this
type is said to be unbounded; we will refine the statement of Procedure Search later to deal
with such examples.
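For what it is worth, an LP solver reports this situation explicitly. A sketch (again assuming
SciPy is available) applied to the unbounded program above:

    # Sketch: Maximize x1 + x2 subject only to x1, x2 >= 0 has no optimal solution.
    from scipy.optimize import linprog
    res = linprog([-1, -1], bounds=[(0, None), (0, None)])
    print(res.status, res.message)     # a status of 3 signals an unbounded problem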
In general, a given pair of straight lines on the plane may not have a unique intersection
point. This could occur if the two lines are parallel, in which case there is no intersection,
or if the two lines coincide, in which case every point on the line is an intersection point.
Algebraically, this means that a given pair of equations may not have a unique solution.
For example, the equation pair

     x1 + 2x2 = 3
     x1 + 2x2 = 6

has no solution; and the equation pair

     x1 + 2x2 = 3
    2x1 + 4x2 = 6

has an infinite number of solutions. This issue is relevant in Step 1 of Procedure Corner
Points, whose statement will also be refined later.
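In Step 1 of Procedure Corner Points, both situations can be detected before solving, for
instance by checking the determinant of the 2-by-2 coefficient matrix; a sketch (ours,
assuming NumPy):

    # Sketch: detect pairs of equations with no unique intersection point.
    import numpy as np
    pairs = [(np.array([[1., 2.], [1., 2.]]), np.array([3., 6.])),    # parallel: no solution
             (np.array([[1., 2.], [2., 4.]]), np.array([3., 6.]))]    # coincident: infinitely many
    for A, b in pairs:
        if abs(np.linalg.det(A)) < 1e-12:
            print("no unique intersection; skip this pair")
        else:
            print(np.linalg.solve(A, b))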