Assignment 2 Solutions
Assignment 2 – Chapter 2
Instructions and policy: Undergraduate students should hand in the five exercises that are marked
with [U]. Graduate students should hand in the five exercises that are marked with [G]. All other exercises
are optional for keen students and should not be handed in. The assignment should be submitted
electronically in Canvas. Late submission policy: 20% of total points will be deducted per hour. Each student is
encouraged to solve all the exercises in the assignment to practice for the exams.
Students are strongly encouraged to work in groups of two on homework assignments. To find a partner
you can post on the “Discussions” section in Canvas. Only one file should be submitted for both group
members. In order to submit the assignment for your group please follow these steps in Canvas: Step 1. Click
on the “People” tab, then on “Groups”, and join one of the available groups named “Assignments Group 1”,
“Assignments Group 2”, . . . ; Step 2. Once your partner has also joined the same group, one of the two can
submit the assignment by clicking on the “Assignments” tab, then on the assignment to be submitted, and
finally on “Submit assignment”. The submission will count for everyone in your group.
Groups must work independently of each other, may not share answers with each other, and solutions
must not be copied from the internet or other sources. If improper collaboration is detected, all groups
involved will automatically receive a 0. Students must properly give credit to any outside resources they use
(such as books, papers, etc.). In doing these exercises, you must justify all of your answers and cite every
result that you use. You are not allowed to share any content of this assignment.
Solution: For x1 = (1/2, 1, 1/2, 0, 0) the only possible candidate to be a basis is {1, 2, 3}, since
there are exactly m = 3 positive variables. We check if A1 , A2 and A3 are linearly independent:
\[
\det\begin{pmatrix} -1 & 1 & 1 \\ 3 & 2 & 7 \\ -2 & 4 & 6 \end{pmatrix} = 0.
\]
We conclude that A1 , A2 and A3 are not linearly independent, thus x1 is not a basic solution.
The vector x2 = (1, 2, 0, 0, 0) has 2 < m = 3 positive components. To construct a basis for
x2 , we select A1 and A2 , which are associated with the positive components of x2 . One can easily
check that these two columns are linearly independent. However, to form a basis, we still need
one more column of A. We already know that A1 , A2 and A3 are linearly dependent. We then
check if A1 , A2 and A4 are linearly independent:
\[
\det\begin{pmatrix} -1 & 1 & 2 \\ 3 & 2 & -1 \\ -2 & 4 & 2 \end{pmatrix} = 20.
\]
Thus A1 , A2 and A4 are linearly independent. Moreover, x2 satisfies the equality constraints
and its third and fifth components are zero. Thus x2 is the unique solution of the system
B(x1 , x2 , x4 ) = b, where B is the above matrix and b = (1, 7, 6). This proves that x2 is a basic
solution and that {1, 2, 4} is a basis associated to it. Also, x2 is a degenerate basic solution,
since it has only two positive components while m = 3. The other basis associated to x2 is
{1, 2, 5}. In fact the third and fourth components of x2 are zero and:
\[
\det\begin{pmatrix} -1 & 1 & -2 \\ 3 & 2 & 4 \\ -2 & 4 & -1 \end{pmatrix} = -19.
\]
Again, since x2 satisfies the equality constraints and its third and fourth components are zero,
it must be that x2 is the unique solution of the equality system B(x1 , x2 , x5 ) = b, where B is
the above matrix and b = (1, 7, 6).
The vector x3 = (1, 0, 0, 1, 0) does not satisfy the second equality constraint, thus it is not a
basic solution.
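These determinant and feasibility checks are easy to reproduce numerically. The following is a minimal verification sketch, assuming the matrix A whose columns A1, . . . , A5 appear in the determinants above and the right-hand side b = (1, 7, 6) used throughout this solution.

import numpy as np

# Columns A1, ..., A5 as they appear in the determinants above, and b = (1, 7, 6).
A = np.array([[-1, 1, 1,  2, -2],
              [ 3, 2, 7, -1,  4],
              [-2, 4, 6,  2, -1]], dtype=float)
b = np.array([1.0, 7.0, 6.0])

x1 = np.array([0.5, 1.0, 0.5, 0.0, 0.0])
x2 = np.array([1.0, 2.0, 0.0, 0.0, 0.0])
x3 = np.array([1.0, 0.0, 0.0, 1.0, 0.0])

# x1: the only candidate basis {1, 2, 3} is singular, so x1 is not a basic solution.
print(np.linalg.det(A[:, [0, 1, 2]]))   # approximately 0
# x2: both {1, 2, 4} and {1, 2, 5} are nonsingular bases, and A x2 = b.
print(np.linalg.det(A[:, [0, 1, 3]]))   # 20
print(np.linalg.det(A[:, [0, 1, 4]]))   # -19
print(np.allclose(A @ x2, b))           # True
# x3 does not satisfy the equality constraints, hence it is not a basic solution.
print(np.allclose(A @ x3, b))           # False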
(b) (5 points) Say if the vector is a vertex and, if so, determine an objective function that is uniquely
minimized at the vertex.
Solution: Consider the objective function c = (0, 7, 15, 3, 2) = (0, 7, 14, 3, 1) + (0, 0, 1, 0, 1), where
(0, 7, 14, 3, 1) is the sum of the rows of A, so that (0, 7, 14, 3, 1)′x = 1 + 7 + 6 = 14 for every feasible x.
One can check that c is uniquely minimized at x2 , which shows that x2 is a vertex. Then, for every
feasible x ≠ x2 ,
0 > c′x2 − c′x
= (0, 7, 14, 3, 1)′x2 − (0, 7, 14, 3, 1)′x + (0, 0, 1, 0, 1)′x2 − (0, 0, 1, 0, 1)′x
= 14 − 14 + (0, 0, 1, 0, 1)′x2 − (0, 0, 1, 0, 1)′x.
This implies that c̄ = (0, 0, 1, 0, 1) is also uniquely minimized at x2 . Another objective function
that is uniquely minimized at x2 is c̃ = (0, 0, 1, 1, 0).
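Such an objective can also be tested numerically by solving the standard form LP directly. The sketch below again assumes the data A and b from part (a); it checks that minimizing c̄ = (0, 0, 1, 0, 1) over the polyhedron returns x2 with optimal value 0 (uniqueness of the minimizer is still argued on paper, as above).

import numpy as np
from scipy.optimize import linprog

A = np.array([[-1, 1, 1,  2, -2],
              [ 3, 2, 7, -1,  4],
              [-2, 4, 6,  2, -1]], dtype=float)
b = np.array([1.0, 7.0, 6.0])
c_bar = np.array([0.0, 0.0, 1.0, 0.0, 1.0])

res = linprog(c_bar, A_eq=A, b_eq=b, bounds=[(0, None)] * 5)
print(res.x)    # expected to be (close to) x2 = (1, 2, 0, 0, 0)
print(res.fun)  # expected optimal value 0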
Solution: Let L = {λu + (1 − λ)v : 0 ≤ λ ≤ 1} and F = {z ∈ P : a′i z = bi , i = 1, . . . , n − 1}. Note
that u, v ∈ F . We want to show that L = F .
First we show that L ⊆ F . To prove this, we show that each element of L is also in F . Let z ∈ L.
Then z = λu + (1 − λ)v for some λ ∈ [0, 1]. Since P is convex and u, v ∈ P , we have z ∈ P . Moreover,
for all i = 1, . . . , n − 1,
a′i z = λ a′i u + (1 − λ) a′i v = λbi + (1 − λ)bi = bi ,
so z ∈ F .
Now we show that F ⊆ L. Let G = {z ∈ Rn : a′i z = bi , i = 1, . . . , n − 1}. Since the vectors
a1 , . . . , an−1 are linearly independent, G is one-dimensional, and it contains u and v. Thus G is the
line passing through u and v, i.e. the set of points that are affine combinations of u and v:
G = {λu + (1 − λ)v : λ ∈ R}.
Clearly F ⊆ G and L ⊆ G. In particular L is the set of points that are convex combinations of u
and v, and u and v are extreme points of L. We prove that for any point z ∈ G, if z ∈ P then
z ∈ L; this shows that if z ∈ G ∩ P = F , then z ∈ L, and therefore F ⊆ L. Let z ∈ G. We argue
by contradiction: suppose that z ∉ L and z ∈ P . Since z ∈ G, we have z = λu + (1 − λ)v for some
λ ∈ R, and since z ∉ L either (i) λ > 1, or (ii) λ < 0. In case (i) u is a convex combination of z
and v, contradicting the fact that u is an extreme point of P . In case (ii) v is a convex combination
of z and u, contradicting the fact that v is an extreme point of P .
Exercise 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0 points
A mapping f is called affine if it is of the form f (x) = Ax + b, where A is a matrix and b is a vector.
Let P and Q be polyhedra in Rn and Rm , respectively. We say that P and Q are isomorphic if there
exist affine mappings f : P → Q and g : Q → P such that g(f (x)) = x for all x ∈ P , and f (g(y)) = y
for all y ∈ Q. (Intuitively, isomorphic polyhedra have the same shape.)
(a) If P and Q are isomorphic, show that there exists a one-to-one correspondence between their extreme
points. In particular, if f and g are as above, show that x is an extreme point of P if and only if
f (x) is an extreme point of Q.
Solution: Consider the affine function f (x) = Ax + b, that maps a vector x ∈ P to a vector
f (x) ∈ Q. We show that if x∗ is an extreme point of P , then f (x∗ ) is an extreme point of Q.
By contradiction, suppose that f (x∗ ) is not an extreme point of Q. Then there are y 1 , y 2 ∈ Q
distinct from f (x∗ ) and some λ ∈ (0, 1) such that
f (x∗ ) = λy 1 + (1 − λ)y 2 . (1)
Now, consider the affine function g(y) = Dy + c, that maps a vector y ∈ Q to a vector g(y) ∈ P .
We also have g(f (x∗ )) = x∗ . From (1) we have
x∗ = g(f (x∗ )) = D(λy 1 + (1 − λ)y 2 ) + c = λ(Dy 1 + c) + (1 − λ)(Dy 2 + c) = λ g(y 1 ) + (1 − λ) g(y 2 ).
Note that g(y 1 ) and g(y 2 ) belong to P . Moreover, they are both distinct from x∗ , since otherwise
f (x∗ ) = f (g(y 1 )) = y 1 and analogously f (x∗ ) = f (g(y 2 )) = y 2 , while we know that y 1 and y 2
are distinct from f (x∗ ). The above chain of equalities shows that x∗ is not an extreme point
of P , a contradiction.
Analogously, we can prove that if y ∗ is an extreme point of Q, then g(y ∗ ) is an extreme point
of P .
Solution: In order to finish the exercise, we need to find two affine functions f, g such that
g(f (x)) = x for all x ∈ P , and f (g(y)) = y for all y ∈ Q. Define f : P → Q and g : Q → P like
this:
\[
f(x) = \begin{pmatrix} I_{n\times n} \\ A \end{pmatrix} x + \begin{pmatrix} 0_{n\times 1} \\ -b \end{pmatrix}
     = \begin{pmatrix} x \\ Ax - b \end{pmatrix},
\qquad
g(x, z) = \begin{pmatrix} I_{n\times n} & 0_{n\times k} \end{pmatrix} \begin{pmatrix} x \\ z \end{pmatrix} = x,
\]
where In×n is the identity matrix with n rows and columns, 0n×1 is the zero vector with n
components and 0n×k is the zero matrix with n rows and k columns.
Clearly f and g are affine functions. Moreover, for every x ∈ P we have g(f (x)) = g(x, Ax − b) = x,
and for every (x, z) ∈ Q we have f (g(x, z)) = f (x) = (x, Ax − b) = (x, z), since z = Ax − b at every
point of Q. Thus P and Q are isomorphic.
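The round trip g(f (x)) = x is easy to check numerically. The sketch below uses hypothetical data A and b chosen only for illustration, together with the assumed descriptions P = {x : Ax ≥ b} and Q = {(x, z) : Ax − z = b, z ≥ 0} that the construction above is built for (they are not spelled out in this excerpt).

import numpy as np

# Hypothetical data for illustration: P = {x : A x >= b}, Q = {(x, z) : A x - z = b, z >= 0}.
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, -1.0]])   # k = 3 rows, n = 2 columns
b = np.array([1.0, 0.0, -2.0])

def f(x):
    # f(x) = (x, Ax - b): an affine map from P into Q
    return np.concatenate([x, A @ x - b])

def g(xz):
    # g(x, z) = x: the projection back onto the x-coordinates
    return xz[:A.shape[1]]

x = np.array([1.0, 1.0])          # a point of P, since A x - b = (2, 1, 2) >= 0
print(np.allclose(g(f(x)), x))    # True: g(f(x)) = x
print(A @ x - b)                  # the slack z attached by f is nonnegative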
Show that if y ∈ C, then there exist coefficients λ1 , . . . , λn ≥ 0 such that (i) at most m of the
coefficients are nonzero, and (ii) y = λ1 A1 + · · · + λn An .
Hint: Consider the polyhedron
\[
Q = \Bigl\{ (\lambda_1, \ldots, \lambda_n) \in \mathbb{R}^n : \sum_{i=1}^{n} \lambda_i A_i = y,\; \lambda_1, \ldots, \lambda_n \ge 0 \Bigr\}.
\]
Solution: Consider the polyhedron
\[
Q' = \Bigl\{ (\lambda_1, \ldots, \lambda_n) \in \mathbb{R}^n : \sum_{i=1}^{n} \lambda_i A_i = y,\; \sum_{i=1}^{n} \lambda_i = 1,\; \lambda_1, \ldots, \lambda_n \ge 0 \Bigr\}.
\]
This is a polyhedron in standard form with m + 1 equality constraints. Moreover, Q′ is nonempty
since y ∈ P . Therefore, Q′ has at least one extreme point, that is, a basic feasible solution.
Since at a basic feasible solution there must be n linearly independent active constraints, at least
n − (m + 1) of the coefficients λi are zero, and therefore at most m + 1 of them are nonzero.
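The argument is constructive: any LP solver that returns a basic feasible solution of Q′ produces such a sparse representation. The sketch below is a small illustration on hypothetical data (five points in R2, so m = 2, and a point y in their convex hull); the expectation that the HiGHS dual simplex method stops at a vertex is an assumption of the sketch, not part of the solution above.

import numpy as np
from scipy.optimize import linprog

# Hypothetical data: the columns of A_cols are A1, ..., A5 in R^2 (m = 2),
# and y lies in their convex hull.
A_cols = np.array([[0.0, 4.0, 2.0, 1.0, 3.0],
                   [0.0, 0.0, 3.0, 1.0, 2.0]])
y = np.array([2.0, 1.0])

n = A_cols.shape[1]
A_eq = np.vstack([A_cols, np.ones((1, n))])   # sum_i lambda_i A_i = y and sum_i lambda_i = 1
b_eq = np.concatenate([y, [1.0]])

# Any objective will do; a simplex-type method is expected to stop at a vertex of Q'.
res = linprog(np.arange(1, n + 1), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n, method="highs-ds")
lam = res.x
print(lam)
print(np.count_nonzero(lam > 1e-9))   # at most m + 1 = 3 nonzero coefficients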
Exercise 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0 points
Consider the standard form polyhedron P = {x ∈ Rn : Ax = b, x ≥ 0}, and assume that the rows of
the matrix A are linearly independent.
(a) Suppose that two different bases lead to the same basic solution. Show that the basic solution is
degenerate.
Solution: Suppose that two different bases B and B̃ lead to the same basic solution x. The
two bases are different, thus they differ in at least one column. Let B(i) be a basic variable of
B that is nonbasic in B̃. Then xB(i) = 0. Moreover all variables that are nonbasic in B are
0-valued in x. This shows that x is degenerate, since it has at least n − m + 1 zero components,
and hence fewer than m positive components.
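This is exactly what happens to the vector x2 in Exercise 1: two distinct bases, {1, 2, 4} and {1, 2, 5}, produce the same degenerate basic solution. The sketch below, reusing the data assumed in Exercise 1 above, enumerates all bases and confirms this.

import numpy as np
from itertools import combinations

# Data as in Exercise 1 above (columns A1, ..., A5 and b).
A = np.array([[-1, 1, 1,  2, -2],
              [ 3, 2, 7, -1,  4],
              [-2, 4, 6,  2, -1]], dtype=float)
b = np.array([1.0, 7.0, 6.0])
m, n = A.shape

# For every nonsingular choice of m columns (0-based indices), compute the basic solution
# and report the bases whose basic solution is x2 = (1, 2, 0, 0, 0).
for basis in combinations(range(n), m):
    B = A[:, basis]
    if abs(np.linalg.det(B)) < 1e-9:
        continue
    x = np.zeros(n)
    x[list(basis)] = np.linalg.solve(B, b)
    if np.allclose(x, [1, 2, 0, 0, 0], atol=1e-9):
        print(basis)   # expected: (0, 1, 3) and (0, 1, 4), i.e. bases {1, 2, 4} and {1, 2, 5}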
(b) Consider a degenerate basic solution. Is it true that it corresponds to two or more distinct bases?
Prove it or give a counterexample.
Solution: This is not necessarily true. Consider the standard form polyhedron P = {(x, y) ∈
R2 : x + y = 0, x − y = 0, x, y ≥ 0}. Note that the two equality constraints are linearly
independent. Moreover, (0, 0) is the only basic solution, and it is degenerate. The only possible
basis associated to (0, 0) includes variables x and y.
(c) Suppose that a basic solution is degenerate. Is it true that there exists a distinct adjacent basic
solution which is degenerate? Prove it or give a counterexample.
Solution: The counterexample in (b) shows that the claim is false: there, (0, 0) is a degenerate
basic solution and it is the only basic solution, so no distinct adjacent basic solution exists. More
generally, if b = 0, then for every basis B (which exists since the rows of A are linearly independent)
the basic variables satisfy xB = B −1 b = 0 and the n − m nonbasic variables are 0-valued, so the
only (degenerate) basic solution is the zero vector.
Exercise 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0 points
Consider the standard form polyhedron P = {x ∈ Rn | Ax = b, x ≥ 0}. Suppose that the matrix A, of
dimension m × n, has linearly independent rows, and that all basic feasible solutions are nondegenerate.
Let x be a vector in P that has exactly m positive components.
(a) Show that x is a basic feasible solution.
Hint: Look at the proof of Theorem 2.6 (b) → (a) in the book. The idea of the proof has been
explained in class.
Solution: Suppose wlog xm+1 = · · · = xn = 0. We have to show that the m equality constraints
and the n − m active nonnegativity constraints are linearly independent. By contradiction, suppose
that only n − (m − k) of these constraints are linearly independent, with k < m. Then there is a
nonzero vector d ∈ Rn such that Ad = 0 and dm+1 = · · · = dn = 0. This implies that x̃ = x + εd
satisfies Ax̃ = b and x̃m+1 = · · · = x̃n = 0 for any ε ∈ R, i.e. all the constraints that were active
at x are still active at x̃.
We choose ε such that x̃ ≥ 0. In particular we increase ε until one of the positive variables,
wlog x̃m , becomes 0. If x̃ is feasible for all ε > 0, consider −d instead of d. Thus we must have
that x̃m = 0 for some finite ε > 0, since P is in standard form and therefore does not contain a line.
We have found a feasible solution x̃ with at most m − 1 positive entries. There are n + 1 active
constraints at x̃, and we show that n − (m − k) + 1 among them are linearly independent. In
fact, since xm > 0 and x̃m = 0 we have that dm < 0, thus e′m d < 0. On the other hand, if em
were a linear combination of the rows of A and of em+1 , . . . , en , we would have e′m d = 0, since
Ad = 0 and e′i d = 0 for i = m + 1, . . . , n.
If m−k > 1, we iterate this procedure, until we find a feasible solution x̄ with at most k positive
entries. In x̄ there are n + m − k active constraints, n of which are linearly independent. The n
linearly independent active constraints define an equality system associated to a basic feasible
solution. This basic feasible solution is the unique solution of the equality system, thus it must
coincide with x̄. Since x̄ has k < m positive entries, we have contradicted the assumption that
all basic feasible solutions are nondegenerate. Thus x was a basic feasible solution.
Alternative solution: let B be the matrix formed by the columns of A associated with the m
positive components of x, and let xB be the corresponding subvector, so that
BxB = b, (6)
xB > 0. (7)
Suppose rk(B) = k < m. Then we can find a nonzero vector dB ∈ Rm such that BdB = 0.
Let x̃B = xB + εdB , ε > 0. We have B x̃B = b, thus the equality constraints (6) remain active
(and satisfied) for any ε. We want to move along this halfline until one of the positive variables
in (7) becomes zero. If the whole halfline is contained in P , i.e. dB ≥ 0, then we consider the
direction −dB (the polyhedron is in standard form, thus it does not contain a line). Formally,
we choose ε∗ = min{ −xB(i) /dB(i) : i = 1, . . . , m, dB(i) < 0 }. Note that 0 < ε∗ < ∞.
The resulting vector x̃ ≥ 0 has at most m − 1 positive variables. Let j ∈ {B(1), . . . , B(m)} be such that
x̃j = 0. Wlog we assume j = B(m) and we set B̃ = {B(1), . . . , B(m − 1)}. If k < m − 1, we can
repeat the reasoning: find a non-zero vector dB̃ ∈ Rm−1 such that B̃dB̃ = 0 and find another
vector x̄ ≥ 0 with at most m − 2 positive variables.
In the end, we have a feasible solution with exactly k positive entries, wlog {B(1), . . . , B(k)}.
Moreover, since A has full row-rank, we can find m−k columns in N , wlog {N (1), . . . , N (m−k)}
that, together with {B(1), . . . , B(k)}, span Rm . Let these columns form the current basis
matrix B (k) , which is nonsingular. The corresponding basic feasible solution has xi = 0 for
i ∉ {B(1), . . . , B(k)} ∪ {N (1), . . . , N (m − k)} and it is the unique solution of the equality
system B (k) xB (k) = b.
Since the last vector that we obtained in our procedure is feasible, satisfies all equality con-
straints, and has zero-valued entries for indices not in {B(1), . . . , B(k)}, it follows that this
vector coincides with the basic feasible solution associated to B (k) , and it has exactly k < m
positive entries. This contradicts the assumption that all basic feasible solutions are nondegen-
erate, and shows that B has to be nonsingular.
(b) Show that the result of part (a) is false if the nondegeneracy assumption is removed.
Solution: Consider for example the polyhedron
P = {(x1 , x2 , x3 ) | x1 + x2 = 1, x1 + x2 + x3 = 1, x1 , x2 , x3 ≥ 0},
with n = 3 and m = 2, whose (degenerate) vertices are (0, 1, 0) and (1, 0, 0). The vector
(1/2, 1/2, 0) has exactly m positive entries and is feasible; however, it is not a vertex.
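A quick numerical confirmation of this counterexample, using only the data just given: (1/2, 1/2, 0) is feasible with exactly m = 2 positive entries, yet it is the midpoint of the two vertices (1, 0, 0) and (0, 1, 0), so it cannot be an extreme point.

import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])

x  = np.array([0.5, 0.5, 0.0])
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])

print(np.allclose(A @ x, b), bool(np.all(x >= 0)))   # feasible
print(np.count_nonzero(x))                           # exactly m = 2 positive entries
print(np.allclose(x, 0.5 * v1 + 0.5 * v2))           # midpoint of two distinct feasible points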
Exercise 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0 points
Let P be a bounded polyhedron in Rn , let a be a vector in Rn , and let b be some scalar. We define
Q = {x ∈ P | a′x = b}.
Show that every extreme point of Q is either an extreme point of P or a convex combination of two
adjacent extreme points of P . Hint: Look at the proof of Theorem 2.6 (b) → (a) in the book. The idea
of this proof has been explained in class.
Then, we have that y and z are vertices of P and x̂ can be expressed as a convex combination:
\[
\hat{x} = \frac{\alpha}{\alpha + \beta}\, z + \frac{\beta}{\alpha + \beta}\, y.
\]
Solution: “Equivalent” means that we can convert a linear program (P) into a linear program in
standard form (P′), solve (P′), and retrieve an optimal solution of (P) (if one exists). A vertex y of
(P′) is the unique minimizer of some objective function c. However, when we map this vertex back
to (P) we are not guaranteed to obtain a vertex. In particular, if the feasible set of (P) contains a
line, then (P) has no vertex at all.
For example consider the linear program (P)
min x + y
s.t. x + y ≥ 0,
and its standard form version (P′)
min x+ − x− + y+ − y−
s.t. x+ − x− + y+ − y− − s = 0,
x+ , x− , y+ , y− , s ≥ 0.
The only vertex is x+ = x− = y+ = y− = s = 0. This vertex is optimal in (P′). However, any point
in the halfline (0, 0, 0, 0, 0) + λ(1, 1, 1, 1, 0), λ ≥ 0, is also optimal in (P′), but not a vertex. All the
points in this halfline would be mapped to (0, 0) in (P), which is an optimal solution of (P). On the
other hand, there are optimal solutions of (P) that are mapped to optimal solutions of (P′) that are
not vertices. For example (1, −1) is mapped to (1, 0, 0, 1, 0), which is not a vertex.
Finally, remark that none of the objective functions that are uniquely minimized at (0, 0, 0, 0, 0), namely:
1. x+ + x− + y + + y − ,
2. x+ + x− + y + + s,
3. x+ + x− + y − + s,
4. x+ + y + + y − + s,
5. x− + y + + y − + s,
6. all the conic combinations of the above (linear combinations with nonnegative coefficients),
corresponds to an objective function of the original problem (P): an objective c1 x + c2 y of (P) translates
into an objective of the form (c1 , −c1 , c2 , −c2 , 0) for (P′), and none of the functions above has this form.
Indeed, since the feasible set of (P) contains a line, no objective function of (P) can be uniquely minimized
at a single point.
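As a numerical side check of this example (a minimal sketch, using only the standard form data of (P′) written above): the original objective of (P′) is minimized, with value 0, also at non-vertex points, while the first objective in the list above is minimized at the origin.

import numpy as np
from scipy.optimize import linprog

# Standard form data of (P'): one equality constraint
# x+ - x- + y+ - y- - s = 0, all five variables nonnegative.
A_eq = np.array([[1.0, -1.0, 1.0, -1.0, -1.0]])
b_eq = np.array([0.0])
bounds = [(0, None)] * 5

# Objective of (P'): x+ - x- + y+ - y- (optimal value 0, attained on a whole halfline).
res1 = linprog([1, -1, 1, -1, 0], A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res1.fun)            # 0

# First objective in the list above: x+ + x- + y+ + y-, uniquely minimized at the origin.
res2 = linprog([1, 1, 1, 1, 0], A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res2.x, res2.fun)    # (0, 0, 0, 0, 0), 0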
which is equivalent to the system:
Rearranging, we have:
\[
\begin{aligned}
x_2 &\le \tfrac{1}{35}\,(70 - 22x_1), \\
x_2 &\le \tfrac{1}{4}\,(9 - 3x_1), \\
x_2 &\le \tfrac{1}{62}\,(102 - 27x_1), \\
x_2 &\le -\tfrac{5}{4}, \\
x_2 &\le -x_1 - 1, \\
x_2 &\le \tfrac{1}{17}\,(6x_1 - 9).
\end{aligned}
\]
Since there is no lower bound for x2 , we see that Π1 (P ) = R by noticing that for any x1 ∈ R there
exists an x2 such that (x1 , x2 ) ∈ Π2 (P ); hence P is nonempty.
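This last step can also be checked numerically. The sketch below is a minimal verification using only the six upper bounds listed above: since x2 has no lower bound, setting x2 to the minimum of the right-hand sides produces a feasible pair (x1, x2) for every x1.

import numpy as np

# The six upper bounds on x2 obtained above, each written as x2 <= f_i(x1).
upper_bounds = [
    lambda x1: (70 - 22 * x1) / 35,
    lambda x1: (9 - 3 * x1) / 4,
    lambda x1: (102 - 27 * x1) / 62,
    lambda x1: -5.0 / 4.0,
    lambda x1: -x1 - 1,
    lambda x1: (6 * x1 - 9) / 17,
]

# Since x2 has no lower bound, x2 = min_i f_i(x1) satisfies all the constraints.
for x1 in np.linspace(-10.0, 10.0, 5):
    x2 = min(f(x1) for f in upper_bounds)
    assert all(x2 <= f(x1) + 1e-12 for f in upper_bounds)
    print(x1, x2)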