Solving LPs graphically and by brute-force using Python


To better understand the properties of linear programs (LPs), it can be helpful to look at some naive solution methods. In this post, I will solve the following LP graphically using Matplotlib and then by brute force using NumPy.

$$
\begin{aligned}
\text{maximize} \quad & x_1 + x_2 \\
\text{subject to} \quad & 2x_1 + x_2 \le 10 \\
& x_1 + x_2 \le 6 \\
& -x_1 + x_2 \le 4 \\
& x_1, x_2 \ge 0
\end{aligned}
$$

Graphically

Let us first rewrite the constraints so that $x_2$ is bounded by a function of $x_1$:

$$x_2 \le 10 - 2x_1, \qquad x_2 \le 6 - x_1, \qquad x_2 \le 4 + x_1$$

We now have three functions we can plot.

import numpy as np
import matplotlib.pyplot as plt

# domain for x_1
d = np.linspace(0, 10)

# plot the three constraint boundaries
plt.plot(d, 10 - 2 * d, d, 6 - d, d, 4 + d)
plt.ylim([0, 10])
plt.xticks(np.arange(0, 10))
plt.yticks(np.arange(0, 10))
plt.legend(['f(x) = 10 - 2x', 'f(x) = 6 - x', 'f(x) = 4 + x'])
plt.show()

To find the feasible region we can look at the generated plot. The black lines were drawn manually using GIMP.

[Figure: feasible region]
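Alternatively, Matplotlib can shade the feasible region directly, so no manual tracing is needed; a small sketch using fill_between:

import numpy as np
import matplotlib.pyplot as plt

d = np.linspace(0, 10, 200)

# the feasible region lies below the tightest of the three constraints
upper = np.minimum(np.minimum(10 - 2 * d, 6 - d), 4 + d)

plt.plot(d, 10 - 2 * d, d, 6 - d, d, 4 + d)
plt.fill_between(d, 0, upper, where=upper >= 0, alpha=0.3)
plt.ylim([0, 10])
plt.show()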

The black region defines our convex polyhedron. The fundamental theorem of linear programming now tells us that the maximum is attained at a corner (extreme point) of this region. This is also intuitive, because the corners are the only points that cannot be written as a convex combination of other points of the region (they are convexly independent).

The extreme points are then

$$\begin{pmatrix}0\\0\end{pmatrix}, \begin{pmatrix}5\\0\end{pmatrix}, \begin{pmatrix}4\\2\end{pmatrix}, \begin{pmatrix}1\\5\end{pmatrix}, \begin{pmatrix}0\\4\end{pmatrix}.$$

If we plug these vectors into the objective function $f(x) = x_1 + x_2$, we will find that $f\big((0,0)^T\big) = 0$, $f\big((5,0)^T\big) = 5$, $f\big((4,2)^T\big) = 6$, $f\big((1,5)^T\big) = 6$ and $f\big((0,4)^T\big) = 4$.

The maximum of $f$ defined over our polyhedron is thus $6$. This value occurs at all points of $\operatorname{conv}\{(4,2)^T, (1,5)^T\}$, where $\operatorname{conv}$ denotes the convex hull of a set.
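Instead of reading the values off by hand, we can also let NumPy evaluate the objective at the five vertices; a quick check:

import numpy as np

# the five extreme points as rows
vertices = np.array([[0, 0], [5, 0], [4, 2], [1, 5], [0, 4]])
c = np.array([1, 1])               # objective f(x) = x_1 + x_2

values = vertices @ c
print(values)                      # [0 5 6 6 4]
print(values.max())                # 6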

Brute-force

The second way to solve an LP uses the algebraic definition of extreme points. This time the corner points are obtained by solving linear systems of equations. Let's rewrite the LP in matrix form:

$$\max_x \; c^T x \quad \text{s.t.} \quad Ax \le b, \; x \ge 0, \qquad A = \begin{pmatrix}2 & 1\\1 & 1\\-1 & 1\end{pmatrix}, \; b = \begin{pmatrix}10\\6\\4\end{pmatrix}, \; c = \begin{pmatrix}1\\1\end{pmatrix}$$

Since the first three constraints live in $\mathbb{R}^2$ and the nonnegativity conditions $x_1, x_2 \ge 0$ contribute two more, there are $\binom{5}{2} = 10$ possible extreme points / intersections. A simple algorithm would then be (a sketch follows the list):

  1. Choose $2$ constraints (rows) from the stacked constraint matrix $\begin{pmatrix} A \\ -I \end{pmatrix}$.
  2. Check that the chosen constraints are linearly independent (e.g. $x_1 + x_2 \le 6$ and $2x_1 + 2x_2 \le 8$ would be parallel and have no unique intersection point).
  3. Solve the resulting $2 \times 2$ system of equations for $x$.
  4. If $x$ satisfies all constraints, calculate $c^T x$, save the result and go back to 1.
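A quick sketch of this row-based variant could look as follows (the actual implementation further below works on the standard form instead):

from itertools import combinations
import numpy as np

# stack the three inequality rows with the nonnegativity constraints -x <= 0
A_full = np.array([[2, 1], [1, 1], [-1, 1], [-1, 0], [0, -1]], dtype=float)
b_full = np.array([10, 6, 4, 0, 0], dtype=float)
c = np.array([1, 1], dtype=float)

best = None
for rows in combinations(range(A_full.shape[0]), 2):
    M = A_full[list(rows), :]
    if np.linalg.matrix_rank(M) < 2:             # step 2: skip parallel constraints
        continue
    x = np.linalg.solve(M, b_full[list(rows)])   # step 3: intersection point
    if np.all(A_full @ x <= b_full + 1e-9):      # step 4: keep only feasible points
        if best is None or c @ x > c @ best:
            best = x

print(best, c @ best)   # -> [4. 2.] 6.0, one of the two maximizing vertices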

It is often easier to deal with LPs in standard form, because the matrix inequality $Ax \le b$ gets replaced by the equality $Ax = b$ with slack variables $x_3, x_4, x_5 \ge 0$. The matrix becomes

$$A = \begin{pmatrix}2 & 1 & 1 & 0 & 0\\1 & 1 & 0 & 1 & 0\\-1 & 1 & 0 & 0 & 1\end{pmatrix}.$$

Before, we had an overdetermined system (more equations than unknowns); now $Ax = b$ is underdetermined (more unknowns than equations).
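In code, this conversion is just an identity block appended to the original matrix, e.g.:

import numpy as np

A_orig = np.array([[2, 1], [1, 1], [-1, 1]])
A_std = np.hstack([A_orig, np.eye(3, dtype=int)])   # one slack column per constraint
print(A_std)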

The algorithm still works, but we now have to choose $3$ columns out of $5$ instead of $2$ rows out of $5$. There are $\binom{5}{3} = 10$ possibilities.

An implementation in Python looks like this:

from itertools import combinations
import numpy as np

A = np.array([[2, 1, 1, 0, 0], [1, 1, 0, 1, 0], [-1, 1, 0, 0, 1]])
b = np.array([[10], [6], [4]])
c = np.array([[1], [1], [0], [0], [0]])

xs = []
for col in combinations(range(0, A.shape[1]), A.shape[0]):
    B = A[:, col]                                  # candidate basis matrix
    x_B = np.linalg.lstsq(B, b, rcond=None)[0]     # solve B x_B = b via least squares
    # keep the point only if it is nonnegative and actually solves the system
    if np.all(x_B >= 0) and np.allclose(B @ x_B, b):
        x = np.zeros((A.shape[1], 1))
        x[col, :] = x_B                            # embed the basic solution into full x
        xs.append((x, c.T @ x))

# sort candidates by objective value, best first
xs.sort(key=lambda tup: tup[1].item(), reverse=True)

print("x^* = {}\nc^Tx^* = {}".format(xs[0][0], xs[0][1]))

First, we draw $3$ of the $5$ column indices. Then we solve $B x_B = b$ via least squares. This ensures that we always obtain some $x_B$, and we avoid an explicit check for linear independence. The only numerical instability could then come from np.allclose(B @ x_B, b).
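To see why this check is needed at all, consider a (made-up) singular column choice: least squares still returns a vector, but it does not actually solve the system, so the candidate is rejected:

import numpy as np

b = np.array([[10], [6], [4]])

# two identical columns -> singular matrix; b is not in its column space
B_sing = np.array([[1., 1., 0.],
                   [2., 2., 0.],
                   [3., 3., 1.]])

x_B = np.linalg.lstsq(B_sing, b, rcond=None)[0]   # always returns *some* x_B
print(np.allclose(B_sing @ x_B, b))               # False -> not a true solution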

In the end, we obtain $x^* = (1, 5, 3, 0, 0)^T$, i.e. the maximizing vertex $(1, 5)^T$ from before plus its slack values, with $c^T x^* = 6$.

Of course, neither of the methods presented here is usable for problems of practical size. To solve real-world LPs, there are dedicated libraries such as CVXOPT.
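For illustration, a minimal CVXOPT version of our LP could look like this (solvers.lp minimizes $c^T x$ subject to $Gx \le h$, so the objective is negated and the nonnegativity constraints are folded into $G$):

from cvxopt import matrix, solvers

# minimize -x1 - x2, i.e. maximize x1 + x2
c = matrix([-1.0, -1.0])

# G x <= h: the three constraints plus x1 >= 0 and x2 >= 0
# (CVXOPT matrices are specified column by column)
G = matrix([[2.0, 1.0, -1.0, -1.0, 0.0],
            [1.0, 1.0, 1.0, 0.0, -1.0]])
h = matrix([10.0, 6.0, 4.0, 0.0, 0.0])

sol = solvers.lp(c, G, h)
print(sol['x'])   # an optimal point of the original LP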
