Algebraic Method in Linear Programming
1. Introduction
We have so far solved only simple linear programs, with two
variables and three to five constraints. As the number of
constraints increases, the graphical method becomes increasingly
difficult to apply.
With three variables, we would need a graphical
representation in space, a very delicate exercise . . .
In practice, however, linear programs involve several dozen
variables and constraints. The use of a general method
therefore becomes essential.
The simplex algorithm is the most widely used method for solving
linear programs. It was developed in 1947
by G. B. Dantzig. Since then, this algorithm has been the subject of numerous
scientific articles and has been used to solve many
linear models.
2. Principle of the algorithm
When a linear program has more than 2 variables, graphical
resolution becomes difficult, but the properties remain the
same: if the objective function attains an optimal value over the
feasible region, that value is attained at one of the vertices (corners)
of the feasible region. To reach the optimal value, only the vertices
therefore need to be examined.
One might conclude that it suffices to compute the value of the
objective function at each vertex and then pick the largest
of these values; but this approach is impractical for
large programs: a program with 100 variables and 10 constraints,
which is a medium-sized, if not small, program, would already have more
than 1,730 billion vertices!
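As a rough check of this order of magnitude (the way the count is obtained here is an assumption, not taken from the text): a vertex of such a region is obtained by making n of the n + m constraints active (m inequalities plus n sign constraints), so there are at most C(n + m, n) = C(n + m, m) candidate vertices. A one-line computation in Python:

from math import comb

n, m = 100, 10            # the 'medium-sized' program quoted above
print(comb(n + m, m))     # 46_897_636_623_981, about 4.7e13 candidate vertices,
                          # indeed well over 1,730 billion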
An 'intelligent' procedure is therefore necessary: the simplex
algorithm uses an iterative procedure that finds the
optimal value or, failing that, shows that the feasible region
is empty.
The principle of the simplex algorithm is as follows:
Phase I: Initialization Procedure
Determine a first vertex of the feasible region D. If this
procedure fails, the feasible region D is empty;
Phase II: Iterative Procedure
Move from a vertex of D to a neighboring vertex of D so as to
improve the value of the objective function at each step. The new
vertex is adjacent to the previous one in the sense that both are
endpoints of an edge of D;
Stop Test: Stopping Procedure
Two stopping tests determine whether the computation should stop,
either because an optimal solution has been reached or because no
optimal solution exists, or whether we should move on to a new
vertex.
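As an illustration of this principle, here is a minimal sketch of the iterative phase in Python (with NumPy), under the simplifying assumption that all right-hand sides are non-negative, so that the origin is already a vertex and Phase I is immediate. It is a sketch of the idea, not the exact procedure developed in the worked example below.

import numpy as np

def simplex_max(c, A, b, max_iter=100):
    """Phase II sketch for:  maximize c.x  subject to  A x <= b, x >= 0,
    assuming b >= 0 so that x = 0 is a first feasible vertex."""
    A = np.asarray(A, float); b = np.asarray(b, float); c = np.asarray(c, float)
    m, n = A.shape
    T = np.hstack([A, np.eye(m), b.reshape(-1, 1)])   # tableau [A | I | b]
    z = np.concatenate([-c, np.zeros(m + 1)])         # reduced costs + current objective value
    basis = list(range(n, n + m))                     # the slack variables start in the basis
    for _ in range(max_iter):
        j = int(np.argmin(z[:-1]))                    # entering variable: most negative reduced cost
        if z[j] >= -1e-12:
            break                                     # stop test: no improving edge, vertex is optimal
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-12 else np.inf for i in range(m)]
        i = int(np.argmin(ratios))                    # leaving variable: minimum ratio test
        if ratios[i] == np.inf:
            raise ValueError("unbounded objective")   # stop test: no finite optimum exists
        T[i] /= T[i, j]                               # pivot: move to the adjacent vertex
        for k in range(m):
            if k != i:
                T[k] -= T[k, j] * T[i]
        z -= z[j] * T[i]
        basis[i] = j
    x = np.zeros(n + m)
    x[basis] = T[:, -1]
    return x[:n], z[-1]

The two choices made inside the loop (entering variable, then leaving variable) are exactly the ones detailed step by step in the worked example that follows.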
Complete Treatment: Example
Consider the following linear maximization program in canonical form,
PLC (Canonical Linear Program):
➔ Slack variables
The resolution method studied in this chapter requires
that the model's constraints be expressed as linear
equations (standard form) rather than inequalities (canonical
form).
One can easily transform a linear inequality with a ≤ sign
into a linear equation by adding a non-negative variable called a
slack variable, as follows:
Let the constraint be:
x ≤ 4, x ≥ 0.
Adding to the left-hand side of the inequality a slack variable e
satisfying e ≥ 0, we obtain x + e = 4, hence e = 4 - x
represents the gap between 4 and the quantity x actually used.
The slack variable introduced into the corresponding constraint
thus represents an unused margin, which may be positive or zero.
We add as many distinct slack variables as there are
constraints of the ≤ type.
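In general terms (a standard reformulation, independent of the particular example), for a maximization program with n variables and m constraints of the ≤ type:

maximize    Z = c1 x1 + ... + cn xn
subject to  ai1 x1 + ... + ain xn ≤ bi    (i = 1, ..., m),    xj ≥ 0

becomes, once a slack variable ei ≥ 0 has been added to each constraint:

maximize    Z = c1 x1 + ... + cn xn
subject to  ai1 x1 + ... + ain xn + ei = bi    (i = 1, ..., m),    xj ≥ 0,  ei ≥ 0.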
We then obtain the linear maximization program in standard form,
PLS (Standard Linear Program):
From this standard form, we read off the values of the basic variables.
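As a hedged illustration only (the constraint coefficients and right-hand sides 4, 12 and 18 used below are assumptions chosen to be consistent with the values quoted later in the example), setting the decision variables to zero makes each slack variable equal to its right-hand side, which gives the first vertex required by Phase I:

# Assumed data for illustration: Z = 300*x1 + 500*x2 with three <= constraints
x1 = x2 = 0                        # out-of-basis variables set to zero
x3 = 4 - x1                        # slack of (C1), assumed: x1 <= 4
x4 = 12 - 2*x2                     # slack of (C2), assumed: 2*x2 <= 12
x5 = 18 - 3*x1 - 2*x2              # slack of (C3), assumed: 3*x1 + 2*x2 <= 18
Z  = 300*x1 + 500*x2
print(x3, x4, x5, Z)               # 4 12 18 0 : first basic solution, Z = 0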
a. Choice of the entering variable
The question that then arises is: along which edge should we move
so that the value of the objective function increases?
Algebraically, this question is equivalent to:
which out-of-basis variable should be brought into the basis?
We have: Z = 300x1 + 500x2.
To obtain a better solution, it suffices to increase one of the
two variables x1 or x2 from zero to a positive value.
It is therefore preferable to choose x2, which is worth 500 per unit, while x1
is worth only 300.
One can thus hope to move 'faster' towards the optimal solution. The
value of x1, on the other hand, remains zero.
The selection criterion for the entering variable is therefore as follows:
we choose the variable with the largest positive coefficient in the
objective function.
Note that to apply this simple criterion (the choice of the
entering variable), the objective function Z must be expressed in
terms of the out-of-basis variables only.
Here, we choose the variable x2.
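A minimal sketch of this selection rule in Python (the function name is chosen here for illustration):

def entering_variable(objective_coeffs):
    """Return the index of the out-of-basis variable with the largest
    positive coefficient in Z, or None if no coefficient is positive
    (in which case the current vertex is already optimal)."""
    best, best_j = 0, None
    for j, cj in enumerate(objective_coeffs):
        if cj > best:
            best, best_j = cj, j
    return best_j

# With Z = 300 x1 + 500 x2 expressed over the out-of-basis variables x1, x2:
print(entering_variable([300, 500]))   # -> 1, i.e. x2 enters the basis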
But to keep a feasible solution and not leave the feasible region,
the components must remain non-negative: we
must have x3 ≥ 0, x4 ≥ 0 and x5 ≥ 0. Indeed:
constraint (C1) imposes no restriction on the increase of x2,
because it does not contain x2. All in all, the most restrictive constraint
for x2 is therefore (C2). We must thus stop at the value x2 = 6 (this is the
answer to question a), for which the variable x4 becomes zero;
going beyond this value would make it negative. Therefore, x2 enters the
basis and x4 leaves it.
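A minimal sketch of this minimum-ratio test in Python; the column of x2 and the right-hand sides used below are assumed values for illustration, chosen to be consistent with (C2) being binding and x2 stopping at 6:

def leaving_variable(column, rhs):
    """Return (row index, ratio) for the smallest ratio rhs[i] / column[i]
    over the rows with column[i] > 0, i.e. the first basic variable driven
    to zero as the entering variable increases; (None, inf) if nothing
    limits the increase."""
    best_ratio, best_i = float("inf"), None
    for i, (a, r) in enumerate(zip(column, rhs)):
        if a > 0 and r / a < best_ratio:
            best_ratio, best_i = r / a, i
    return best_i, best_ratio

col_x2 = [0, 2, 2]      # assumed coefficients of x2 in (C1), (C2), (C3)
rhs    = [4, 12, 18]    # assumed right-hand sides
print(leaving_variable(col_x2, rhs))   # -> (1, 6.0): (C2) limits x2 to 6, x4 leaves the basis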
The first iteration is complete. The second basic solution can
be summarized as follows:
Second iteration
Given Z = 3000 + 300x1 - 250x4, we therefore select x1 as the
entering variable. Indeed, it is the variable with the largest positive coefficient, and
moreover the only possible one.
To determine the leaving variable, let us study the variation of the basic variables
as x1 increases.
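For completeness, here is a hedged end-to-end check of the example with an off-the-shelf solver, using the same assumed constraint data as in the sketches above (scipy.optimize.linprog minimizes, so the objective is negated); under these assumptions the optimum would be x1 = 2, x2 = 6 and Z = 3600:

from scipy.optimize import linprog

res = linprog(c=[-300, -500],                    # maximize 300 x1 + 500 x2
              A_ub=[[1, 0], [0, 2], [3, 2]],     # assumed constraints (C1), (C2), (C3)
              b_ub=[4, 12, 18],
              bounds=[(0, None), (0, None)])     # x1 >= 0, x2 >= 0
print(res.x, -res.fun)                           # [2. 6.] 3600.0 under the assumed data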