
Optimization and Linear Programming

Euler has various methods to solve linear and nonlinear optimization
problems. Moreover, the methods of Maxima can also be used.

For a first example, we solve the following problem with the simplex
algorithm.

Maximize

   2x + y + z

with

   x + y + z <= 1,   2x + z <= 1,   x, y, z >= 0

First we define the necessary matrices and vectors.
>shortformat; A:=[1,1,1;2,0,1]
          1           1           1 
          2           0           1 
>b:=[1;1]
          1 
          1 
>c:=[2,1,1]
[ 2  1  1 ]
Now we have to maximize c.x under the conditions A.x<=b and x>=0. The
simplex function returns the solution and a flag. If the flag is 0, the
algorithm found a solution.
>{x,r}:=simplex(A,b,c,>max); r,
0
>x
        0.5 
        0.5 
          0 
We can now compute the maximal value.
>c.x
[ 1.5 ]
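For a cross-check outside Euler, the same LP can be solved with SciPy's
linprog (a minimal Python sketch, assuming SciPy is installed; this is
not Euler code). linprog minimizes and keeps the variables nonnegative
by default, so we negate c to maximize.

  from scipy.optimize import linprog

  A = [[1, 1, 1], [2, 0, 1]]
  b = [1, 1]
  c = [2, 1, 1]
  # negate c, since linprog minimizes; variables are >= 0 by default
  res = linprog(c=[-v for v in c], A_ub=A, b_ub=b)
  print(res.x)     # approximately [0.5, 0.5, 0]
  print(-res.fun)  # maximal value 1.5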
With the >check flag, the simplex function will issue an error message
if there is no solution, so there is no need to check the return
code r.
>simplex(A,b,c,>max,>check)
        0.5 
        0.5 
          0 
For Maxima, we have to load the simplex package first.
>&load(simplex);
Then we can solve the problem in symbolic form.
>&maximize_lp(2*x+y+z,[x>=0,y>=0,z>=0,x+y+z<=1,2*x+z<=1])
                       3              1      1
                      [-, [z = 0, y = -, x = -]]
                       2              2      2

An integer solution can be computed with the branch and bound
algorithm using the function "intsimplex".
>x:=intsimplex(A,b,c,>max,>check)
          0 
          1 
          0 
>c.x
[ 1 ]
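As an independent check of the branch and bound result, recent SciPy
versions (1.9 or later) can solve the same problem as a mixed-integer
program (a sketch; integrality=1 marks every variable as integer):

  import numpy as np
  from scipy.optimize import linprog

  A = np.array([[1, 1, 1], [2, 0, 1]])
  b = np.array([1, 1])
  c = np.array([2, 1, 1])
  res = linprog(c=-c, A_ub=A, b_ub=b, integrality=np.ones(3))
  print(res.x, -res.fun)  # an optimal integer point such as [0, 1, 0], value 1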
The function ilpsolve from the LPSOLVE package (see below for details
and more examples) finds the same solution.
>ilpsolve(A,b,c,>max)
          0 
          1 
          0 
In the case of two variables, the function feasibleArea can compute
the corners of the feasible region, which we can then plot.
>A:=[1/10,1/8;1/9,1/11;1/12,1/7]; b:=[1;1;1]; xa:=feasibleArea(A,b); ...
 plot2d(xa[1],xa[2],filled=1,style="/",a=0,b=10,c=0,d=10);  ...
   insimg();

[Figure: the feasible area of A.x <= b, x >= 0]
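The idea behind feasibleArea can be sketched in a few lines of Python
(our own code, not Euler's implementation): intersect each pair of
constraint lines, including the sign restrictions, and keep the
intersection points that satisfy all constraints.

  import numpy as np
  from itertools import combinations

  A = np.array([[1/10, 1/8], [1/9, 1/11], [1/12, 1/7]])
  b = np.array([1.0, 1.0, 1.0])

  # append the sign restrictions x >= 0 and y >= 0 as -x <= 0, -y <= 0
  Afull = np.vstack([A, -np.eye(2)])
  bfull = np.concatenate([b, [0.0, 0.0]])

  corners = []
  for i, j in combinations(range(len(bfull)), 2):
      M = Afull[[i, j]]
      if abs(np.linalg.det(M)) < 1e-12:
          continue                           # parallel lines do not intersect
      p = np.linalg.solve(M, bfull[[i, j]])
      if np.all(Afull @ p <= bfull + 1e-9):
          corners.append(p)                  # feasible intersection: a corner

  for p in corners:
      print(np.round(p, 4))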

>x=simplex(A,b,[5,8],max=1); fracprint(x);
    60/13 
    56/13 
>plot2d(x[1],x[2],add=1,points=1):

[Figure: the feasible area with the simplex solution (60/13, 56/13) marked]

Here is an integer solution.
>intsimplex(A,b,[5,8],max=1)
                  5 
                  4 
>plot2d(5,4,points=true,style="ow",add=true):

[Figure: the integer solution (5,4) marked in the plot]

The Gauss-Jordan Scheme

For a demonstration, we solve the following problem using the pivotize
function of Euler, which performs one step of the Gauß or the
Gauß-Jordan algorithm. We use the Gauß-Jordan form, since it is well
suited for this problem.

The problem in the form

   maximize  2x + y + z   subject to   x + y + z <= 1,   4x + z <= 1,   x, y, z >= 0

is written in the following scheme.
>shortestformat; A := [1,1,1;4,0,1]; b := [1;1]; M := (A|b)_(-[2,1,1])
        1         1         1         1 
        4         0         1         1 
       -2        -1        -1         0 
For the output of the Gauß-Jordan scheme, we need the names of the
variables, and for bookkeeping we need an index vector 1:6 for the
current permutation of the variables.
>v := ["s1","s2","","x","y","z"]; n := 1:6;
With that information, we can print the scheme.
>gjprint(M,n,v)
                   x         y         z
        s1     1.000     1.000     1.000     1.000
        s2     4.000     0.000     1.000     1.000
              -2.000    -1.000    -1.000     0.000
The simplex algorithm now requires exchanging x for s2. If we pass the
variable names, the scheme is printed automatically after each step.

Note that the algorithm changes M and n. So, be careful not to run
this command twice!
>gjstep(M,2,1,n,v);
                  s2         y         z
        s1    -0.250     1.000     0.750     0.750
         x     0.250     0.000     0.250     0.250
               0.500    -1.000    -0.500     0.500
Next, we have to exchange y for s1.
>gjstep(M,1,2,n,v);
                  s2        s1         z
         y    -0.250     1.000     0.750     0.750
         x     0.250     0.000     0.250     0.250
               0.250     1.000     0.250     1.250
This finishes the algorithm, and we find the optimal values

   x = 1/4,   y = 3/4,   z = 0,   with maximal value 5/4.

We can extract the solution with gjsolution().
>fracprint(gjsolution(M,n))
[ 1/4  3/4  0 ]
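The exchange step performed by gjstep is essentially a rank-one update
of the scheme. The following Python sketch (our own code, not Euler's
implementation) reproduces both steps above with 0-based indices:

  import numpy as np

  def exchange_step(M, i, j):
      # one Gauss-Jordan exchange with pivot M[i, j]
      M = M.astype(float)
      p = M[i, j]
      out = M - np.outer(M[:, j], M[i, :]) / p   # update of the other entries
      out[i, :] = M[i, :] / p                    # pivot row
      out[:, j] = -M[:, j] / p                   # pivot column
      out[i, j] = 1.0 / p                        # pivot element
      return out

  M = np.array([[1, 1, 1, 1], [4, 0, 1, 1], [-2, -1, -1, 0]], dtype=float)
  M = exchange_step(M, 1, 0)   # exchange x for s2
  M = exchange_step(M, 0, 1)   # exchange y for s1
  print(M)                     # matches the final scheme above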

Non-linear Optimization

Euler has some methods for non-linear optimization.

For one-dimensional minimization or maximization of a function, there
are fmin and fmax. These functions use a clever trisection and work
for convex (resp. concave) functions.
>longformat; xm := fmin("x^x",0,1)
0.367879431154
>plot2d("x^x",0,1); plot2d(xm,xm^xm,points=true,add=true):

[Figure: x^x on [0,1] with the minimum marked]
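To illustrate this kind of bracketing search, here is a plain
golden-section search in Python (a sketch; fmin's actual trisection
scheme may differ in detail):

  import math

  def golden_min(f, a, b, tol=1e-10):
      # shrink [a, b] around the minimum of a unimodal function
      g = (math.sqrt(5) - 1) / 2
      x1, x2 = b - g * (b - a), a + g * (b - a)
      while b - a > tol:
          if f(x1) < f(x2):
              b, x2 = x2, x1
              x1 = b - g * (b - a)
          else:
              a, x1 = x1, x2
              x2 = a + g * (b - a)
      return (a + b) / 2

  print(golden_min(lambda x: x**x, 1e-12, 1))   # approximately 1/e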

We can also use Maxima to solve for the zero of the derivative.
>&assume(x>0); &solve(diff(x^x,x)=0,x)
                                - 1   x
                          [x = E   , x  = 0]

So the point is actually exp(-1).
>exp(-1)
0.367879441171
Of course, we can use any zero finding method of Euler, computing
the derivative in Maxima.
>secant(&diff(x^x,x),0.1,1)
0.367879441171
A purely numerical solution requires a numerical approximation of
the derivative. So we program that for any expression or function
expr.
>function g(x,expr) ...
return dif(expr,x)
endfunction
Then we call the secant method, passing the expression as an
extra argument to g.
>secant("g",0.1,1;"x^x")
0.367879441172
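The same idea in Python (a sketch): a secant iteration applied to a
central-difference approximation of the derivative of x^x.

  def dnum(f, x, h=1e-6):
      return (f(x + h) - f(x - h)) / (2 * h)   # central difference

  def secant(g, x0, x1, tol=1e-12, maxit=100):
      for _ in range(maxit):
          g0, g1 = g(x0), g(x1)
          if g1 == g0:
              break                            # flat secant, stop
          x0, x1 = x1, x1 - g1 * (x1 - x0) / (g1 - g0)
          if abs(x1 - x0) < tol:
              break
      return x1

  f = lambda x: x**x
  print(secant(lambda x: dnum(f, x), 0.1, 1))  # approximately 1/e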
Next, we try a multidimensional minimization. The function is

   f(x,y) = y^2 - y*x + x^2 + 2*exp(x)

>expr:="y^2-y*x+x^2+2*exp(x)";
If we plot it in 3D we can vaguely see the minimum.
>plot3d(expr,>contour,angle=-60°):

[Figure: 3D plot of the function with contour lines]

It is better to use a contour plot. The minimum is clearly visible.
>plot2d(expr,niveau="auto",grid=4,>hue,>spectral):

[Figure: contour plot of the function with the minimum visible]

Now we try to solve the problem symbolically.
>expr &= y^2-y*x+x^2+2*exp(x)
                          2            x    2
                         y  - x y + 2 E  + x

However, Maxima fails here. The system is too complex.
>&solve(gradient(expr,[x,y]))
Maxima said:

algsys: tried and failed to reduce system to a polynomial in one variable; give up.
 -- an error. To debug this try: debugmode(true);


Error in :
&solve(gradient(expr,[x,y]))
                            ^
The Nelder-Mead method solves this very well. We need to write a
function evaluating the expression at a 1x2 vector.
>function f([x,y]) &= expr
                          2            x    2
                         y  - x y + 2 E  + x

Now we can call Nelder-Mead, passing the expression to the function.
>res:=nelder("f",[0,0];expr)
[ -0.677309026763  -0.338655071019 ]
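For comparison, SciPy's implementation of the Nelder-Mead method finds
the same point (a sketch, assuming SciPy is installed):

  import numpy as np
  from scipy.optimize import minimize

  f = lambda v: v[1]**2 - v[1]*v[0] + v[0]**2 + 2*np.exp(v[0])
  res = minimize(f, [0, 0], method="Nelder-Mead", options={"xatol": 1e-10})
  print(res.x)   # approximately [-0.67731, -0.33865]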
There is also a wrapper function for the Newton method in two
variables. We use it to find the critical point.
>mxmnewtonfxy(&diff(expr,x),&diff(expr,y),[1,0])
[ -0.677309305781  -0.338654652891 ]
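The underlying iteration is the multivariate Newton method applied to
the gradient: replace v by v minus the Newton step solved from the
Hessian. A plain NumPy sketch (not mxmnewtonfxy itself):

  import numpy as np

  def grad(v):
      x, y = v
      return np.array([-y + 2*np.exp(x) + 2*x, 2*y - x])

  def hess(v):
      x, y = v
      return np.array([[2*np.exp(x) + 2, -1], [-1, 2]])

  v = np.array([1.0, 0.0])
  for _ in range(20):
      v = v - np.linalg.solve(hess(v), grad(v))   # undamped Newton step
  print(v)   # approximately [-0.677309, -0.338655]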
We can even try to get a guaranteed inclusion for the solution.
First, we define a function evaluating the derivative (gradient)
of f. We need to declare the global variable %e, which Maxima
puts into our expressions.
>function Jf ([x,y]) &= gradient(f(x,y),[x,y])
                               x
                     [- y + 2 E  + 2 x, 2 y - x]

Let us check our solution.
>Jf(res)
[ 1.25963816155e-006  -1.11527498425e-006 ]
We can also define the derivative of Jf, which is the Hessian matrix
of f.
>function Hf ([x,y]) &= hessian(f(x,y),[x,y])
                          [    x          ]
                          [ 2 E  + 2  - 1 ]
                          [               ]
                          [   - 1      2  ]

Compute the Hessian matrix at the solution.
>Hf(res)
      3.01596424214                  -1 
                 -1                   2 
Now we check if it is positive definite. This already proves that we
have a local minimum.
>re(eigenvalues(Hf(res)))
[ 1.38635569693  3.62960854521 ]
Even better, we can use the interval Newton method in several
dimensions to obtain a guaranteed inclusion of the critical
point.
>ires:=inewton2("Jf","Hf",[0,0])
[ ~-0.6773093057811569,-0.6773093057811558~
~-0.3386546528905785,-0.33865465289057783~ ]

Newton-Barrier Method

This is a method for non-linear or linear minimization of a convex
function with linear restrictions.

Let us try a random linear example first with 1000 conditions and two
variables.
>A=1+random(1000,2); b=100+random(1000,1)*100;
>A=A_-id(2); b=b_zeros(2,1);
>c=[-1,-1];
The Simplex algorithm gets the following result.
>simplex(A,b,c,eq=-1,<restrict,>min,>check)
      33.3940435502 
      20.1801084471 
For the Newton-Barrier method, we need the gradient and the Hessian of
f.
>function f([x,y]) &= -x-y;
>function df([x,y]) &= gradient(f(x,y),[x,y]);
>function Hf([x,y]) &= hessian(f(x,y),[x,y]);
The function can produce a history of the iterates. The iteration
converges quite quickly.
>X=newtonbarrier("f","df","Hf",A,b,[1,1],>history); X,
                  1                   1 
      1.31362321133        1.3142609586 
      26.7800661388       26.7725713513 
      27.2347627955       26.3325425497 
       33.287578081       20.2863340555 
      33.3638114306       20.2103045092 
       33.393083785       20.1810619151 
      33.3932348534       20.1809162794 
       33.394033734       20.1801181695 
        33.39403447       20.1801175176 
      33.3940434507       20.1801085456 
      33.3940434578       20.1801085394 
      33.3940435492        20.180108448 
      33.3940435493        20.180108448 
      33.3940435502       20.1801084471 
      33.3940435502       20.1801084471 
      33.3940435502       20.1801084471 
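To make the method concrete, here is a minimal log-barrier sketch in
Python (our own code, not Euler's newtonbarrier): minimize
t*c.x - sum(log(b-A.x)) by Newton steps and increase t. For
reproducibility we reuse the small feasible region from the
feasibleArea example above instead of the random data.

  import numpy as np

  def barrier_lp(A, b, c, x, t=1.0, mu=10.0, outer=8, inner=20):
      for _ in range(outer):
          for _ in range(inner):                  # Newton steps for fixed t
              s = b - A @ x                       # slacks, must stay positive
              g = t * c + A.T @ (1 / s)           # gradient of t*c.x - sum(log s)
              H = A.T @ np.diag(1 / s**2) @ A     # Hessian of the barrier term
              dx = np.linalg.solve(H, -g)
              a = 1.0
              while np.any(b - A @ (x + a * dx) <= 0):
                  a /= 2                          # backtrack to stay feasible
              x = x + a * dx
          t *= mu                                 # tighten the barrier
      return x

  # maximize x + y, i.e. c = [-1, -1], over the region from the plot above
  A = np.array([[1/10, 1/8], [1/9, 1/11], [1/12, 1/7], [-1, 0], [0, -1]])
  b = np.array([1, 1, 1, 0, 0], dtype=float)
  print(barrier_lp(A, b, np.array([-1.0, -1.0]), np.array([1.0, 1.0])))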
Let us try a nonlinear target function. We maximize

   x*y

Since we need a convex function, we take

   f(x,y) = 1/(x*y)

instead, and minimize this function.
>function f([x,y]) &= 1/(x*y);
>function df([x,y]) &= gradient(f(x,y),[x,y]);
>function Hf([x,y]) &= hessian(f(x,y),[x,y]);
>newtonbarrier("f","df","Hf",A,b,[1,1])
[ 26.7966508428  26.7712465198 ]

Linear Fit

Next, we want to demonstrate a linear fit.

First we assume that we have measurements of y=x with a normally
distributed error.
>x:=-1:0.1:1; y:=x+0.05*normal(size(x));
>plot2d(x,y,style="[]",a=-1,b=1,c=-1.1,d=1.1,points=1);
Now we fit a polynomial of degree 1 to this.
>p:=polyfit(x,y,1), plot2d("evalpoly(x,p)",add=1,color=4):
[ 0.000658325995838  1.00358581674 ]

[Figure: the data points and the fitted line]

We compute the sum of the squares of the errors, and the maximum
error.
>sum((evalpoly(x,p)-y)^2), max(abs(evalpoly(x,p)-y))
0.0281163272308
0.0746518209681
Assume we want to minimize the maximal error instead. We use
Nelder-Mead again.
>function f(p,x,y) := max(abs(evalpoly(x,p)-y))
>q:=neldermin("f",[0,0];x,y)
[ -0.00269898311207  0.989367526319 ]
Let us compare. This time, the sum of squared errors must be larger,
and the maximal error smaller.
>sum((evalpoly(x,q)-y)^2), max(abs(evalpoly(x,q)-y))
0.0299096595698
0.0708999848647
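Minimizing the maximal error is in fact a linear program, the
Chebyshev fit: minimize t subject to -t <= p0 + p1*x_i - y_i <= t for
all i. Here is a SciPy sketch of that formulation; the data is freshly
generated, so the numbers differ from the run above.

  import numpy as np
  from scipy.optimize import linprog

  x = np.arange(-1, 1.001, 0.1)
  y = x + 0.05 * np.random.randn(x.size)

  # variables (p0, p1, t); two rows per point encode |p0 + p1*x_i - y_i| <= t
  V = np.column_stack([np.ones_like(x), x])
  A = np.block([[V, -np.ones((x.size, 1))],
                [-V, -np.ones((x.size, 1))]])
  b = np.concatenate([y, -y])
  res = linprog(c=[0, 0, 1], A_ub=A, b_ub=b, bounds=(None, None))
  p0, p1, t = res.x
  print(p0, p1, t)   # line coefficients and the minimal maximal error t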

LPSOLVE Package

Euler also contains the LPSOLVE library, ported to Euler by Peter
Notebaert.

  ../reference/lpsolve

The function ilpsolve uses this package. If you need the package in
another form, you have to start it by loading lpsolve.e. See
Notebaert's examples in the user directory for more information.

We solve a first problem. The syntax is similar to intsimplex. The
restrictions are

   2x + y <= 5,   -4x + 4y <= 5,   x, y >= 0 integer

and we maximize

   -x + 2y

>longformat;
>f := [-1, 2]; // target function
>A := [2, 1;-4, 4]; // restriction matrix
>b := [5; 5]; // right hand side
>e := -[1; 1]; // type of restrictions (all <=)
Solve with the LP solver.
>x := ilpsolve(A,b,f,e,max=1)
                  1 
                  2 
Compare with the slower intsimplex function.
>intsimplex(A,b,f,e,max=1)
                  1 
                  2 
Draw the feasible area.
>xa := feasibleArea(A,b); ...
 plot2d(xa[1],xa[2],filled=1,style="/",a=0,b=3,c=0,d=3); ...
 plot2d(x[1],x[2],points=1,add=1); insimg();

[Figure: the feasible area with the optimal point (1,2)]

We can also use LPSOLVE for non-integer programming.
>f = [50, 100]; ...
 A = [10, 5; 4, 10; 1, 1.5]; ...
 b = [2500; 2000; 450]; ...
 e = [-1; -1; -1]; ...
 x=ilpsolve(A,b,f,e,i=[],max=1) // i contains the integer variables
              187.5 
                125 
>intsimplex(A,b,f,e,max=1,i=[])
              187.5 
                125 
>xa:=feasibleArea(A,b); ...
 plot2d(xa[1],xa[2],filled=1,style="/",a=0,b=300,c=0,d=300); ...
 plot2d(x[1],x[2],points=1,add=1):

[Figure: the feasible area with the optimal point (187.5, 125)]

Next, we have a lower bound A.x >= b, and some upper bounds for
the variables. Furthermore, we minimize the target function. ilpsolve
can add such restrictions in a special form (here vub, the upper
bounds of the variables).
>f = [40, 36]; ...
 A = [5, 3]; ...
 b = [45]; ...
 e = 1; ...
 {v,x} = ilpsolve(A,b,f,e,vub=[8,10],max=0);
>v
                  8 
                  2 
>x
392
To solve this in intsimplex, we can add the restrictions to the
matrix A.
>intsimplex(A_id(2),b_[8;10],f,e_[-1;-1],max=0)
                  8 
                  2 
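With SciPy's linprog, the variable bounds go directly into the bounds
argument, and the lower bound A.x >= b becomes -A.x <= -b (a sketch,
assuming SciPy 1.9 or later for the integrality option):

  from scipy.optimize import linprog

  res = linprog(c=[40, 36], A_ub=[[-5, -3]], b_ub=[-45],
                bounds=[(0, 8), (0, 10)], integrality=1)
  print(res.x, res.fun)   # [8, 2] with minimal cost 392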
The feasible area looks like this. It is always computed with
inequalities of the form A.x <= b. So we must modify our matrix
again.
>xa:=feasibleArea((-A)_(id(2)),(-b)_[8;10]); ...
 plot2d(xa[1],xa[2],filled=1,style="/",a=0,b=12,c=0,d=12);  ...
   insimg();

[Figure: the feasible area with the variable bounds]

Now we restrict only one of the variables to integer values.
>f = [10, 6, 4]; ...
 A = [1.1, 1, 1; 10, 4, 5; 2, 2, 6]; ...
 b = [100; 600; 300]; ...
 e = [-1; -1; -1]; ...
 ilpsolve(A,b,f,e,i=[2],max=1)
      35.4545454545 
                 61 
                  0 
>intsimplex(A,b,f,e,max=1,i=[2])
      35.4545454545 
                 61 
                  0 
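SciPy's linprog also accepts a per-variable integrality vector for
such mixed problems, analogous to i=[2] above (a sketch; note the
0-based position of the integer variable):

  from scipy.optimize import linprog

  res = linprog(c=[-10, -6, -4],
                A_ub=[[1.1, 1, 1], [10, 4, 5], [2, 2, 6]],
                b_ub=[100, 600, 300],
                integrality=[0, 1, 0])
  print(res.x)   # approximately [35.4545, 61, 0]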
Let us try a larger problem with 100 random restrictions in 10
variables.
>seed(1); A=random(100,10); b=random(100,1)*10000; c=ones(1,10);
First, the solution without integer restrictions.
>ilpsolve(A,b,c,>max,i=[])
                  0 
                  0 
      359.617713064 
                  0 
                  0 
                  0 
                  0 
      337.613704493 
                  0 
                  0 
Then the integer solution.
>ilpsolve(A,b,c,>max)
                  0 
                  0 
                361 
                  0 
                  0 
                  0 
                  0 
                335 
                  0 
                  0 
