by R. Grothmann

This notebook rewrites the problem of uniform approximation of a function on an interval as a problem of linear optimization. We compare the result with the Remez algorithm.
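In discretized form, this is the following linear program: minimize h subject to

   -h <= p(x[i]) - f(x[i]) <= h    for all sample points x[i],

where p runs through the polynomials of degree at most n. Both the coefficients of p and the bound h are variables of the program.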
>function f(x) := exp(x);
We optimize in [-1,1]. For the best approximation, the zeros of the Chebyshev polynomials would be the natural choice of discretization points; here we simply take equally spaced points.
>x=equispace(-1,1,100); y=f(x);
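Alternatively, the Chebyshev points could be generated directly. The following line is only a sketch (not used below), assuming the standard formula cos((2k-1)*pi/(2m)) for the zeros of T_m:

>xc=cos((2*(1:100)-1)*pi/200); yc=f(xc);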
The degree of the approximating polynomial.
>n=5;
The optimization problem is terribly unstable. It works only for low degrees, and not too many points of discretization. To improve the situation, we use the Chebyshev polynomials as a basis instead of the monomials.
>A=cheb(x',0:n);
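For comparison, the monomial basis would lead to a Vandermonde type matrix, which is notoriously ill-conditioned. A sketch (not used below, assuming elementwise broadcasting of the power operator over a column and a row vector):

>Am=x'^(0:n);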
Now we set up the inequalities. The first block demands p(x[i]) - h <= y[i] (eq=-1 stands for <=), the second block p(x[i]) + h >= y[i] (eq=1 stands for >=); together they say |p(x[i]) - y[i]| <= h.
>H1=A|(-1); b1=y'; eq1=-ones(size(b1));
>H2=A|1; b2=y'; eq2=ones(size(b1));
The target functional is h. All coefficients except h are unrestricted in sign; the vector restrict marks only the last variable h as non-negative.
>c=zeros(1,n+1)|1; restrict=c;
We solve the linear program with the simplex algorithm; the flag >check makes it stop with an error message if no optimal solution is found.
>a=simplex(H1_H2,b1_b2,c,restrict=c,eq=eq1_eq2,>check)
1.26606588221 1.1303182116 0.271495347463 0.0443368771649 0.00547423567535 0.00054610487834 4.51694651668e-005
The target value of h is the last entry of the solution vector.
>a[n+2]
4.51694651668e-005
Now we plot the error function.
>yb=(A.a[1:n+1])'; ... plot2d(x,y-yb):
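As a quick check, the maximal discrete error should reproduce the optimal value h of the linear program:

>max(abs(y-yb))

The error curve shows the equioscillation which is characteristic for the best uniform approximation.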
Let us compare this with the Remez algorithm, which computes the best approximation directly by iterating on the extremal points of the error.
>{t,d,h}=remez(x,y,n)
4.5169465072e-005
>ybr=interpval(t,d,x); ... plot2d(x,y-ybr):
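To see how close the two solutions are, we can draw both error curves into one plot. This is a sketch, assuming plot2d plots each row of a matrix as a separate curve:

>plot2d(x,(y-yb)_(y-ybr)):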
We can also fit in the L2 sense. This is the usual least-squares regression with a polynomial basis.
>af=fit(A,y'); ... ybf=(A.af)'; ... plot2d(x,y-ybf):
The error is larger, of course, if measured in the maximum norm.
>max(abs(ybf-y))
4.82581886594e-005
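Conversely, the least-squares fit must have the smaller error in the L2 sense. As a sketch, the discrete L2 errors of both fits can be compared with

>sqrt(sum((y-ybf)^2))
>sqrt(sum((y-yb)^2))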