This is an introduction to the way Euler handles matrices for linear algebra. You should be familiar with the Euler matrix language by now (see: 01 - Introduction to the Matrix Language). Let us start with a simple matrix.
>A:=[1,2,3;4,5,6;7,8,9]
1 2 3
4 5 6
7 8 9
"det" computes the determinant, which surprisingly is 0 for this matrix.
>det(A)
0
Consequently, the kernel of A will contain non-zero elements.
>kernel(A)
1
-2
1
Let us check this, using the matrix product.
>A.kernel(A)
0
0
0
By the way, the Euler matrix language provides a very simple way to produce this matrix. We generate the vector 1:9, and put its entries into the rows of a 3x3 matrix.
>redim(1:9,3,3)
1 2 3
4 5 6
7 8 9
Now we try to solve a linear system.
>b:=[3;4;3]
3
4
3
This fails, since the matrix is singular.
>x:=A\b
Determinant zero!
Error in : x:=A\b
                ^
Singularity is checked against the internal epsilon, and in this case the numerical solver recognizes the matrix as singular. Euler has another solver, which applies the pseudo-inverse of A to b. Its result minimizes the distance to b, and among all minimizers it is the one with least norm. But it is not a proper solution of the system.
>x:=svdsolve(A,b); fracprint(x)
-5/3
0
5/3
The error is still substantial, but it is nevertheless the smallest possible.
>norm(A.x-b)
0.816496580928
This is as close as we can get to b.
>A.x
3.33333333333
3.33333333333
3.33333333333
The fit function finds a solution with the same error, and it is much faster. However, the norm of its solution is not the smallest possible.
>x:=fit(A,b); fracprint(x), norm(A.x-b)
-10/3
10/3
0
0.816496580928
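Since A.kernel(A) is zero, adding any multiple of the kernel vector to a solution leaves the residual unchanged; only the norm of the solution differs. We can check the least-norm property directly (a verification we add here; the values follow from the fractions printed above).
>norm(svdsolve(A,b)), norm(fit(A,b))
2.35702260396
4.71404520791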
We change one element of A.
>A[3,3]:=0
1 2 3
4 5 6
7 8 0
Now, A is regular.
>det(A)
27
And we can properly solve the linear system.
>x:=A\b; x,
-2.11111111111
2.22222222222
0.222222222222
Often, the fractional output format is preferred. We set fractional output with "fracformat", and reset to normal with "longformat", or we simply use "fracprint(x)". Euler will compute a continued fraction approximation for the output.
>fracprint(x);
-19/9
20/9
2/9
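For comparison, the global format can be switched and reset afterwards (same values as above).
>fracformat(10); x, longformat;
-19/9
20/9
2/9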
Indeed, this is the solution.
>norm(A.x-b)
0
Let us try a larger problem. We generate a 100x100 random matrix.
>A:=normal(100,100);
The determinant is huge.
>det(A)
-1.80259873511e+078
We multiply with a random vector.
>x:=random(rows(A),1); b=A.x;
The norm of the error is usually quite small. We need to switch to longestformat, since otherwise the error would be rounded to 0 in the output.
>longestformat; norm(A\b-x), longformat;
8.981611028519362e-014
In the following commands, we demonstrate how to do the Gram-Schmidt method by hand for the vectors v and w.
>v:=[1;1;1]; w:=[1;2;-1]; v1:=w-(w'.v)/(v'.v)*v; fracprint(v1);
1/3
4/3
-5/3
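To see one more step of the method, we can orthogonalize a further vector against v and v1 in the same way. The vector u:=[1;0;0] is an arbitrary choice for this illustration; note that the result is parallel to the cross product computed below.
>u:=[1;0;0]; u1:=u-(u'.v)/(v'.v)*v-(u'.v1)/(v1'.v1)*v1; fracprint(u1);
9/14
-3/7
-3/14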
We could continue like this for larger bases. But here we take the cross product to get the third vector.
>v2:=crossproduct(v,w),
-3
2
1
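As a quick check, the cross product is orthogonal to both v and w.
>v2'.v, v2'.w
0
0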
Now, the normalized vectors v, v1, v2 should form an orthogonal matrix.
>M:=(v/norm(v))|(v1/norm(v1))|(v2/norm(v2))
0.57735026919 0.154303349962 -0.801783725737
0.57735026919 0.617213399848 0.534522483825
0.57735026919 -0.77151674981 0.267261241912
Let us check this.
>norm(M.M'-id(3))
0
Let us project the vector b onto the plane spanned by v and w. The coefficients l1 and l2 of v and w minimize the norm of l1*v+l2*w-b. The function "fit" computes just this projection.
>A:=v|w; b:=[1;0;0]; l:=fit(A,b); fracprint(l);
2/7
1/14
We can compute the projection A.l.
>z:=A.l; fracprint(z);
5/14
3/7
3/14
And we compute the distance of b to the plane, which is the norm of z-b.
>norm(z-b)
0.801783725737
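Indeed, the residual z-b is orthogonal to both spanning vectors, as it must be for an orthogonal projection (a check we add here).
>fracprint((z-b)'.A)
0 0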
We could also use our matrix M to get the same result. The first two columns span the plane and are orthonormal.
>l:=M'.b; l=l[[1,2]]; M[:,[1,2]].l
0.357142857143
0.428571428571
0.214285714286
Another idea is to use the projection along v2. The matrix of the orthogonal projection onto v2 is v2.v2', after v2 is normalized to 1; subtracting this from the identity projects onto the plane.
>P:=id(3)-v2.v2'/norm(v2)^2; P.b
0.357142857143
0.428571428571
0.214285714286
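A projection matrix satisfies P.P=P. We can verify this numerically; the result should be 0 up to rounding.
>norm(P.P-P)
0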
Maxima provides many functions for linear algebra too. We can define the matrix as data for symbolic expressions.
>A &:= [1,2,3;4,5,6;7,8,9]
1 2 3
4 5 6
7 8 9
Since the matrix is singular, its rank must be less than 3; in fact, it is 2.
>&rank(A)
2
The result of the Gauß algorithm applied to A can be computed with echelon. The non-zero rows of the echelon form are a basis of the row space of A. Moreover, the columns of A at the positions of the pivot columns of the echelon form are a basis of the column space of A.
>&echelon(A)
[ 1  2  3 ]
[         ]
[ 0  1  2 ]
[         ]
[ 0  0  0 ]
Here is one way to make a linear system from the matrix A, and to solve it. The system is A.[x,y,z]=[1,1,1].
>&transpose(A.[x,y,z]-[1,1,1]), sol &= solve(%[1],[x,y,z])
[ 3 z + 2 y + x - 1  6 z + 5 y + 4 x - 1  9 z + 8 y + 7 x - 1 ]
[[x = %r1 - 1, y = 1 - 2 %r1, z = %r1]]
The list of symbols is contained in a temporary list named %rnum_list, which we can access with temp(1). We can assign a value to each symbol as follows.
>&sol with temp(1)=0
[[x = - 1, y = 1, z = 0]]
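Any other value of the parameter yields a valid solution as well. For example:
>&sol with temp(1)=1
[[x = 0, y = - 1, z = 1]]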
For an invertible matrix, there is also a function lusolve, which directly solves Ax=b.
>&lusolve([1,2,3;4,5,6;7,9,10],[1;1;1])
[  0  ]
[     ]
[ - 1 ]
[     ]
[  1  ]
With Maxima, we can use symbolic matrices containing variables or other expressions.
>A &= [1,2,3;4,5,6;7,8,a]
[ 1  2  3 ]
[         ]
[ 4  5  6 ]
[         ]
[ 7  8  a ]
We can easily see that a=9 makes the matrix singular.
>&determinant(A)|factor
- 3 (a - 9)
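Indeed, substituting a=9 reduces the rank to 2, just as for the numerical matrix above (subst is the usual Maxima substitution).
>&rank(subst(a=9,A))
2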
We can now invert this matrix symbolically; this works as long as the determinant is not 0. Multiplying the inverse by the determinant clears the denominators.
>Ainv &= ratsimp(inv(A)*det(A))
[ 5 a - 48  24 - 2 a  - 3 ]
[                         ]
[ 42 - 4 a   a - 21     6 ]
[                         ]
[   - 3        6      - 3 ]
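As a check, multiplying A with this matrix and dividing by the determinant should give the identity; this verification is our own addition.
>&ratsimp(A.Ainv/determinant(A))
[ 1  0  0 ]
[         ]
[ 0  1  0 ]
[         ]
[ 0  0  1 ]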
We can print this in LaTeX using the comment line "maxima: Ainv".
Let us do some eigenvalue problems. First, we generate a 3x3 diagonal matrix with the specified values on the 0-th diagonal.
>D:=diag(3,3,0,[1,2,3])
1 0 0
0 2 0
0 0 3
Using a random matrix M, we generate a matrix A which is similar to D.
>M:=random(3,3); A:=M.D.inv(M)
5.61970807974 3.54267118987 -5.38696231669
5.63575907854 6.88812359997 -8.29590933715
5.76664081816 5.13392591568 -6.50783167972
Indeed, the eigenvalues of A are just 1, 2, 3. However, the routine returns complex values, which we need to convert to real.
>real(eigenvalues(A))
[ 1 2 3 ]
We can also use the function eigen to compute both the eigenvalues and the eigenvectors.
>{l,v}:=eigen(A); real(v),
0.21156916574 0.799639906292 0.543533398278
0.723424676181 0.0865277720892 0.555264457266
0.657187359892 0.594212895285 0.629486161447
Then inv(v).A.v should be the diagonal matrix of eigenvalues.
>norm(inv(v).A.v-diag(3,3,0,l))
0
Euler can also do singular value decompositions.
>{p,s,q}:=svd(A); s
[ 17.8852434419 1.79653910227 0.186732417741 ]
These are the square roots of the eigenvalues of A'.A, but Euler uses a special algorithm to compute them.
>sqrt(abs(eigenvalues(A'.A)))
[ 0.186732417741 1.79653910227 17.8852434419 ]
We get A as a product of two orthogonal matrices and a diagonal matrix.
>norm(p.diag(size(A),0,s).q'-A)
0
Let us demonstrate how to solve a linear system by hand. Assume we have the following system, written as an augmented 3x4 matrix.
>fracformat(10); A:=redim(1:12,3,4)
1 2 3 4
5 6 7 8
9 10 11 12
>pivotize(A,1,1)
1 2 3 4
0 -4 -8 -12
0 -8 -16 -24
>pivotize(A,3,2)
1 0 -1 -2
0 0 0 0
0 1 2 3
>swapRows(A,2,3)
1 0 -1 -2
0 1 2 3
0 0 0 0
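From the reduced form we read off x-z=-2 and y+2z=3 with z free. Setting z=0 gives one particular solution, which we can check against a fresh copy of the original matrix (the name A0 is our own).
>A0:=redim(1:12,3,4); fracprint(A0[:,1:3].[-2;3;0])
4
8
12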
There is the function echelon, which does all this automatically.
>fracformat(10); A:=redim(1:12,3,4); echelon(A)
1 0 -1 -2
0 1 2 3
We can also compute eigenvalues with Maxima. The function returns the eigenvalues and their multiplicities.
>longformat; A&:=[1,2;3,4]
1 2
3 4
>&eigenvalues(A)
  5 - sqrt(33)  sqrt(33) + 5
[[------------, ------------], [1, 1]]
       2             2
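The sum of the two eigenvalues must equal the trace of A, which gives a quick consistency check (our own addition; % refers to the previous Maxima result).
>&ratsimp(%[1][1]+%[1][2])
5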
Here are the eigenvectors.
>&eigenvectors(A)
   5 - sqrt(33)  sqrt(33) + 5
[[[------------, ------------], [1, 1]],
        2             2
       3 - sqrt(33)        sqrt(33) + 3
 [[[1, ------------]], [[1, ------------]]]]
            4                    4
Euler has a special data type and algorithms for large and sparse systems. For more information, refer to the following notebook (see: 19 - Introduction to Large Systems). The following does not use these algorithms, and the matrix is full. Yet, Euler can produce a good solution within seconds.
>A=normal(1000,1000); b=sum(A); x=A\b;
>totalmax(A.x-b)
4.32009983342e-012