SuanShu, a Java numerical and statistical library

com.numericalmethod.suanshu.matrix.doubles.matrixtype.sparse.solver.iterative
Interface IterativeSolver

All Known Implementing Classes:
BiconjugateGradientSolver, BiconjugateGradientStabilizedSolver, ConjugateGradientNormalErrorSolver, ConjugateGradientNormalResidualSolver, ConjugateGradientSolver, ConjugateGradientSquaredSolver, GaussSeidelSolver, GeneralizedConjugateResidualSolver, GeneralizedMinimalResidualSolver, JacobiSolver, MinimalResidualSolver, QuasiMinimalResidualSolver, SteepestDescentSolver, SuccessiveOverrelaxationSolver, SymmetricSuccessiveOverrelaxationSolver

public interface IterativeSolver

Iterative methods for solving an N-by-N (or non-square) linear system Ax = b involve a sequence of matrix-vector multiplications. Starting from an initial guess at the solution, each iteration performs a matrix-vector multiplication, which takes O(N²) operations for a dense matrix, and returns a new estimate of the solution. The estimates should get closer and closer to the true solution, so that after k iterations they converge to a satisfactory solution (within a tolerance).

For a dense matrix A, an iterative method may therefore take O(kN²) operations to converge. For a large sparse system, however, a matrix-vector multiplication takes only O(nnz) operations, where nnz is the number of non-zeros in the sparse matrix. An iterative approach can thus be much faster than the traditional direct methods of solving linear systems.
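To make the O(nnz) cost concrete, here is a minimal sketch, in plain Java rather than SuanShu's own sparse matrix classes, of a matrix-vector product over a CSR-stored (compressed sparse row) matrix; the inner loop touches each stored non-zero exactly once:

```java
public class SparseMatVec {
    // CSR storage for a 3x3 matrix with nnz = 4 non-zeros:
    // [[2, 0, 0],
    //  [0, 3, 1],
    //  [0, 0, 4]]
    static final double[] val = {2, 3, 1, 4};  // non-zero values, row by row
    static final int[] colIdx = {0, 1, 2, 2};  // column index of each value
    static final int[] rowPtr = {0, 1, 3, 4};  // where each row starts in val

    static double[] multiply(double[] x) {
        double[] y = new double[rowPtr.length - 1];
        for (int i = 0; i < y.length; i++)
            for (int k = rowPtr[i]; k < rowPtr[i + 1]; k++)  // nnz(row i) terms only
                y[i] += val[k] * x[colIdx[k]];
        return y;
    }

    public static void main(String[] args) {
        double[] y = multiply(new double[]{1, 1, 1});
        System.out.printf("y = [%.0f, %.0f, %.0f]%n", y[0], y[1], y[2]);
    }
}
```

An iterative solver repeats such a product once (or a few times) per iteration, so each iteration on a sparse matrix costs O(nnz) rather than O(N²).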

Here are some guidelines for the choice of method. For Hermitian problems, use CG or MINRES when the matrix is positive definite, and MINRES when it is indefinite. Stationary methods such as Jacobi, Gauss-Seidel, SOR, or SSOR avoid the inner products required by the CG and MINRES algorithms. This saves some cost per iteration, but the price in terms of the number of iterations usually outweighs the savings, unless a good preconditioner is applied.

For non-Hermitian problems, the choice is not so easy. If matrix-vector multiplication is extremely expensive, GMRES is probably the best choice because it requires the fewest multiplications. Otherwise, BiCG or QMR can be used as well; for stability, we usually recommend QMR over BiCG. If the transpose of the matrix is not available, for example when the matrix-vector product can only be approximated by some function, transpose-free methods such as CGS or BiCGSTAB can be used.

In addition, CG-type methods for solving over-determined systems (CGNR) and under-determined systems (CGNE) are also available. Please check the classes that implement this interface for the other available iterative solvers.
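As an illustration of the CG recommendation for Hermitian positive definite systems, the conjugate gradient recurrence can be sketched in a few lines of plain Java; the names and signatures below are illustrative only, not SuanShu's API:

```java
public class CgSketch {
    static double[] matVec(double[][] A, double[] v) {
        double[] r = new double[v.length];
        for (int i = 0; i < v.length; i++)
            for (int j = 0; j < v.length; j++)
                r[i] += A[i][j] * v[j];
        return r;
    }

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    // Plain CG: one matrix-vector product and two inner products per iteration.
    static double[] solve(double[][] A, double[] b, int maxIter, double tol) {
        int n = b.length;
        double[] x = new double[n];    // initial guess x0 = 0
        double[] r = b.clone();        // residual r0 = b - A*x0
        double[] p = r.clone();        // first search direction
        double rsOld = dot(r, r);
        for (int k = 0; k < maxIter && Math.sqrt(rsOld) > tol; k++) {
            double[] Ap = matVec(A, p);
            double alpha = rsOld / dot(p, Ap);                   // step length
            for (int i = 0; i < n; i++) x[i] += alpha * p[i];    // update iterate
            for (int i = 0; i < n; i++) r[i] -= alpha * Ap[i];   // update residual
            double rsNew = dot(r, r);
            double beta = rsNew / rsOld;
            for (int i = 0; i < n; i++) p[i] = r[i] + beta * p[i]; // new direction
            rsOld = rsNew;
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] A = {{4, 1}, {1, 3}};  // symmetric positive definite
        double[] b = {1, 2};
        double[] x = solve(A, b, 100, 1e-10);
        System.out.printf("x = [%.4f, %.4f]%n", x[0], x[1]);
    }
}
```

Each iteration costs one matrix-vector product plus a couple of inner products; in exact arithmetic, CG terminates in at most N iterations.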

The use of a Preconditioner can improve the rate of convergence of an iterative method. A preconditioner transforms a linear system into one that is equivalent in the sense that it has the same solution, but whose spectral properties, which govern the convergence rate, are more favorable. Usually, a preconditioner M is chosen to approximate the coefficient matrix A while being easier to solve with. For example,

M⁻¹Ax = M⁻¹b
has the same solution as the original system Ax = b, but the spectral properties of its coefficient matrix M⁻¹A may be more favorable. Another way of preconditioning a system is
M₁⁻¹AM₂⁻¹(M₂x) = M₁⁻¹b
The matrices M₁ and M₂ are called the left and right preconditioners, respectively. There are three kinds of preconditioning: left, right, or split. To use left preconditioning, leave M₂ as the IdentityPreconditioner; similarly, leave M₁ as the IdentityPreconditioner when using right preconditioning.
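As a minimal illustration of left preconditioning, in plain Java rather than the library's Preconditioner API, the sketch below iterates x ← x + M⁻¹(b - Ax) with the simple choice M = diag(A); this solves M⁻¹Ax = M⁻¹b, which has the same solution as Ax = b, and with this particular M the iteration coincides with the Jacobi method:

```java
public class PreconditionSketch {
    // Richardson iteration preconditioned by M = diag(A):
    // x <- x + M^{-1} (b - A x), i.e., the Jacobi method.
    static double[] solve(double[][] A, double[] b, int iters) {
        int n = b.length;
        double[] x = new double[n];  // initial guess x0 = 0
        for (int k = 0; k < iters; k++) {
            double[] r = new double[n];
            for (int i = 0; i < n; i++) {
                double ax = 0;
                for (int j = 0; j < n; j++) ax += A[i][j] * x[j];
                r[i] = b[i] - ax;                       // residual b - Ax
            }
            for (int i = 0; i < n; i++)
                x[i] += r[i] / A[i][i];                 // apply M^{-1} = diag(A)^{-1}
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] A = {{4, 1}, {1, 3}};  // diagonally dominant, so this converges
        double[] b = {1, 2};
        double[] x = solve(A, b, 100);
        System.out.printf("x = [%.4f, %.4f]%n", x[0], x[1]);
    }
}
```

A better M (e.g., an incomplete factorization of A) would cluster the spectrum of M⁻¹A more tightly and cut the iteration count further; the diagonal choice here is only the cheapest example.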


Nested Class Summary
static class IterativeSolver.ConvergenceFailure
          This exception is thrown by solve(com.numericalmethod.suanshu.matrix.doubles.matrixtype.sparse.solver.iterative.IterativeSolver.Problem) when the iterative algorithm detects a breakdown or fails to converge.
static class IterativeSolver.Problem
          This class models the problem of solving a system of linear equations (Ax = b) using an iterative method.
 
Method Summary
 Vector solve(IterativeSolver.Problem problem)
          Solve Ax = b iteratively until the solution is close enough, i.e., the norm of the residual (b - Ax) is less than or equal to the specified tolerance.
 Vector solve(IterativeSolver.Problem problem, IterationMonitor monitor)
          Solve Ax = b iteratively until the solution is close enough, i.e., the norm of the residual (b - Ax) is less than or equal to the specified tolerance.
 

Method Detail

solve

Vector solve(IterativeSolver.Problem problem)
             throws IterativeSolver.ConvergenceFailure
Solve iteratively
Ax = b
until the solution is close enough, i.e., the norm of the residual (b - Ax) is less than or equal to the specified tolerance.

Parameters:
problem - the problem of solving Ax = b
Returns:
the computed solution for the problem
Throws:
IterativeSolver.ConvergenceFailure - if the algorithm fails to converge

solve

Vector solve(IterativeSolver.Problem problem,
             IterationMonitor monitor)
             throws IterativeSolver.ConvergenceFailure
Solve iteratively
Ax = b
until the solution is close enough, i.e., the norm of the residual (b - Ax) is less than or equal to the specified tolerance.

In each iteration, the newly computed iterate is added to the IterationMonitor for statistics or diagnostic purposes.

Parameters:
problem - the problem of solving Ax = b
monitor - an IterationMonitor instance
Returns:
the computed solution for the problem
Throws:
IterativeSolver.ConvergenceFailure - if the algorithm fails to converge
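The monitor hook can be pictured with a hand-rolled sketch; this is illustrative only, and the real IterationMonitor interface may differ. Here each newly computed iterate is passed to a callback, which records the residual norm per iteration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class MonitorSketch {
    // Jacobi iteration that hands every new iterate to a monitor callback.
    static double[] jacobi(double[][] A, double[] b, int iters, Consumer<double[]> monitor) {
        int n = b.length;
        double[] x = new double[n];
        for (int k = 0; k < iters; k++) {
            double[] xNew = new double[n];
            for (int i = 0; i < n; i++) {
                double sigma = 0;
                for (int j = 0; j < n; j++)
                    if (j != i) sigma += A[i][j] * x[j];
                xNew[i] = (b[i] - sigma) / A[i][i];
            }
            x = xNew;
            monitor.accept(x);  // report each iterate for diagnostics
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] A = {{4, 1}, {1, 3}};
        double[] b = {1, 2};
        List<Double> residuals = new ArrayList<>();
        jacobi(A, b, 20, x -> {
            double r0 = b[0] - (A[0][0] * x[0] + A[0][1] * x[1]);
            double r1 = b[1] - (A[1][0] * x[0] + A[1][1] * x[1]);
            residuals.add(Math.sqrt(r0 * r0 + r1 * r1));  // ||b - Ax|| per iteration
        });
        System.out.println("norms recorded: " + residuals.size());
        System.out.println("residual shrank: " + (residuals.get(19) < residuals.get(0)));
    }
}
```

Collecting the residual history this way is useful for plotting convergence curves or for deciding between competing solvers on a given problem.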


Copyright © 2011 Numerical Method Inc. Ltd. All Rights Reserved.