Equation roots

EQUATION ROOTS: (OPEN METHODS, CLOSED METHODS, GRAPHIC METHODS)

f(x) = 0

OPEN METHODS:

SECANT METHOD
FIXED POINT METHOD
NEWTON-RAPHSON METHOD

SECANT METHOD

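The secant method replaces the derivative in Newton's method with a finite difference through the two most recent iterates; the update formula appears on the commented line below. A minimal sketch in Python (the function name, tolerance, and example are illustrative assumptions, not from this post):

    def secant(f, x0, x1, tol=1e-10, max_iter=100):
        for _ in range(max_iter):
            f0, f1 = f(x0), f(x1)
            if f1 == f0:
                raise ZeroDivisionError("flat secant; pick other starting points")
            x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # secant update
            if abs(x2 - x1) < tol:
                return x2
            x0, x1 = x1, x2
        raise RuntimeError("no convergence within max_iter")

    # Example: root of x**2 - 2 starting from 1 and 2 (about 1.41421356)
    print(secant(lambda x: x**2 - 2, 1.0, 2.0))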

FIXED POINT

When attempting to solve the equation f(x) = 0, it would be wonderful if we could rewrite the equation in a form that gives the solution explicitly, in a manner similar to the familiar solution method for a quadratic equation. While this does not occur for the vast majority of equations we must solve, we can always find a way to rearrange f(x) = 0 into the form:
x = g(x)

Finding a value of x for which x = g(x) is thus equivalent to finding a solution of the equation f(x) = 0.
The function g(x) can be said to define a map on the real line over which x varies: for each value of x, the function g(x) maps that point to a new point, x', on the real line. Usually this map results in the points x and x' being some distance apart. If there is no motion under the map for some x = xp, we call xp a fixed point of the function g(x). Thus we have xp = g(xp), and it becomes clear that a fixed point of g(x) is also a zero of the corresponding equation f(x) = 0.

Suppose we are able to choose a point x0 which lies near a fixed point, xp, of g(x), where, of course, we do not know the value of xp (after all, that is our quest here). We might speculate that under appropriate circumstances, we could use the iterative scheme:
xn+1 = g(xn)
where n = 0, 1, 2, 3, ..., and we continue the iteration until the difference between successive xn is as small as the desired precision requires. To that level of precision, the final value of xn approximates a fixed point of g(x), and hence approximates a zero of f(x).
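
A minimal sketch of this scheme in Python (the stopping rule, names, and the example rearrangement are illustrative assumptions):

    def fixed_point(g, x0, tol=1e-10, max_iter=200):
        # Iterate x_{n+1} = g(x_n) until successive values agree to tol.
        x = x0
        for _ in range(max_iter):
            x_new = g(x)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        raise RuntimeError("no convergence; try another rearrangement g")

    # Example: f(x) = x**2 - 2 rearranged as x = g(x) with g(x) = (x + 2/x)/2;
    # the fixed point is sqrt(2).
    print(fixed_point(lambda x: (x + 2/x) / 2, 1.0))

Whether the iteration converges depends on the rearrangement chosen: different functions g with the same fixed point can converge quickly, slowly, or not at all.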

CLOSED METHODS:

BISECTION
FALSE POSITION

FALSE POSITION

false position method (Latin: regula falsi): an iterative method for finding a root of the nonlinear equation f(x) = 0. It employs the same formula as the secant method, but at each stage it retains the two most recent estimates that bracket the root, in order to guarantee convergence. Modifications to this general strategy are required to avoid one endpoint remaining fixed, which causes slow convergence. The resulting methods are both fast and reliable.
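
A minimal sketch of the basic bracketing strategy in Python, without the anti-stagnation modification mentioned above (names, tolerance, and the example are illustrative assumptions):

    def false_position(f, a, b, tol=1e-10, max_iter=200):
        # Secant-style estimate, but [a, b] always brackets the root.
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        c = a
        for _ in range(max_iter):
            c = b - fb * (b - a) / (fb - fa)   # same formula as the secant method
            fc = f(c)
            if abs(fc) < tol:
                return c
            if fa * fc < 0:
                b, fb = c, fc   # root is in [a, c]
            else:
                a, fa = c, fc   # root is in [c, b]
        return c

    # Example: root of x**3 - x - 2 in [1, 2] (about 1.5213797)
    print(false_position(lambda x: x**3 - x - 2, 1.0, 2.0))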


BISECTION METHOD

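Bisection simply halves an interval [a, b] with f(a) and f(b) of opposite sign until the bracket is as short as the desired precision. A minimal sketch in Python (names and the example are illustrative assumptions):

    def bisection(f, a, b, tol=1e-10, max_iter=200):
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        for _ in range(max_iter):
            m = (a + b) / 2          # midpoint of the current bracket
            fm = f(m)
            if fm == 0 or (b - a) / 2 < tol:
                return m
            if fa * fm < 0:
                b, fb = m, fm        # root is in the left half
            else:
                a, fa = m, fm        # root is in the right half
        return (a + b) / 2

    # Example: root of x**3 - x - 2 in [1, 2], for comparison with false position
    print(bisection(lambda x: x**3 - x - 2, 1.0, 2.0))
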
SOLUTION METHODS:

DIRECT METHODS
ITERATIVE METHODS

ITERATIVE METHODS: (JACOBI, GAUSS-SEIDEL, GAUSS-SEIDEL WITH RELAXATION (SOR))

The term "iterative method" refers to a wide range of techniques that use successive approximations to obtain more accurate solutions to a linear system at each step. Two types can be distinguished. Stationary methods are older, and simpler to understand and implement, but usually not as effective. Nonstationary methods are a relatively recent development; their analysis is usually harder to understand, but they can be highly effective. Most nonstationary methods are based on the idea of sequences of orthogonal vectors (an exception is the Chebyshev iteration method, which is based on orthogonal polynomials).

Stationary iterative method: an iterative method that performs the same operations on the current iteration vectors in each iteration.
Nonstationary iterative method: an iterative method with iteration-dependent coefficients.
Dense matrix: a matrix for which the number of zero elements is too small to warrant specialized algorithms.
Sparse matrix: a matrix with enough zero elements that algorithms avoiding operations on zero elements pay off. Matrices derived from partial differential equations typically have a number of nonzero elements proportional to the matrix size, while the total number of matrix elements is the square of the matrix size.

The rate at which an iterative method converges depends greatly on the spectrum of the coefficient matrix. Hence, iterative methods usually involve a second matrix that transforms the coefficient matrix into one with a more favorable spectrum. This transformation matrix is called a preconditioner. A good preconditioner improves the convergence of the iterative method sufficiently to overcome the extra cost of constructing and applying it. Indeed, without a preconditioner the iterative method may even fail to converge.
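
As a small made-up illustration (not from the text above): the simplest choice is the Jacobi preconditioner M = diag(A), and comparing condition numbers shows its effect on a badly scaled matrix.

    import numpy as np

    # Made-up system whose diagonal entries differ wildly in scale.
    A = np.array([[100.0,  1.0, 0.0],
                  [  1.0, 10.0, 1.0],
                  [  0.0,  1.0, 1.0]])

    M_inv = np.diag(1.0 / np.diag(A))   # Jacobi preconditioner: M = diag(A)

    print(np.linalg.cond(A))            # large: the matrix is badly scaled
    print(np.linalg.cond(M_inv @ A))    # much smaller after preconditioning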

GAUSS-SEIDEL AND SOR

GAUSS-SEIDEL

In numerical linear algebra, the Gauss–Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a linear system of equations. It is named after the German mathematicians Carl Friedrich Gauss and Philipp Ludwig von Seidel, and is similar to the Jacobi method. Though it can be applied to any matrix with nonzero entries on the diagonal, convergence is only guaranteed if the matrix is either diagonally dominant, or symmetric and positive definite.

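A minimal sketch in Python with NumPy (the function name, tolerance, and example system are illustrative assumptions):

    import numpy as np

    def gauss_seidel(A, b, tol=1e-10, max_iter=500):
        # Sweep through the equations, using each updated component
        # immediately (hence "successive displacement").
        n = len(b)
        x = np.zeros(n)
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
                x[i] = (b[i] - s) / A[i, i]
            if np.linalg.norm(x - x_old, np.inf) < tol:
                return x
        raise RuntimeError("no convergence; see the conditions above")

    # Example with a diagonally dominant (hence convergent) system:
    A = np.array([[4.0, -1.0, 0.0],
                  [-1.0, 4.0, -1.0],
                  [0.0, -1.0, 4.0]])
    b = np.array([15.0, 10.0, 10.0])
    print(gauss_seidel(A, b))   # matches np.linalg.solve(A, b)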

JACOBI METHOD

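A minimal sketch in Python with NumPy (names and the example are illustrative assumptions). Unlike Gauss-Seidel above, every component of the new iterate is computed from the previous iterate only:

    import numpy as np

    def jacobi(A, b, tol=1e-10, max_iter=500):
        D = np.diag(A)                 # diagonal entries
        R = A - np.diagflat(D)         # off-diagonal part
        x = np.zeros_like(b, dtype=float)
        for _ in range(max_iter):
            x_new = (b - R @ x) / D    # all components updated at once
            if np.linalg.norm(x_new - x, np.inf) < tol:
                return x_new
            x = x_new
        raise RuntimeError("no convergence")

    A = np.array([[4.0, -1.0, 0.0],
                  [-1.0, 4.0, -1.0],
                  [0.0, -1.0, 4.0]])
    b = np.array([15.0, 10.0, 10.0])
    print(jacobi(A, b))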


DIRECT METHODS: (GAUSSIAN ELIMINATION, GAUSS WITH PIVOTING, GAUSS-JORDAN, SPECIAL SYSTEMS)

SPECIAL SYSTEMS: (THOMAS, CHOLESKY)

THOMAS METHOD

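The Thomas algorithm is Gaussian elimination specialized to tridiagonal systems, running in O(n) instead of O(n^3). A minimal sketch in Python with NumPy (names and the example are illustrative assumptions):

    import numpy as np

    def thomas(a, b, c, d):
        # a: sub-diagonal (n-1), b: diagonal (n), c: super-diagonal (n-1),
        # d: right-hand side (n). Forward sweep, then back substitution.
        n = len(b)
        cp = np.empty(n - 1)
        dp = np.empty(n)
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):
            m = b[i] - a[i - 1] * cp[i - 1]
            if i < n - 1:
                cp[i] = c[i] / m
            dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Example: diagonals (-1, 2, -1) with right-hand side (1, 0, 1)
    # has solution (1, 1, 1).
    print(thomas(np.array([-1.0, -1.0]), np.array([2.0, 2.0, 2.0]),
                 np.array([-1.0, -1.0]), np.array([1.0, 0.0, 1.0])))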


CHOLESKY METHOD

In linear algebra, the Cholesky decomposition or Cholesky triangle is a decomposition of a symmetric, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose. It was discovered by André-Louis Cholesky for real matrices and is an example of a square root of a matrix. When it is applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.
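
For real matrices the conjugate transpose is just the transpose, so A = L L^T. A minimal sketch of the factorization loop in Python with NumPy (in practice one would simply call numpy.linalg.cholesky; names and the example are illustrative assumptions):

    import numpy as np

    def cholesky(A):
        # Lower-triangular L with A = L @ L.T, for symmetric positive-definite A.
        n = A.shape[0]
        L = np.zeros_like(A, dtype=float)
        for i in range(n):
            for j in range(i + 1):
                s = L[i, :j] @ L[j, :j]
                if i == j:
                    # sqrt argument goes negative if A is not positive definite
                    L[i, j] = np.sqrt(A[i, i] - s)
                else:
                    L[i, j] = (A[i, j] - s) / L[j, j]
        return L

    A = np.array([[4.0, 2.0],
                  [2.0, 3.0]])
    L = cholesky(A)
    print(L)          # [[2, 0], [1, sqrt(2)]]
    print(L @ L.T)    # reproduces A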


Gauss-Jordan Elimination


Gauss-Jordan Elimination Through Pivoting
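
A minimal sketch in Python with NumPy (names and the example are illustrative assumptions): the augmented matrix [A | b] is reduced all the way to reduced row echelon form, swapping in the largest available pivot at each column.

    import numpy as np

    def gauss_jordan(A, b):
        M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
        n = len(b)
        for col in range(n):
            pivot = col + np.argmax(np.abs(M[col:, col]))   # partial pivoting
            M[[col, pivot]] = M[[pivot, col]]               # swap rows
            M[col] /= M[col, col]                           # scale pivot row to 1
            for row in range(n):
                if row != col:
                    M[row] -= M[row, col] * M[col]          # clear the column
        return M[:, -1]    # [A | b] has become [I | x]

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([3.0, 5.0])
    print(gauss_jordan(A, b))   # [0.8 1.4]
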
DARCY'S LAW


Gaussian elimination

In linear algebra, Gaussian elimination is an algorithm for solving systems of linear equations, finding the rank of a matrix, and calculating the inverse of an invertible square matrix. Gaussian elimination is named after German mathematician and scientist Carl Friedrich Gauss.

Elementary row operations are used to reduce a matrix to row echelon form. Gauss–Jordan elimination, an extension of this algorithm, reduces the matrix further to reduced row echelon form. Gaussian elimination alone is sufficient for many applications.
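
A minimal sketch of both stages, forward elimination with partial pivoting followed by back substitution, in Python with NumPy (names and the example are illustrative assumptions):

    import numpy as np

    def gaussian_elimination(A, b):
        A = A.astype(float).copy()
        b = b.astype(float).copy()
        n = len(b)
        # Forward elimination with partial pivoting -> upper triangular A.
        for k in range(n - 1):
            p = k + np.argmax(np.abs(A[k:, k]))   # best pivot row
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]             # multiplier for row i
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        # Back substitution on the triangular system.
        x = np.empty(n)
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        return x

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    b = np.array([5.0, 6.0])
    print(gaussian_elimination(A, b))   # [-4.   4.5]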