Equation roots

EQUATION ROOTS: (OPEN METHODS, CLOSED METHODS, GRAPHIC METHODS)

f(x) = 0

OPEN METHODS:

SECANT METHOD
FIXED POINT METHOD
NEWTON RAPHSON METHOD

SECANT METHOD




[Embedded slides: Secant method]
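As a quick reference, here is a minimal Python sketch of the secant iteration, which replaces the derivative in Newton's formula with the slope through the two most recent iterates; the example function and starting points are assumptions chosen for illustration:

```python
def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Secant method: like Newton's method, but the derivative is replaced
    by the slope of the line through the two most recent iterates."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            raise ZeroDivisionError("flat secant; choose different starting points")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    raise RuntimeError("secant method did not converge")

# Example: root of f(x) = x**3 - x - 2 (approximately 1.5214)
print(secant(lambda x: x**3 - x - 2, 1.0, 2.0))
```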

FIXED POINT

When attempting to solve the equation f(x) = 0, it would be wonderful if we could rewrite the equation in a form which gives explicitly the solution, in a manner similar to the familiar solution method for a quadratic equation. While this does not occur for the vast majority of equations we must solve, we can always find a way to re-arrange the equation f(x) = 0 in the form:
x = g(x)

Finding a value of x for which x = g(x) is thus equivalent to finding a solution of the equation f(x) = 0.
The function g(x) can be said to define a map on the real line over which x varies, such that for each value of x, the function g(x) maps that point to a new point, X, on the real line. Usually this map results in the points x and X being some distance apart. If there is no motion under the map for some x = xp, we call xp a fixed point of the function g(x). Thus we have xp = g(xp), and it becomes clear that the fixed point of g(x) is also a zero of the corresponding equation f(x) = 0.

Suppose we are able to choose a point x0 which lies near a fixed point, xp, of g(x), where of course, we do not know the value of xp (after all, that is our quest here). We might speculate that under appropriate circumstances, we could use the iterative scheme:
xn+1 = g(xn)
where n = 0, 1, 2, 3, ..., and we continue the iteration until the difference between successive xn is as small as we require for the precision desired. To that level of precision, the final value of xn approximates a fixed point of g(x), and hence approximates a zero of f(x).
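A minimal Python sketch of this scheme follows; the rearrangement g, the starting point and the tolerance are assumptions chosen for illustration:

```python
def fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until successive values differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Example: f(x) = x**2 - 2*x - 3 = 0 rearranged as x = g(x) = sqrt(2*x + 3);
# starting from x0 = 4 the iteration converges to the root x = 3.
print(fixed_point(lambda x: (2 * x + 3) ** 0.5, x0=4.0))
```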

CLOSED METHODS:

BISECTION
FALSE POSITION

FALSE POSITION

The false position method (Latin: regula falsi) is an iterative method for finding a root of the nonlinear equation f(x) = 0. It employs the same formula as the secant method, but retains at each stage the two most recent estimates that bracket the root in order to guarantee convergence. Modifications to this general strategy are required to avoid one end-point remaining fixed, which would cause slow convergence. The resulting methods are both fast and reliable.
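A minimal Python sketch of the basic regula falsi strategy (without the modifications mentioned above); the example function and bracket are assumptions chosen for illustration:

```python
import math

def false_position(f, a, b, tol=1e-12, max_iter=200):
    """Regula falsi: use the secant formula on the bracket [a, b], but keep
    the two most recent estimates that still bracket the root."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # secant / false-position formula
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:        # root lies in [a, c]
            b, fb = c, fc
        else:                  # root lies in [c, b]
            a, fa = c, fc
    return c

# Example: the root of cos(x) - x in [0, 1] (approximately 0.739085)
print(false_position(lambda x: math.cos(x) - x, 0.0, 1.0))
```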


BISECTION METHOD
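A minimal Python sketch of the bisection method, which repeatedly halves a bracketing interval until it is shorter than the tolerance; the example function and interval are assumptions chosen for illustration:

```python
def bisection(f, a, b, tol=1e-10):
    """Bisection: halve the bracketing interval [a, b], where f(a) and f(b)
    have opposite signs, keeping the half that still contains the root."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) > tol:
        m = (a + b) / 2.0
        fm = f(m)
        if fa * fm <= 0:      # root is in the left half
            b, fb = m, fm
        else:                 # root is in the right half
            a, fa = m, fm
    return (a + b) / 2.0

# Example: root of x**2 - 2 in [1, 2] (approximately 1.41421356)
print(bisection(lambda x: x**2 - 2, 1.0, 2.0))
```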







SOLUTION METHODS:

DIRECT METHODS
ITERATIVE METHODS

ITERATIVE METHODS: (JACOBI, GAUSS-SEIDEL, GAUSS-SEIDEL WITH RELAXATION (SOR))

The term "iterative method" refers to a wide range of techniques that use successive approximations to obtain more accurate solutions to a linear system at each step. Two types of iterative methods can be distinguished. Stationary methods are older, simpler to understand and implement, but usually not as effective. Nonstationary methods are a relatively recent development; their analysis is usually harder to understand, but they can be highly effective. Many nonstationary methods are based on the idea of sequences of orthogonal vectors. (An exception is the Chebyshev iteration method, which is based on orthogonal polynomials.)

Stationary iterative method: an iterative method that performs the same operations on the current iteration vectors in each iteration.
Nonstationary iterative method: an iterative method that has iteration-dependent coefficients.
Dense matrix: a matrix for which the number of zero elements is too small to warrant specialized algorithms.
Sparse matrix: a matrix for which the number of zero elements is large enough that algorithms avoiding operations on zero elements pay off. Matrices derived from partial differential equations typically have a number of nonzero elements that is proportional to the matrix size, while the total number of matrix elements is the square of the matrix size.

The rate at which an iterative method converges depends greatly on the spectrum of the coefficient matrix. Hence, iterative methods usually involve a second matrix that transforms the coefficient matrix into one with a more favorable spectrum. The transformation matrix is called a preconditioner. A good preconditioner improves the convergence of the iterative method, sufficiently to overcome the extra cost of constructing and applying the preconditioner. Indeed, without a preconditioner the iterative method may even fail to converge.

GAUSS SEIDEL AND SOR

GAUSS SEIDEL

In numerical linear algebra, the Gauss–Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a linear system of equations. It is named after the German mathematicians Carl Friedrich Gauss and Philipp Ludwig von Seidel, and is similar to the Jacobi method. Though it can be applied to any matrix with non-zero elements on the diagonal, convergence is only guaranteed if the matrix is either diagonally dominant, or symmetric and positive definite.
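A minimal NumPy sketch of the iteration follows; a relaxation factor omega is included so that omega > 1 gives SOR (Gauss-Seidel with relaxation), and the test system is an assumption chosen to be diagonally dominant so that convergence is guaranteed:

```python
import numpy as np

def gauss_seidel(A, b, x0=None, omega=1.0, tol=1e-10, max_iter=500):
    """Sweep through the equations, updating each unknown immediately with
    the newest values of the others; omega > 1 gives SOR (over-relaxation)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x_gs = (b[i] - s) / A[i, i]            # plain Gauss-Seidel update
            x[i] = (1 - omega) * x[i] + omega * x_gs
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return x
    raise RuntimeError("iteration did not converge")

# Example: a diagonally dominant system
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
print(gauss_seidel(A, b))             # Gauss-Seidel
print(gauss_seidel(A, b, omega=1.1))  # SOR with a modest relaxation factor
```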

[Embedded slides: Gauss-Seidel method]


Gaussseidel
View more documents from uis.

JACOBI METHOD

[Embedded slides: Jacobi method]
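For reference, a minimal NumPy sketch of the Jacobi iteration, in which every unknown is updated from the previous iterate only; the example system is an assumption chosen for illustration:

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration: every unknown is updated from the previous iterate
    only, so all updates within one sweep are independent of each other."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.diag(A)               # diagonal entries as a vector
    R = A - np.diagflat(D)       # off-diagonal part of A
    x = np.zeros_like(b) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Jacobi iteration did not converge")

# Example: 4x + y = 9, 2x + 5y = 13  ->  x = 16/9, y = 17/9
print(jacobi([[4.0, 1.0], [2.0, 5.0]], [9.0, 13.0]))
```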


DIRECT METHODS: (GAUSSIAN ELIMINATION, GAUSSIAN ELIMINATION WITH PIVOTING, GAUSS-JORDAN, SPECIAL SYSTEMS)

SPECIAL SYSTEMS: (THOMAS, CHOLESKY)

THOMAS METHOD

[Embedded slides: Thomas method]
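As a reference, a minimal Python sketch of the Thomas algorithm, a specialised Gaussian elimination for tridiagonal systems; the three diagonals are passed as separate lists, an interface assumed for the example:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (a[0] and c[-1] are unused).
    Forward elimination followed by back substitution, O(n) work."""
    n = len(d)
    cp = [0.0] * n   # modified super-diagonal
    dp = [0.0] * n   # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: 2x1 - x2 = 1, -x1 + 2x2 - x3 = 0, -x2 + 2x3 = 1  ->  x = [1, 1, 1]
print(thomas([0, -1, -1], [2, 2, 2], [-1, -1, 0], [1, 0, 1]))
```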


CHOLESKY METHOD

In linear algebra, the Cholesky decomposition or Cholesky triangle is a decomposition of a symmetric, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose. It was discovered by André-Louis Cholesky for real matrices and is an example of a square root of a matrix. When it is applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.
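A minimal Python sketch of the real-valued Cholesky factorization; the example matrix is an assumption chosen so that it factors exactly:

```python
import math

def cholesky(A):
    """Return the lower-triangular L with A = L * L^T for a symmetric,
    positive-definite matrix A (real case)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)        # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]       # below the diagonal
    return L

# Example: a small symmetric positive-definite matrix
A = [[4.0, 2.0, 2.0],
     [2.0, 5.0, 3.0],
     [2.0, 3.0, 6.0]]
for row in cholesky(A):
    print(row)   # L = [[2,0,0],[1,2,0],[1,1,2]]
```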


Gauss-Jordan Elimination

[Embedded slides: Gauss-Jordan elimination]

Gauss Jordan Elimination Through Pivoting




DARCY'S LAW

[Embedded slides: Darcy's law]

Gaussian elimination

In linear algebra, Gaussian elimination is an algorithm for solving systems of linear equations, finding the rank of a matrix, and calculating the inverse of an invertible square matrix. Gaussian elimination is named after German mathematician and scientist Carl Friedrich Gauss.

Elementary row operations are used to reduce a matrix to row echelon form. Gauss–Jordan elimination, an extension of this algorithm, reduces the matrix further to reduced row echelon form. Gaussian elimination alone is sufficient for many applications.
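A minimal NumPy sketch of Gaussian elimination with partial pivoting, followed by back substitution; the example system is an assumption chosen for illustration:

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = len(b)
    # Forward elimination with row pivoting
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))       # pivot row
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = [[2.0, 1.0, -1.0],
     [-3.0, -1.0, 2.0],
     [-2.0, 1.0, 2.0]]
b = [8.0, -11.0, -3.0]
print(gauss_solve(A, b))   # expected: [2., 3., -1.]
```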

NUMERICAL APPROACH

MATHEMATICAL MODELS

In the first place, we can define a mathematical model as a formulation or equation that expresses the main features of a physical system or process in mathematical terms.
Mathematical models are used particularly in the natural sciences and engineering disciplines (physics, biology, meteorology and electrical engineering) but also in the social sciences (economics, sociology and political science); physicists, engineers, computer scientists and economists use mathematical models most extensively.
Mathematical models can take many forms, including but not limited to dynamical systems, statistical models, differential equations, or game-theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures.
In general, a model can be represented by the following functional relationship:
dependent variable = f (independent variables, parameters, forcing functions)
The dependent variable is a characteristic that usually reflects the behaviour or state of the system; the independent variables are dimensions, such as time and space, along which the system's behaviour is determined; the parameters reflect the system's properties or composition; and the forcing functions are external influences acting upon the system.
We can obtain analytical or numerical solutions of a problem: the former are exact, while the latter are approximate.

ROOTS OF EQUATIONS

Let's take a look at a very powerful tool in Mathcad that started a revolution in computational analysis. It started in the late 1980s, when I was an undergraduate. A company called Wolfram created a computer program called Mathematica. This was the first computer code that could solve algebraic and calculus equations symbolically. That is, if I had an equation that said x*y = z, Mathematica could tell me that y = z / x, without ever needing me to assign numbers to x, y, or z. It was also able to solve integrals, differential equations, and derivatives symbolically. This was an incredible advance, and it opened the doors to a whole new world of programming, numerical methods, pure mathematics, engineering, and science. Since then, a competing code called Maple was developed and licensed to other software companies for inclusion in their programs.
The end result: Mathcad uses Maple as a solving engine in the background (you don't see it) to solve problems symbolically. Here we will look at a brief example of how to use this capability in the context of solving a system of linear equations.
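The Mathcad worksheet itself is not shown here, so the following is an analogous illustration using the open-source SymPy library in Python, chosen purely as a stand-in for Mathcad's symbolic engine:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Solve x*y = z symbolically for y, as in the Mathematica example above
print(sp.solve(sp.Eq(x * y, z), y))        # [z/x]

# Symbolic solution of a small system of linear equations
a, b = sp.symbols('a b')
solution = sp.solve([sp.Eq(2*x + 3*y, a), sp.Eq(x - y, b)], [x, y])
print(solution)                            # {x: (a + 3*b)/5, y: (a - 2*b)/5}
```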

In numerical analysis, Newton's method (also known as the Newton–Raphson method), named after Isaac Newton and Joseph Raphson, is perhaps the best known method for finding successively better approximations to the roots of a real-valued function. Newton's method can often converge remarkably quickly, especially if the iteration begins "sufficiently near" the desired root. Just how near "sufficiently near" needs to be, and just how quickly "remarkably quickly" can be, depends on the problem. Unfortunately, when iteration begins far from the desired root, Newton's method can easily lead an unwary user astray with little warning. Thus, good implementations of the method embed it in a routine that also detects and perhaps overcomes possible convergence failures.
Given a function f(x) and its derivative f'(x), we begin with a first guess x0. Provided the function is reasonably well behaved, a better approximation x1 is

x1 = x0 - f(x0)/f'(x0)

The process is repeated until a sufficiently accurate value is reached:

xn+1 = xn - f(xn)/f'(xn)

An important and somewhat surprising application is Newton–Raphson division, which can be used to quickly find the reciprocal of a number using only multiplication and subtraction.
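For example, applying the iteration to f(x) = 1/x - D gives xn+1 = xn(2 - D·xn), which converges to 1/D using only multiplication and subtraction. A minimal sketch follows; the starting guess must satisfy 0 < x0 < 2/D, and the value used here is an assumption for the example:

```python
def reciprocal(D, x0, iterations=6):
    """Newton-Raphson division: iterate x <- x * (2 - D * x) to approach 1/D,
    using only multiplication and subtraction."""
    x = x0
    for _ in range(iterations):
        x = x * (2 - D * x)
    return x

print(reciprocal(7.0, 0.1))   # approximately 0.142857... = 1/7
```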


The idea of the method is as follows: one starts with an initial guess which is reasonably close to the true root, then the function is approximated by its tangent line (which can be computed using the tools of calculus), and one computes the x-intercept of this tangent line (which is easily done with elementary algebra). This x-intercept will typically be a better approximation to the function's root than the original guess, and the method can be iterated.
Suppose f : [a, b] → R is a differentiable function defined on the interval [a, b] with values in the real numbers R. The formula for converging on the root can be easily derived. Suppose we have some current approximation xn. Then we can derive the formula for a better approximation, xn+1, by considering the tangent line to the graph of f at xn. We know from the definition of the derivative at a given point that it is the slope of the tangent at that point.

That is, f'(xn) = rise/run = Δy/Δx = (f(xn) - 0) / (xn - xn+1)

Here, f ' denotes the derivative of the function f. Then by simple algebra we can derive

xn+1 = xn - f(xn)/f'(xn)

We start the process off with some arbitrary initial value x0 (the closer to the zero, the better; in the absence of any intuition about where the zero might lie, a "guess and check" approach can narrow the possibilities to a reasonably small interval by appealing to the intermediate value theorem). The method will usually converge, provided this initial guess is close enough to the unknown zero and f'(x0) ≠ 0. Furthermore, for a zero of multiplicity 1, the convergence is at least quadratic (see rate of convergence) in a neighbourhood of the zero, which intuitively means that the number of correct digits roughly doubles in every step.
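A minimal Python sketch of the iteration; the example function, its derivative and the starting point are assumptions chosen for illustration:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: iterate x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        dfx = df(x)
        if dfx == 0:
            raise ZeroDivisionError("zero derivative; pick another starting point")
        x_new = x - f(x) / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Newton's method did not converge")

# Example: the square root of 2 as the positive root of f(x) = x**2 - 2
print(newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.5))
```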

DEFINITION

Numerical analysis is the branch of applied mathematics that studies methods for solving complicated equations using arithmetic operations, often so complex that they require a computer, in order to approximate the processes of analysis. The arithmetic model for such an approximation is called an algorithm, the set of procedures the computer executes is called a program, and the commands that carry out the procedures are called code. An example is an algorithm for deriving π by calculating the perimeter of a regular polygon as its number of sides becomes very large. Numerical analysis is concerned not just with the numerical result of such a process but with determining whether the error at any stage is within acceptable bounds.

HISTORY

The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, Lagrange interpolation polynomial, Gaussian elimination or Euler's method.

To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very large number of commonly used formulas and functions; the tabulated values are less important now that a computer is available, but the large listing of formulas can still be very handy.
The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.