NUMERICAL APPROACH

MATHEMATICAL MODELS

To begin, we can define a mathematical model as a formulation or equation that expresses the essential features of a physical system or process in mathematical terms.
Mathematical models are used particularly in the natural sciences and engineering disciplines (physics, biology, meteorology, and electrical engineering), but also in the social sciences (economics, sociology, and political science); physicists, engineers, computer scientists, and economists use mathematical models most extensively.
Mathematical models can take many forms, including but not limited to dynamical systems, statistical models, differential equations, and game-theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures.
In general, a model can be represented by the following functional relationship:
dependent variable = f(independent variables, parameters, forcing functions)
The dependent variable is a characteristic that usually reflects the behaviour or state of the system; the independent variables are dimensions, such as time and space, along which the system's behaviour is determined; the parameters reflect the system's properties or composition; and the forcing functions are external influences acting upon the system.
We can obtain analytical or numerical solutions to a problem. The former are exact, while the latter are approximate.
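For instance, a minimal sketch in Python (the specific model, parameter values, and function names below are assumptions chosen only for illustration): take a falling object whose velocity v is the dependent variable, time t the independent variable, mass m and drag coefficient c the parameters, and gravity g the forcing function, governed by dv/dt = g - (c/m)*v. The analytical solution can then be compared with a crude numerical (Euler) approximation:

import math

def v_analytical(t, m=68.1, c=12.5, g=9.81):
    """Exact solution of dv/dt = g - (c/m)*v with v(0) = 0."""
    return g * m / c * (1.0 - math.exp(-c * t / m))

def v_numerical(t_end, dt=2.0, m=68.1, c=12.5, g=9.81):
    """Approximate the same model with Euler steps of size dt."""
    t, v = 0.0, 0.0
    while t < t_end:
        v = v + (g - c / m * v) * dt   # new value = old value + slope * step
        t += dt
    return v

print(v_analytical(10.0))   # analytical (exact) result
print(v_numerical(10.0))    # numerical (approximate) result

The analytical value is exact for this model, while the numerical value approaches it as the step size dt is made smaller.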

ROOTS OF EQUATIONS

Let’s take a look at a very powerful tool in Mathcad that started a revolution in computational analysis. It
started in the late 1980s when I was an undergraduate. A company called Wolfram created a computer
program called Mathematica. This was among the first computer codes that could solve algebraic and calculus
equations symbolically. That is, if I had an equation that said x*y = z, Mathematica could tell me that
y = z / x, without ever needing me to assign numbers to x, y, or z. It also was able to solve integrals, differential
equations, and derivatives symbolically. This was an incredible advance, and opened the doors
to a whole new world of programming, numerical methods, pure mathematics, engineering, and science.
Since then, a competing code called Maple was developed and licensed to other software companies for
inclusion in their programs.
The end result: Mathcad uses Maple as a solving engine in the background (you don’t see it) to solve
problems symbolically. Here we will look at a brief example of how to use this capability in the context
of solving a system of linear equations.
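The Mathcad worksheet itself is not reproduced here, but the same symbolic idea can be sketched in Python with the open-source SymPy library (the particular equations below are arbitrary examples chosen for illustration):

import sympy as sp

x, y, z = sp.symbols('x y z')

# The example from the text: given x*y = z, solve symbolically for y.
print(sp.solve(sp.Eq(x * y, z), y))        # [z/x]

# A small linear system, solved symbolically (no numbers assigned yet).
a, b = sp.symbols('a b')
eq1 = sp.Eq(2 * x + 3 * y, a)
eq2 = sp.Eq(x - y, b)
print(sp.solve([eq1, eq2], [x, y]))        # {x: (a + 3*b)/5, y: (a - 2*b)/5}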

In numerical analysis, Newton's method (also known as the Newton–Raphson method), named after Isaac Newton and Joseph Raphson, is perhaps the best known method for finding successively better approximations to the roots of a real-valued function. Newton's method can often converge remarkably quickly, especially if the iteration begins "sufficiently near" the desired root. Just how near "sufficiently near" needs to be, and just how quickly "remarkably quickly" can be, depends on the problem. This is discussed in detail below. Unfortunately, when iteration begins far from the desired root, Newton's method can easily lead an unwary user astray with little warning. Thus, good implementations of the method embed it in a routine that also detects and perhaps overcomes possible convergence failures.
Given a function ƒ(x) and its derivative ƒ'(x), we begin with a first guess x0. Provided the function is reasonably well-behaved, a better approximation x1 is

x1 = x0 - f(x0) / f'(x0)

The process is repeated until a sufficiently accurate value is reached:

xn+1 = xn - f(xn) / f'(xn)

An important and somewhat surprising application is Newton–Raphson division, which can be used to quickly find the reciprocal of a number using only multiplication and subtraction.
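As a brief sketch of that idea (the starting guess and iteration count below are arbitrary choices, not prescribed by the text), applying Newton's method to f(x) = 1/x - a gives the update xn+1 = xn*(2 - a*xn), which approaches 1/a using only multiplication and subtraction:

def reciprocal(a, x0=0.1, iterations=6):
    """Approximate 1/a via Newton-Raphson division.

    Newton's method on f(x) = 1/x - a gives the update
    x_{n+1} = x_n * (2 - a * x_n): no division is needed.
    x0 is an assumed starting guess that happens to work for a = 7.
    """
    x = x0
    for _ in range(iterations):
        x = x * (2.0 - a * x)
    return x

print(reciprocal(7.0))   # ~0.142857..., i.e. 1/7

In practice the initial guess is typically obtained from the floating-point exponent of a or a small lookup table, so that the iteration is sure to converge.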


The idea of the method is as follows: one starts with an initial guess which is reasonably close to the true root, then the function is approximated by its tangent line (which can be computed using the tools of calculus), and one computes the x-intercept of this tangent line (which is easily done with elementary algebra). This x-intercept will typically be a better approximation to the function's root than the original guess, and the method can be iterated.
Suppose ƒ : [a, b] → R is a differentiable function defined on the interval [a, b] with values in the real numbers R. The formula for converging on the root can be easily derived. Suppose we have some current approximation xn. Then we can derive the formula for a better approximation, xn+1, from the geometry of the tangent line at xn. We know from the definition of the derivative at a given point that it is the slope of the tangent at that point.

That is, f'(xn) = rise / run = Δy / Δx = (f(xn) - 0) / (xn - xn+1)

Here, f' denotes the derivative of the function f. Then by simple algebra we can derive

xn+1 = xn - f(xn) / f'(xn)

We start the process off with some arbitrary initial value x0. (The closer to the zero, the better. But, in the absence of any intuition about where the zero might lie, a "guess and check" method might narrow the possibilities to a reasonably small interval by appealing to the intermediate value theorem.) The method will usually converge, provided this initial guess is close enough to the unknown zero and that ƒ'(x0) ≠ 0. Furthermore, for a zero of multiplicity 1, the convergence is at least quadratic (see rate of convergence) in a neighbourhood of the zero, which intuitively means that the number of correct digits roughly at least doubles in every step. More details can be found in the analysis section below.
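A minimal sketch of the iteration in Python; the test function f(x) = x**2 - 2, the tolerance, and the iteration cap are assumptions chosen only to demonstrate the formula xn+1 = xn - f(xn)/f'(xn) and the failure cases mentioned above:

def newton(f, f_prime, x0, tol=1e-10, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = f_prime(x)
        if dfx == 0:                     # the method fails if f'(x_n) = 0
            raise ZeroDivisionError("zero derivative; try another x0")
        x = x - fx / dfx
    raise RuntimeError("did not converge; x0 may be too far from the root")

# Example: root of f(x) = x^2 - 2, i.e. the square root of 2
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)   # ~1.41421356...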

DEFINITION

Numerical analysis is the branch of applied mathematics that studies methods for solving complicated equations using arithmetic operations, often so complex that they require a computer, in order to approximate the processes of analysis. The arithmetic model for such an approximation is called an algorithm, the set of procedures the computer executes is called a program, and the commands that carry out the procedures are called code. An example is an algorithm for deriving π by calculating the perimeter of a regular polygon as its number of sides becomes very large. Numerical analysis is concerned not just with the numerical result of such a process but with determining whether the error at any stage is within acceptable bounds.
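As a sketch of that π example (the inscribed-polygon recurrence below, starting from a hexagon and doubling the number of sides, is one standard way to carry it out; the text does not spell out the algorithm): the perimeter of a regular n-gon inscribed in a circle of radius 1 approaches 2π as n grows, using only arithmetic and square roots:

import math

def pi_by_polygons(doublings=15):
    """Approximate pi from perimeters of inscribed regular polygons.

    Start with a hexagon (side length 1 in a unit circle) and repeatedly
    double the number of sides using s_{2n} = sqrt(2 - sqrt(4 - s_n**2)).
    Half the perimeter, n * s / 2, converges to pi.
    """
    n, s = 6, 1.0
    for _ in range(doublings):
        s = math.sqrt(2.0 - math.sqrt(4.0 - s * s))
        n *= 2
    return n * s / 2.0

print(pi_by_polygons())                   # ~3.14159265...
print(abs(pi_by_polygons() - math.pi))    # error at this stage of the process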

HISTORY

The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, the Lagrange interpolation polynomial, Gaussian elimination, or Euler's method.

To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very large number of commonly used formulas and functions. Tabulated function values are less necessary when a computer is available, but the large listing of formulas can still be very handy.
The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.