**Foundations of Numerical Methods — the Pythonic way**

---

**Background**

Numerical methods are techniques for obtaining approximate solutions to problems that cannot be solved exactly. From a quantitative modelling point of view, there are many problems for which we do not have a ready formula (i.e., a closed-form solution). For such instances, we have to resort to some form of numerical approximation to arrive at an acceptable solution. Numerical techniques are used extensively in a variety of domains, including fluid mechanics, engineering, pure and applied mathematics, and finance. They are applied to a range of problems, including optimization, root finding, and interpolation.

While there are many numerical methods applied to a variety of problems, in this article we focus on only a few of them. The article has been written to help readers develop a good understanding of each numerical method, along with its implementation in Python. For clarity, charts and figures are included to help relate to the logic of each method. Further, we have attempted to implement each method in Python in two ways:

a. The hard way (i.e., pure Python approach)

b. Using a ready Python library

While on most projects we use the ready libraries available in scripting tools, it is always a good idea to explore the pure implementation as well, to gain a solid understanding of the logic behind a given technique. Clarity on the foundations of numerical methods also helps the user decide which method to use for which problem.

**Root Finding Problem**

We apply numerical methods to a root finding problem. Root finding methods follow an iterative procedure in which we define a certain starting point, or a ballpark estimate of the root. The main aim is to find the closest numerical approximation of the root. The estimate may either converge to a solution or not converge at all.

For this exercise, we assume a certain polynomial equation and attempt to solve for its roots using a few numerical techniques, implementing each in Python. Further, we assume that the function is continuous and differentiable. Having a function that is free from discontinuities plays an important role in arriving at the roots.

The root finding exercise is a form of approximation (i.e., it is hard to find the exact value). Therefore, it is imperative that we define upper and lower bounds for searching for the root, and also define a tolerance threshold that is acceptable to us.

**Numerical Method 1: Bisection Method**

This is a straightforward method for finding roots. If we have a continuous function **f(x)**, then we attempt to solve for a value of *x* that satisfies **f(x) = 0**.

Assume that we have two points *j* and *k* such that j < k, f(j) < 0, and f(k) > 0. Points j and k lie along the continuous function. We take the mid-point of *j* and *k*, say *m*, such that **m = (j + k)/2**. The bisection method then evaluates the value f(m).

A general diagram for this method is as below:

The bisection method assumes that the root will lie somewhere between points j and k. We say this because we know that f(j) < 0 and f(k) > 0.

If we find f(m) = 0, or f(m) is very close to zero within the tolerance level, then we say the root is found. If f(m) < 0, then the root lies on the interval between m and k (since f(k) > 0); otherwise it lies on the interval between j and m. On the next iteration, m replaces either j or k accordingly, and the new, shortened interval is used to evaluate the next value of m. This process of shrinking the interval continues until we find a root or reach a maximum number of iterations.

## Advantages of the bisection method:

a. It is easy to implement

b. It does not require calculation of derivatives (this at times is an advantage because there are certain functions for which computing derivatives may be challenging).

## Shortcomings of the bisection method:

a. Convergence is slow: without derivative information, each iteration only halves the search interval

b. If the interval between j and k is very large, it takes longer to arrive at a solution.

Python implementation of the bisection method:

**Using Pure Python:**
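A minimal pure-Python sketch of the loop described above. The article does not specify its polynomial, so the function f(x) = x³ − x − 2 (with a root near 1.5214) is an illustrative assumption:

```python
def bisection(f, j, k, tol=1e-8, max_iter=100):
    """Find a root of a continuous f on [j, k], assuming f(j) and f(k) differ in sign."""
    if f(j) * f(k) > 0:
        raise ValueError("f(j) and f(k) must have opposite signs")
    m = (j + k) / 2
    for _ in range(max_iter):
        m = (j + k) / 2
        fm = f(m)
        # Stop when f(m) is within tolerance or the interval has shrunk enough
        if abs(fm) < tol or (k - j) / 2 < tol:
            return m
        # Keep the half-interval on which the sign change occurs
        if f(j) * fm < 0:
            k = m
        else:
            j = m
    return m

# Illustrative polynomial: x^3 - x - 2, root near 1.5214
root = bisection(lambda x: x**3 - x - 2, 1, 2)
print(root)
```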

**Using SciPy**
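The same root can be found with SciPy's `scipy.optimize.bisect`, again using the illustrative polynomial from above:

```python
from scipy.optimize import bisect

f = lambda x: x**3 - x - 2  # illustrative polynomial, root near 1.5214
root = bisect(f, 1, 2, xtol=1e-8)
print(root)
```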

**Numerical Method 2: Newton Raphson Method**

The Newton Raphson technique requires calculation of the derivative of the function in question. The first-order derivative, represented as **f'**, gives the slope of the tangent line at the current point. The approximation of the next value of x is given as:

**x\* = x − f(x)/f'(x)**

The tangent line intersects the x-axis at x\*, i.e., where y = 0. This can be viewed as a first-order Taylor series expansion about the current point x: writing x\* = x + delta(x), we require

f(x + delta(x)) ≈ f(x) + f'(x) · delta(x) = 0, which gives delta(x) = −f(x)/f'(x).

The above process gets repeated until maximum iterations are completed or a result within tolerance threshold is reached.

A general diagram for this method is as below:

## Advantages of the Newton Raphson method:

a. It is faster than the bisection method, as it uses derivative information

b. Well suited for smooth functions that are differentiable

## Shortcomings of the Newton Raphson method:

a. At times the derivative of a function may be complicated to compute

b. It does not guarantee global convergence: with a poor starting point or multiple roots, it may converge to the wrong root or diverge.

Python implementation of the Newton Raphson method:

**Using Pure Python:**
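A minimal pure-Python sketch of the Newton Raphson update, again using the illustrative polynomial x³ − x − 2 (whose derivative is 3x² − 1):

```python
def newton_raphson(f, fprime, x, tol=1e-8, max_iter=100):
    """Iterate x <- x - f(x)/f'(x) until |f(x)| is within tolerance."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        d = fprime(x)
        if d == 0:
            raise ZeroDivisionError("Derivative is zero; cannot continue")
        x = x - fx / d  # the Newton Raphson update step
    return x

# Illustrative polynomial x^3 - x - 2 and its derivative 3x^2 - 1
root = newton_raphson(lambda x: x**3 - x - 2,
                      lambda x: 3 * x**2 - 1,
                      x=1.5)
print(root)
```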

**Using SciPy**
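The equivalent call with SciPy's `scipy.optimize.newton`, passing the derivative via `fprime` (same illustrative polynomial):

```python
from scipy.optimize import newton

f = lambda x: x**3 - x - 2        # illustrative polynomial
fprime = lambda x: 3 * x**2 - 1   # its derivative
root = newton(f, x0=1.5, fprime=fprime)
print(root)
```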

**Numerical Method 3: Secant Method**

In this approach, secant lines are used to approximate roots. A secant line is a line that intersects the curve at two points. This method can be viewed as a quasi-Newton method: the derivative in the Newton Raphson update is replaced by a finite-difference approximation. By successively drawing secant lines, we approximate the root.

Given two points a and b, the secant line through (a, f(a)) and (b, f(b)) is drawn as

**y = (f(b) − f(a))/(b − a) · (c − b) + f(b)**

Setting y = 0 and solving for c gives the next estimate:

**c = b − f(b)(b − a)/(f(b) − f(a))**

On the next iteration, the points are updated (a becomes b, b becomes c) and we continue the root finding exercise until the solution converges or the maximum number of iterations is reached.

A general diagram for this method is as below:

## Advantages of the Secant method:

a. It is faster compared to the bisection method

b. It can be faster than Newton Raphson per iteration, because it requires only one new function evaluation at each step and no derivative

## Shortcomings of the Secant method:

a. The initial guesses should be close to the root for convergence to be reliable

Python implementation of the Secant method:
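A minimal pure-Python sketch of the secant update derived above, using the same illustrative polynomial x³ − x − 2 and starting guesses 1 and 2:

```python
def secant(f, a, b, tol=1e-8, max_iter=100):
    """Secant update: c = b - f(b) * (b - a) / (f(b) - f(a))."""
    c = b
    for _ in range(max_iter):
        fa, fb = f(a), f(b)
        if fb == fa:
            raise ZeroDivisionError("f(a) == f(b); secant line is horizontal")
        c = b - fb * (b - a) / (fb - fa)
        if abs(f(c)) < tol:
            return c
        a, b = b, c  # slide the pair of points forward
    return c

# Illustrative polynomial x^3 - x - 2, starting guesses 1 and 2
root = secant(lambda x: x**3 - x - 2, 1, 2)
print(root)
```

In SciPy, calling `scipy.optimize.newton` without passing `fprime` falls back to the secant method.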

**Application to Finance**

The idea of root finding is frequently used in finance for applications including:

a. Implying bond yields when prices are given

b. Numerical optimization

c. Implied volatility surface computations, and many more applications
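As a sketch of the first application, the code below implies the yield to maturity of a hypothetical bond by finding the root of price(ytm) − market price with SciPy's `bisect`. All bond parameters here (face 100, 5% annual coupon, 10 years, market price 95) are made up for illustration:

```python
from scipy.optimize import bisect

def bond_price(ytm, face=100.0, coupon=0.05, years=10):
    """Price of a hypothetical annual-coupon bond at a given yield."""
    c = face * coupon
    pv_coupons = sum(c / (1 + ytm) ** t for t in range(1, years + 1))
    pv_face = face / (1 + ytm) ** years
    return pv_coupons + pv_face

market_price = 95.0  # assumed observed price
# The implied yield is the root of bond_price(y) - market_price on (0, 1)
ytm = bisect(lambda y: bond_price(y) - market_price, 1e-6, 1.0, xtol=1e-10)
print(ytm)
```

Since the bond trades below par, the implied yield comes out above the 5% coupon rate, as expected.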

**Conclusion**

In this article we have demonstrated three techniques for numerical approximation. There are other techniques that readers may explore. Each numerical method has advantages and shortcomings vis-à-vis the rest, so we need to choose the right method to obtain optimized solutions to problems on industry projects. With good scripting tools like Python, implementing these numerical methods is like a walk in the park; however, it's important to know the underlying logic, so that we understand the features of the various numerical methods before we use them for modelling purposes.