Unit 1: Summary – Computer Based Numerical and Statistical Techniques

Introduction: Numbers and Their Accuracy

In numerical analysis, the way numbers are represented and manipulated within a computer has a profound effect on the accuracy and reliability of computations. Unlike the continuous nature of real numbers in mathematics, computers work with finite precision, which leads to several types of errors.

1. Number Representation and Computer Arithmetic

Computers represent numbers in binary form, using fixed or floating-point formats. A floating-point number has a format like:
± mantissa × base^exponent.

For example, 3.45 × 10^2 is a floating-point representation in base 10; computers themselves use base 2 (binary).

Important parameters include:

·       Precision: The number of significant digits.

·       Range: The span of magnitudes, determined by the exponent, that can be represented.
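
A quick way to see finite precision at work: decimal fractions such as 0.1 have no exact binary representation, so small discrepancies appear immediately. A minimal Python sketch using the built-in float type:

# Finite precision in action: 0.1 has no exact base-2 representation.
print(0.1 + 0.2 == 0.3)      # False
print(0.1 + 0.2)             # 0.30000000000000004

import sys
print(sys.float_info.dig)    # precision: ~15 significant decimal digits
print(sys.float_info.max)    # range: largest representable magnitude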

2. Types of Errors in Computation

Due to finite precision and round-off, various errors arise:

·       Round-off Error: Arises from storing only a limited number of digits.

·       Truncation Error: Occurs when an infinite process is approximated by a finite one (like using a few terms of a series).

·       Absolute Error (AE): |True Value – Approximate Value|

·       Relative Error (RE): |AE / True Value|

·       Percentage Error: RE × 100%

Understanding these errors is essential for designing reliable numerical algorithms.
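
These definitions translate directly into code. A minimal sketch, using the classical approximation 22/7 for π as an illustrative true-value/approximation pair:

import math

true_value = math.pi
approx = 22 / 7                                     # illustrative approximation

absolute_error = abs(true_value - approx)           # AE
relative_error = absolute_error / abs(true_value)   # RE
percentage_error = relative_error * 100             # PE

print(absolute_error)      # ~1.26e-03
print(relative_error)      # ~4.02e-04
print(percentage_error)    # ~0.04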

3. General Error Formula

The general error formula estimates how errors propagate through functions. If the input x carries an error Δx, a first-order Taylor expansion gives:

f(x + Δx) ≈ f(x) + f′(x)·Δx

so the output error is approximately f′(x)·Δx: the derivative acts as an amplification factor for the input error (Δx).
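
The estimate is easy to check numerically. A minimal sketch, with f(x) = √x and an illustrative input error Δx = 10⁻³:

import math

x, dx = 2.0, 1e-3
f = math.sqrt
df = lambda t: 0.5 / math.sqrt(t)   # f′(x) for f(x) = sqrt(x)

actual = abs(f(x + dx) - f(x))      # true output error
estimate = abs(df(x)) * dx          # first-order prediction f′(x)·Δx

print(actual, estimate)             # both ~3.54e-04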

4. Errors in Series Approximations

Mathematical functions like sin(x), cos(x), and e^x can be represented by Taylor or Maclaurin series. The approximation involves truncating the series after a few terms:

e^x ≈ 1 + x + x²/2! + x³/3! + … + xⁿ/n!

The remainder term or truncation error is:

R(x) = f⁽ⁿ⁺¹⁾(ξ)·xⁿ⁺¹ / (n+1)!

where ξ lies between 0 and x. This provides a bound on the error and helps determine how many terms are needed for desired accuracy.
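
The shrinking truncation error can be seen by comparing partial sums against math.exp. A minimal sketch (the point x = 1 and the term counts are illustrative):

import math

def exp_series(x, n):
    # Partial sum 1 + x + x²/2! + … + xⁿ/n!, with terms built incrementally.
    term, total = 1.0, 1.0
    for k in range(1, n + 1):
        term *= x / k
        total += term
    return total

x = 1.0
for n in (2, 4, 8):
    approx = exp_series(x, n)
    print(n, approx, abs(math.exp(x) - approx))   # error falls rapidly with n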

Solution of Algebraic and Transcendental Equations

Algebraic equations involve polynomials (e.g., x³ – x – 2 = 0), while transcendental equations contain non-algebraic functions like sin(x), log(x), or e^x (e.g., x = cos(x)). Finding exact roots is often impossible, so numerical methods are used.

1. Bisection Method

This is a bracketing method used when the function changes sign over an interval [a, b], i.e., f(a)·f(b) < 0.

Steps:

·       Compute midpoint c = (a + b)/2.

·       Replace the subinterval [a, b] with [a, c] or [c, b] depending on the sign change.

·       Repeat until the interval is small enough.

Pros: Guaranteed convergence, simple.
Cons: Slow, linear convergence.

2. Iteration Method (Fixed Point Iteration)

Given f(x) = 0, rearrange as x = g(x) and iterate:

xₙ₊₁ = g(xₙ)

Convergence Criteria:

·       If |g′(x)| < 1 in a neighborhood of the root, the method converges.

·       It is important to choose the correct form of g(x).

Pros: Simple and fast for suitable functions.
Cons: May diverge if the function is not well-behaved.
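
The lesson’s example x = cos(x) is already in the form x = g(x), and |g′(x)| = |sin(x)| < 1 near the root, so the iteration converges. A minimal sketch (starting point and tolerance are illustrative):

import math

def fixed_point(g, x0, tol=1e-8, max_iter=100):
    x = x0
    for _ in range(max_iter):
        x_new = g(x)                  # xₙ₊₁ = g(xₙ)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

print(fixed_point(math.cos, 0.5))     # ~0.7390851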

3. Method of False Position (Regula Falsi)

Combines the bisection approach with linear interpolation. Let x₀ and x₁ be such that f(x₀)·f(x₁) < 0.

Root approximation:

x₂ = x₁ – [f(x₁)·(x₁ – x₀)] / [f(x₁) – f(x₀)]

Repeat the process while adjusting the interval.

Pros: Faster than bisection.
Cons: Can stagnate (converges slowly in some cases).
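
A minimal sketch, using the lesson’s example cubic f(x) = x³ − x − 2 (tolerance is illustrative):

def false_position(f, x0, x1, tol=1e-8, max_iter=100):
    for _ in range(max_iter):
        # Linear interpolation between the bracketing points.
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(f(x2)) < tol:
            return x2
        if f(x0) * f(x2) < 0:         # root lies in [x0, x2]
            x1 = x2
        else:                         # root lies in [x2, x1]
            x0 = x2
    return x2

print(false_position(lambda x: x**3 - x - 2, 1, 2))   # ~1.5213797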

4. Newton-Raphson Method

This powerful open method uses the function and its derivative:

xₙ₊₁ = xₙ – f(xₙ) / f′(xₙ)

Conditions:

·       Function must be differentiable.

·       Initial guess must be close to the actual root.

Pros: Very fast (quadratic convergence).
Cons: Fails if f′(x) is zero or near-zero.
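
A minimal sketch for the same cubic, f(x) = x³ − x − 2, whose derivative is f′(x) = 3x² − 1 (starting guess and tolerance are illustrative):

def newton(f, df, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        dfx = df(x)
        if dfx == 0:
            raise ZeroDivisionError("f′(x) is zero; the method fails here")
        x_new = x - f(x) / dfx        # xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

f = lambda x: x**3 - x - 2
df = lambda x: 3 * x**2 - 1
print(newton(f, df, 1.5))             # ~1.5213797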

5. Methods for Complex Roots

When a polynomial has complex roots, standard real-valued methods may not suffice. Durand-Kerner and Bairstow’s methods are commonly used.

In Bairstow’s method, a quadratic factor is iteratively approximated and divided out of the polynomial; each extracted quadratic yields a pair of roots, which may be complex conjugates.

Graeffe’s Method (Root Squaring Method): Transforms a polynomial so that the magnitudes of its roots become well separated by squaring:

·       Repeated squaring increases the difference in magnitude of roots.

·       Used to approximate complex and multiple roots.
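
Of these, Durand-Kerner is the simplest to sketch, since Python’s built-in complex arithmetic handles the bookkeeping. A minimal sketch for a monic polynomial (the spiral of starting guesses is the conventional choice; the cubic is illustrative):

def durand_kerner(coeffs, tol=1e-10, max_iter=200):
    # coeffs = [1, aₙ₋₁, …, a₀] of a monic degree-n polynomial.
    n = len(coeffs) - 1
    p = lambda z: sum(c * z ** (n - i) for i, c in enumerate(coeffs))
    roots = [(0.4 + 0.9j) ** k for k in range(n)]   # distinct complex guesses
    for _ in range(max_iter):
        new = []
        for i, z in enumerate(roots):
            denom = 1
            for j, w in enumerate(roots):
                if j != i:
                    denom *= z - w
            new.append(z - p(z) / denom)            # Weierstrass correction
        if all(abs(a - b) < tol for a, b in zip(new, roots)):
            return new
        roots = new
    return roots

# x³ − x − 2 = 0: one real root (~1.5214) and a complex-conjugate pair.
print(durand_kerner([1, 0, -1, -2]))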

6. Rate of Convergence

This measures how quickly an iterative method approaches the actual root.

For a method converging to a root r with iterates {xₙ}, define the error eₙ = |xₙ – r|.

·       Linear convergence: eₙ₊₁ ≈ C·eₙ

·       Quadratic convergence: eₙ₊₁ ≈ C·eₙ²

·       Super-linear: faster than linear, but not quadratic.

Newton-Raphson shows quadratic convergence under ideal conditions, while bisection has linear convergence.
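
The difference is easy to observe: each Newton step roughly doubles the number of correct digits. A minimal sketch for f(x) = x³ − x − 2, where the reference root is obtained by simply running the iteration to machine precision:

f = lambda x: x**3 - x - 2
df = lambda x: 3 * x**2 - 1
step = lambda x: x - f(x) / df(x)     # one Newton step

root = 1.5
for _ in range(50):                   # converge to machine precision
    root = step(root)

x = 1.2
for n in range(5):
    x = step(x)
    print(n, abs(x - root))           # errors behave like eₙ₊₁ ≈ C·eₙ²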

7. Polynomial Equations

Solving P(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + … + a₀ = 0 involves:

·       Using methods like synthetic division, Horner’s method, or factorization for lower degrees.

·       Iterative numerical methods for higher degrees.

Polynomial root-finding is essential in various applications like control systems, signal processing, etc.
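
Horner’s method in particular is worth a concrete sketch: it evaluates a degree-n polynomial with only n multiplications by repeatedly folding in coefficients (the cubic below is illustrative):

def horner(coeffs, x):
    # coeffs ordered from the leading coefficient aₙ down to a₀.
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# P(x) = x³ − x − 2 at x = 2: ((1·2 + 0)·2 − 1)·2 − 2 = 4.
print(horner([1, 0, -1, -2], 2))      # 4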

Key Learning Outcomes Covered

Intuitive and Working Understanding

Students develop a working knowledge of how to apply numerical algorithms to solve real-world problems like root finding, with a clear grasp of error behavior and precision constraints.

Implementation Experience

Each method discussed can be implemented using programming languages like Python, C++, or MATLAB, encouraging students to translate mathematical logic into computer-executable code.

Example:

# Bisection Method in Python
def f(x): return x**3 - x - 2

def bisection(a, b, tol):
    # Assumes f(a)*f(b) < 0 so that [a, b] brackets a root.
    while (b - a)/2 > tol:
        c = (a + b)/2
        if f(c) == 0:
            return c              # landed exactly on the root
        elif f(a)*f(c) < 0:
            b = c                 # root lies in [a, c]
        else:
            a = c                 # root lies in [c, b]
    return (a + b)/2
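
A quick call confirms the implementation; the interval [1, 2] is a valid bracket because f(1) = −2 and f(2) = 4 have opposite signs:

print(bisection(1, 2, 1e-6))   # ~1.521380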

Tracing and Analyzing Errors

Understanding how errors accumulate, and how to predict them using Taylor expansions and relative error formulas, is crucial for numerical reliability. Students learn when and why an algorithm may fail or give inaccurate results.

Application of Statistical Methods

Numerical root-finding often supports statistical algorithms like regression and optimization, which are core to data science and machine learning. For instance, nonlinear least-squares problems are typically solved by iterative numerical methods.

Demonstrating Real Applications

Each method is tied to practical problems:

·       Engineering design optimization.

·       Physics simulations (e.g., solving Kepler’s equation).

·       Economics (equilibrium computation).

·       Computer Graphics (intersection of curves).

Conclusion

By the end of this module, students should confidently:

·       Understand the mathematical and computational foundations of numerical methods.

·       Implement them on computers for solving complex real-world problems.

·       Identify, predict, and mitigate numerical errors.

·       Bridge numerical methods with statistical and analytical tasks in computational applications.

 
